[racket] Style and Performance question From: Robby Findler (robby at eecs.northwestern.edu) Date: Mon May 9 23:37:25 EDT 2011 On Mon, May 9, 2011 at 10:25 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote: > On Mon, May 9, 2011 at 11:19 PM, Robby Findler > <robby at eecs.northwestern.edu> wrote: >> Even without typed racket, this: >> (define (my-sqrt y) >> (let loop [(x (/ y 2.0))] >> is much faster than this: >> (define (my-sqrt y) >> (let loop [(x (/ y 2))] > Yes, this is one of the optimizations TR does. :) It is not a meaning preserving transformation to turn the 2 into a 2.0 in this function, I don't think. Also, Matthias explicitly wrote "2.0" in the tr version. He didn't show the other version, but my timing results suggest he wrote "2" in the other one (and Patrick certainly did). > As to why TR is so much faster on this program: this is exactly the > sort of program that TR excels at -- generic arithmetic on floats, > which we can turn into fast machine instructions. You won't find a > better case for TR than this, except if you add complex numbers. I don't think that this is accurate, Sam. This is the difference I see and it is more in line with what I would expect given how I think TR inserts unsafe calls, etc. 
[robby at gaoping] ~/git/plt/src/build$ racket ~/tmp-tr.rkt
cpu time: 187 real time: 221 gc time: 6
[robby at gaoping] ~/git/plt/src/build$ racket ~/tmp.rkt
cpu time: 301 real time: 400 gc time: 103
[robby at gaoping] ~/git/plt/src/build$ cat ~/tmp.rkt
#lang racket/base

(define epsilon 1e-7)

(define (my-sqrt y)
  (let loop [(x (/ y 2.0))]
    (let [(error (- y (* x x)))]
      (if (< (abs error) epsilon)
          x
          (loop (+ x (/ error 2 x)))))))

(define (time-and-compare size count)
  (define n (build-list size (lambda (_) (random 10))))
  (collect-garbage)
  (collect-garbage)
  (collect-garbage)
  (define l
    (time
     (for ([x (in-range 0 count)])
       (for-each my-sqrt n)))))

(time-and-compare 100 2000)
[robby at gaoping] ~/git/plt/src/build$ cat ~/tmp-tr.rkt
#lang typed/racket

(define epsilon 1e-7)

(: my-sqrt : Natural -> Real)
(define (my-sqrt y)
  (let loop [(x (/ y 2.0))]
    (let [(error (- y (* x x)))]
      (if (< (abs error) epsilon)
          x
          (loop (+ x (/ error 2 x)))))))

(: time-and-compare : Natural Natural -> Void)
(define (time-and-compare size count)
  (define n (build-list size (lambda (_) (random 10))))
  (collect-garbage)
  (collect-garbage)
  (collect-garbage)
  (define l
    (time
     (for ([x (in-range 0 count)])
       (for-each my-sqrt n)))))

(time-and-compare 100 2000)
Posted on the users mailing list.
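The gap Robby measures comes from exact versus inexact arithmetic: seeding the loop with (/ y 2) keeps every intermediate value an exact rational, while (/ y 2.0) keeps everything in floats. A rough Python analogue of the same effect, using fractions.Fraction to stand in for Racket's exact rationals (my_sqrt and its seed argument are illustrative, not from the thread):

```python
from fractions import Fraction

EPSILON = 1e-7

def my_sqrt(y, seed):
    # The Newton-style iteration from the thread: x <- x + (y - x*x)/(2*x).
    x = seed
    while True:
        error = y - x * x
        if abs(error) < EPSILON:
            return x
        x = x + error / (2 * x)

# Float seed: every intermediate value stays a machine float (fast).
approx = my_sqrt(2, 2 / 2.0)

# Exact-rational seed: numerators and denominators grow at every step,
# so each iteration gets more expensive -- the analogue of Racket's
# exact arithmetic when the loop is seeded with (/ y 2).
exact = my_sqrt(2, Fraction(2, 2))

print(float(approx), float(exact))
```

Both seeds converge to the same root; the difference is in how much work each iteration costs, which is the effect the timings above show.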
{"url":"http://lists.racket-lang.org/users/archive/2011-May/045380.html","timestamp":"2014-04-18T14:19:36Z","content_type":null,"content_length":"8312","record_id":"<urn:uuid:50d707f4-ed03-4257-be8e-a78ee2b9bc66>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial fraction method (in s-space)

The following is a part of a current I(s) (in $s$-space): What am I doing wrong in using this formula for finding $A$ and $B$: $a$ is a pole. I easily find A, but not B: end of PROBLEM. The solution (which is correct) is: What am I doing wrong inside the PROBLEM section? Or is the formula for $A$ (or $B$) wrong?

The mistake is at the beginning: $B/L=\displaystyle{\lim_{s\rightarrow b}(s-b)F(s)}$. Last edited by ojones; April 19th 2011 at 01:24 PM.

You wrote this, right? So, is the formula I used for $A$ wrong? Why do you divide $B$ by $L$ ( B/L )? Seeing the correct solution, shouldn't you multiply by $L$?

Yes, I wrote that. For some reason Math Help Forum wasn't compiling LaTeX at the time I wrote it. I'm still having trouble with their LaTeX and so I won't use it. No, the formula for A is correct. You divide B by L because sL is in the denominator of F(s). You don't multiply by L; B/L gives the correct answer. Last edited by ojones; April 19th 2011 at 03:33 PM.

Aren't s and ( $R_2$ + sL) the two denominators of $F(s)$? Or is $F(s)$ something entirely different (than the one I used inside PROBLEM tags)? In short, what am I doing wrong inside the PROBLEM section? Thank you for the help.

Why are you bothering with the formalism of limits? This is just a (very trivial) partial fraction decomposition you should have learned in precalculus -- You have the following after clearing fractions:

(1). A * (R + sL) + B * (s) = U

Let s = 0 => A = U/R
Let s = -R/L => B = -UL/R

Therefore, after substitution for A and B into your original decomposition, we obtain: (A/s) + (B/(R + sL)) = (U/(Rs)) - (UL/(R(R+sL))). The answer is easily verifiable by recombining fractions.

I'm not questioning whether the solution you (and ojones) gave is correct (and that the problem is (very) trivial). But what the heck is wrong with "the limit formula" (or my way of (blindly) using it)?
Because you're not setting up the correct expression for the limit. It should be the limit as s => -R/L of the expression [U/s] because the factors (R + sL) will cancel. Where you obtain the factor (s + R/L) I have no idea; this is why you don't get a perfect cancellation, have to combine denominators and introduce an extra L. By the way, this is known as the Heaviside "cover up" method. Here is a helpful resource you should consider: http://www.math-cs.gordon.edu/course.../heavyside.pdf. So now, you can forget about this limit formalism and just "cover up" the factors (only applies when all factors are linear).

I am looking at what's in the PROBLEM section. The statement B=lim (s-b)F(s) is not correct. Why do you think it should be? (LaTeX still not working. Fix it MHF!)
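The cover-up computation above can be checked mechanically: to get each coefficient, delete ("cover up") one factor of the denominator and evaluate what remains at that factor's root. A small exact-arithmetic sketch in Python, with made-up sample values for U, R, and L (none of these numbers come from the thread):

```python
from fractions import Fraction

# Illustrative sample values, not from the thread.
U, R, L = Fraction(10), Fraction(4), Fraction(2)

def F(s):
    # The function being decomposed: F(s) = U / (s * (R + s*L)).
    return U / (s * (R + s * L))

# Cover-up method: A = lim_{s->0} s*F(s), B = lim_{s->-R/L} (R + s*L)*F(s).
A = U / R            # covering the factor s and evaluating at s = 0
pole = -R / L
B = U / pole         # covering (R + s*L): what remains is U/s at s = -R/L

# B comes out as -U*L/R, matching the correct solution quoted above.
assert B == -U * L / R

# The decomposition A/s + B/(R + s*L) reproduces F(s) at any test point.
s = Fraction(3)
assert A / s + B / (R + s * L) == F(s)
```

The key point of the reply is visible in the code: the covered factor is (R + sL) itself, so what remains is U/s, not an expression involving (s + R/L).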
{"url":"http://mathhelpforum.com/calculus/177988-partial-fraction-method-s-space.html","timestamp":"2014-04-18T13:42:10Z","content_type":null,"content_length":"53626","record_id":"<urn:uuid:9a229da2-3157-4251-921c-ebe2207250ce>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
West Chester, PA Math Tutor

Find a West Chester, PA Math Tutor

...I believe that everyone can learn and enjoy math. All they need is someone to show them math isn't all bad and can be fun once you get the hang of it. I have taught at an inner city school, a rural school, and a private school, so I have dealt with every kind of student.
6 Subjects: including algebra 1, algebra 2, calculus, geometry

...I am currently a Junior, enrolled in the Elementary Education program at West Chester University of Pennsylvania. I have experience working at two different day cares, where I often assist school-age children with their homework. Along with the rigorous pedagogy curriculum, I also have experience teaching lessons and creating lesson plans to gain practice.
12 Subjects: including geometry, reading, GED, grammar

...I started all four years of high school at Wilson in Reading, PA. I was named to the All-Berks Honorable Mention team twice. I also played three years at a Division III college in NYC.
20 Subjects: including calculus, precalculus, swimming, basketball

...Geometry is a foundation class in which so many skills are gained. These skills such as logic, reasoning and spatial geometry are useful not only in future math classes, but in many different careers. Logic and reasoning are used in science, computer programming, medicine, and so many others; while spatial reasoning is used in interior design, architecture and landscaping.
30 Subjects: including calculus, writing, statistics, linear algebra

...I expect a lot out of myself and therefore expect a lot from my students, thus if you are not completely satisfied with my session I will not bill you. But I also want to hear from you and provide feedback on my lessons. I also expect a 2-hour cancellation notice with rescheduling considerations.
12 Subjects: including algebra 1, physiology, anatomy, geometry
{"url":"http://www.purplemath.com/west_chester_pa_math_tutors.php","timestamp":"2014-04-18T13:51:26Z","content_type":null,"content_length":"24097","record_id":"<urn:uuid:7461d31f-ab76-48e9-9d11-fa795d3bbf7a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Style for pad implementation in 'pad' namespace or functions under np.lib

Travis Oliphant travis@continuum... Thu Mar 29 11:13:54 CDT 2012

While namespaces are a really good idea, I'm not a big fan of both "module" namespaces and underscore namespaces. It seems pretty redundant to me to have pad.pad_mean. On the other hand, one could argue that pad.mean could be confused with calculating the mean of a padded array. So, it seems like the function names need to be called something more than just "mean, median, etc." Something like padwith_mean, padwith_median, etc. actually makes more sense. Or pad.with_mean, pad.with_median. The with_ in this case is not really a "namespace", it's an indication of functionality.

With that said, NumPy was designed not to be "deep" in terms of naming. We should work hard to ensure that functions and Classes to be instantiated by the user are no more than 2 levels down (i.e. either in the NumPy namespace or in the numpy.<subpackage> namespace). So, either we need to move pad into a new subpackage and call things or keep the functions in numpy.lib. I don't think this functionality really justifies an entire new sub-package in NumPy, though. So, I would vote for keeping things accessible as

1) The functions in pad.py be imported into the lib namespace
2) The functions all be renamed to padwith_mean

On Mar 29, 2012, at 9:55 AM, Tim Cera wrote:

> On Wed, Mar 28, 2012 at 6:08 PM, Charles R Harris <charlesr.harris@gmail.com> wrote:
> I think there is also a question of using a prefix pad_xxx for the function names as opposed to pad.xxx.
> If I had it as pad.mean, pad.median, ...etc. then someone could
> from numpy.pad import *
> a = np.arange(5)
> # then you would have a confusing
> b = mean(a, 2)
> Because of that I would vote for 'pad.pad_xxx' instead of 'pad.xxx'.
In fact at one time I named the functions 'pad.pad_with_xxx' which was a little overkill.

> Kindest regards,
> Tim
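For what it's worth, the API that numpy eventually shipped sidesteps the naming debate: a single np.pad function taking a mode argument (np.pad(a, width, mode='mean')) rather than a family of names. A minimal pure-Python sketch of the "pad with mean" behaviour being discussed (pad_with_mean is a made-up name for illustration, not the numpy one):

```python
def pad_with_mean(seq, width):
    # Pad a 1-D sequence on both sides with the mean of the whole sequence.
    mean = sum(seq) / len(seq)
    return [mean] * width + list(seq) + [mean] * width

print(pad_with_mean([1, 2, 3], 2))  # -> [2.0, 2.0, 1, 2, 3, 2.0, 2.0]
```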
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-March/061512.html","timestamp":"2014-04-18T21:47:07Z","content_type":null,"content_length":"5941","record_id":"<urn:uuid:0705f667-54a7-4b64-860d-6869d7ffb04c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
[Solution] [Quiz] Turtle Graphics (#104) Here is my quick solution. ( Quick, so, it will fail the unit test, since it does not check the parameters and throw the expected exception; however, it shows the example graphics correctly ) # Created by Morton Goldberg on November 02, 2006. # Modified on November 14, 2006 # turtle.rb # An implementation of Turtle Procedure Notation (TPN) as described in # H. Abelson & A. diSessa, "Turtle Geometry", MIT Press, 1981. # Turtles navigate by traditional geographic coordinates: X-axis pointing # east, Y-axis pointing north, and angles measured clockwise from the # Y-axis (north) in degrees. class Turtle include Math # turtles understand math methods DEG = Math: attr_accessor :track alias run instance_eval def initialize # Place the turtle at [x, y]. The turtle does not draw when it changes # position. def xy=(coords) @xy = coords.dup # Set the turtle's heading to <degrees>. def heading=(degrees) @heading = degrees # Raise the turtle's pen. If the pen is up, the turtle will not draw; # i.e., it will cease to lay a track until a pen_down command is given. def pen_up @pen = false # Lower the turtle's pen. If the pen is down, the turtle will draw; # i.e., it will lay a track until a pen_up command is given. def pen_down @pen = true # Is the pen up? def pen_up? # Is the pen down? def pen_down? # Places the turtle at the origin, facing north, with its pen up. # The turtle does not draw when it goes home. def home @xy = [0.0, 0.0] @heading = 0.0 # Homes the turtle and empties out it's track. def clear @track = [] # Turn right through the angle <degrees>. def right(degrees) @heading += degrees # Turn left through the angle <degrees>. def left(degrees) # Move forward by <steps> turtle steps. def forward(steps) x = @xy[0] + steps * cos((90 - @heading) * DEG) y = @xy[1] + steps * sin((90 - @heading) * DEG) # Move backward by <steps> turtle steps. def back(steps) # Move to the given point. def go(pt) if @pen if !@track.empty? 
&& @track.last.last == @xy @track.last << pt.dup @track << [@xy.dup, pt.dup] @xy = pt.dup # Turn to face the given point. def toward(pt) @heading = 90 - acos((pt[0] - @xy[0]) / distance(pt)) / DEG # Return the distance between the turtle and the given point. def distance(pt) sqrt((@xy[0] - pt[0]) ** 2 + (@xy[1] - pt[1]) ** 2) # Traditional abbreviations for turtle commands. alias fd forward alias bk back alias rt right alias lt left alias pu pen_up alias pd pen_down alias pu? pen_up? alias pd? pen_down? alias set_h heading= alias set_xy xy= alias face toward alias dist distance
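The interesting design choice in go is how the track is recorded: consecutive pen-down segments that share an endpoint are merged into one polyline rather than stored as separate two-point segments. A small Python sketch of just that idea (MiniTurtle is hypothetical, written only to mirror the Ruby class above):

```python
import math

class MiniTurtle:
    DEG = math.pi / 180.0

    def __init__(self):
        self.xy = (0.0, 0.0)
        self.heading = 0.0   # degrees clockwise from north
        self.pen = False
        self.track = []

    def forward(self, steps):
        x = self.xy[0] + steps * math.cos((90 - self.heading) * self.DEG)
        y = self.xy[1] + steps * math.sin((90 - self.heading) * self.DEG)
        self._go((x, y))

    def right(self, degrees):
        self.heading += degrees

    def _go(self, pt):
        if self.pen:
            if self.track and self.track[-1][-1] == self.xy:
                self.track[-1].append(pt)         # extend the current polyline
            else:
                self.track.append([self.xy, pt])  # start a new polyline
        self.xy = pt

t = MiniTurtle()
t.pen = True
for _ in range(4):
    t.forward(10)
    t.right(90)
print(len(t.track), len(t.track[0]))  # -> 1 5
```

Drawing a square produces a single five-point polyline instead of four disconnected segments, which is exactly what a renderer of the track would want.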
{"url":"http://www.velocityreviews.com/forums/t836138-solution-quiz-turtle-graphics-104-a.html","timestamp":"2014-04-19T09:49:35Z","content_type":null,"content_length":"28418","record_id":"<urn:uuid:66023010-7f87-4581-b385-5f02b545f6b4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Are there topological restrictions to the existence of almost quaternionic structures on compact manifolds?

I start with some background, but people familiar with the subject may jump directly to the question.

Let $M^{4n}$ be a compact oriented smooth manifold. Recall that an almost hypercomplex structure on $M$ is a 3-dimensional sub-bundle $Q\subset End(TM)$ spanned by three endomorphisms $I$, $J$ and $K$ satisfying the quaternionic identities: $I^2=J^2=-Id$, $IJ=-JI=K$. An almost quaternionic structure on $M$ is a 3-dimensional sub-bundle $Q\subset End(TM)$ which is locally spanned by three endomorphisms with the above property. In both cases one may assume (by an averaging procedure) that $M$ is endowed with a Riemannian metric $g$ compatible with $Q$ in the sense that $Q\subset End^-(TM)$, i.e. $I$, $J$ and $K$ are almost Hermitian. Using this one sees that an almost hypercomplex or quaternionic structure corresponds to a reduction of the structure group of $M$ to $\mathrm{Sp}(n)$ or $\mathrm{Sp}(1)\mathrm{Sp}(n)$ respectively, but this is not relevant for the question below.

Notice that in dimension $4$ every manifold has an almost quaternionic structure (since $\mathrm{Sp}(1)\mathrm{Sp}(1)=\mathrm{SO}(4)$), but there are well-known obstructions to the existence of almost hypercomplex structures. For example $S^4$ is not even almost complex.

Finally, here comes the question: Are there any known topological obstructions to the existence of almost quaternionic structures on compact manifolds of dimension $4n$ for $n\ge 2$?

EDIT: Thomas Kragh has shown in his answer that there is no almost quaternionic structure on the sphere $S^{4n}$ for $n\ge 2$. I have found further obstructions in the literature and summarized them in my answer below.

at.algebraic-topology dg.differential-geometry quaternionic-geometry

Isn't the case of $S^8$ (or $S^{2n}$ for $n \geq 4$) also excluded since they don't admit almost complex structures?
– Gunnar Magnusson Jan 18 '11 at 12:07

No, this is the point: almost quaternionic does not imply almost complex (see the example of $S^4$). More generally, the quaternionic projective spaces $\mathbb{H}\mathrm{P}^n$ are quaternion-Kähler, but have no almost complex structure (Hirzebruch, 1953). – Andrei Moroianu Jan 18 '11 at 12:16

What do you mean by $Sp(1)Sp(n)$? Since $Sp(1)$ is a sub-Lie-group of $Sp(n)$ this is with the obvious definition simply $Sp(n)$ again. – Thomas Kragh Jan 18 '11 at 19:20

$Sp(1) Sp(n)$ is shorthand notation in this context denoting the Lie group $Sp(1) \times Sp(n) / \{\pm 1\}$. – Spiro Karigiannis Jan 18 '11 at 19:28

@Thomas: this is standard notation, although, of course, slightly misleading. In fact $Sp(1)$ is obtained by right multiplication with unit quaternions on $\mathbb{H}^n$, while $Sp(n)$ is the centralizer of $Sp(1)$, and is given by left multiplication with matrices with quaternionic entries. The diagonal $Sp(1)\subset Sp(n)$ is of course different from the former $Sp(1)$! – Andrei Moroianu Jan 18 '11 at 19:35

3 Answers

It seems to me, if I understood the comments to my comment correctly, that the map $$\mathrm{Sp}(1) \times \mathrm{Sp}(n) \to \mathrm{SO}(4n)$$ induced by right unit quaternionic multiplication on $\mathbb{H}^n$ of the left factor and right matrix multiplication on $\mathbb{H}^n$ of the right factor has kernel $\{ \pm 1\}$. Since the source is simply connected it must lift to the spin group. So we have a map $$\mathrm{Sp}(1) \times \mathrm{Sp}(n) \to \mathrm{Spin}(4n)$$.
Covering the map $$\mathrm{Sp}(1)\mathrm{Sp}(n) \to \mathrm{SO}(4n)$$

Since the covering fiber is $\mathbb{Z}/2\mathbb{Z}$ and we can check that after taking the functor $B$ both fibers are $K(\mathbb{Z}/2\mathbb{Z},1)$-spaces, we see that $$\begin{matrix} B(\mathrm{Sp}(1)\times \mathrm{Sp}(n)) & \longrightarrow & B(\mathrm{Spin}(4n)) \\ \downarrow && \downarrow \\ B(\mathrm{Sp}(1)\mathrm{Sp}(n)) & \longrightarrow & B(\mathrm{SO}(4n)) \end{matrix}$$ is homotopy cartesian. So if $M$ is spinable and has an almost quaternionic structure it means that its classifying map lifts to $B(\mathrm{Sp}(1) \times \mathrm{Sp}(n))$.

Edit: The conclusion (which is now removed) was wrong, but at least it seems to simplify the picture when $M$ is spin.

Added: For spheres $S^{4n}$ we may use the above on the $4n$th homotopy group and deloop. This implies that if we had a quaternionic structure on $S^{4n}$ we would have that the image of the map $$\pi_{4n-1}(\mathrm{Sp}(1)\times \mathrm{Sp}(n) ) \to \pi_{4n-1} (\mathrm{SO}(4n))$$ contains the image of the map $\mathbb{Z} \cong \pi_{4n-1}(\Omega S^{4n}) \to \pi_{4n-1}(\mathrm{SO}(4n)) \cong \mathbb{Z}\times \mathbb{Z}$ (*) induced by the delooping of the classifying map for the tangent bundle of $S^{4n}$. We know that not having an almost hypercomplex structure implies that the image of $\pi_{4n-1}(\mathrm{Sp}(n)) \to \pi_{4n-1} (\mathrm{SO}(4n))$ never contains this image, and since $\pi_{4n-1}(\mathrm{Sp}(1))$ is torsion for $n\geq 2$ the above map cannot do so either for $n\geq 2$.

(*) $\pi_{4n-1}(\mathrm{SO}(4n)) \cong \mathbb{Z}\times\mathbb{Z}$ follows WHEN $n\geq 4$ from the paper Barratt, M. G.; Mahowald, M. E. The metastable homotopy of O(n). Bull. Amer. Math. Soc. 70 (1964), 758-760. I think this is true in general. Indeed, it is true for $n=1$, where the above is not a contradiction because there $\pi_3(\mathrm{Sp}(1))\cong \mathbb{Z}$. Andrei pointed out in a comment that this is also true for $n=1,2$.
Hope my addition makes the point clear - and that there are no errors :). – Thomas Kragh Jan 18 '11 at 23:20

Oops - my reference for $\pi_{4n-1}(SO(4n))$ only works for $n\geq 4$. So have not settled $S^8$ and $S^{12}$. – Thomas Kragh Jan 18 '11 at 23:44

I put this into the answer, and also added the extra stable $\mathbb{Z}$ in $\pi_{4n-1}(SO(4n))$ which I had forgotten (I replaced $\mathbb{Z}$ with $\mathbb{Z}\times\mathbb{Z}$). Sorry for the numerous edits. – Thomas Kragh Jan 19 '11 at 0:28

I agree that your proof works for all spheres, including for $S^8$ and $S^{12}$, using the fact that your relation (*) actually holds for all $n\ge 2$. I have nevertheless found a shorter proof based on Sutherland's theorem about almost complex structures on sphere bundles over spheres. I wrote this proof in a separate answer. – Andrei Moroianu Jan 19 '11 at

I realized that non-surjectivity of the map on $\pi_{4n}$ is detected by the Euler class. That is - the Euler class evaluates to 0 on the image. So any manifold with non-trivial Euler class restricted to $\pi_{4n}$ cannot have a quaternionic structure for $n\geq 2$. This of course does not help with $\mathbb{C}P^{2n}$. – Thomas Kragh Jan 21 '11 at 16:44

Thomas' proof for the fact that $S^{4n}$ has no almost quaternionic structure is correct, but I have found an alternative argument for this statement using the twistor space. Indeed, if $S^{4n}$ has an almost quaternionic structure $Q$, then the twistor space $S(Q)$ is an $S^2$-bundle over $S^{4n}$ whose total space has an almost complex structure. On the other hand, Theorem 1.4 in the following article by W. Sutherland shows that this can only happen for $n=1$.

Of course, the general question is still open. I do not know, in particular, whether the complex projective spaces $\mathbb{C}\rm{P}^{2n}$ have almost quaternionic structures for $n\ge 2$, although I strongly suspect they don't.

EDIT: In fact there are topological obstructions!
The first one (which I should have been aware of) is that the second Stiefel-Whitney class of an almost quaternionic manifold of real dimension $8n$ vanishes. This was first noticed by Marchiafava and Romani in 1975, then by Salamon in 1982, and thus rules out the complex projective spaces $\mathbb{C}\rm{P}^{4n}$. Moreover, in dimension 8, Cadek and Vanzura not only have found further obstructions (e.g. $4p_2(M)=p_1^2(M)+8e(M)$), but they also gave sufficient topological conditions for the existence of a $\rm{Sp}(1)\rm{Sp}(2)$ structure on 8-dimensional manifolds. Their article is available here.

I know this is a bit late, but as you mentioned Cadek's and Vanzura's paper, I'd like to point out (selfishly?) that there's also my paper which uses a bit of their work and gives some integrality conditions on the existence of quaternionic structures on closed manifolds - an example is below. I should emphasize that I really mean honest quaternionic, not just almost quaternionic here, although the referee believed that the same should hold for only almost quaternionic structures too.

Theorem: Let M be an 8-dimensional compact quaternionic manifold with Pontryagin classes $p_1(TM)$ and $p_2(TM)$ and a fundamental class $[M]$. Then the following expressions are integers: $\biggl(\frac{143}{960}p_{1}^{2}-\frac{89}{240}p_{2}\biggr)[M], \quad \biggl(-\frac{17}{480}p_{1}^{2}+\frac{71}{120}p_{2}\biggr)[M].$
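As a quick sanity check on the dimension-8 relation $4p_2(M)=p_1^2(M)+8e(M)$: the standard characteristic numbers of the quaternionic projective plane satisfy it, since $p(\mathbb{H}\rm{P}^2) = 1 + 2u + 7u^2$ and $\chi(\mathbb{H}\rm{P}^2) = 3$, with $u^2$ evaluating to 1 on the fundamental class. A one-line numeric verification (the values are the standard ones for $\mathbb{H}\rm{P}^2$, not taken from this thread):

```python
# Characteristic numbers of HP^2 on the fundamental class:
# p1^2[M] = (2u)^2 = 4, p2[M] = 7, e[M] = chi = 3.
p1_sq, p2, e = 4, 7, 3

# Cadek-Vanzura relation for almost quaternionic 8-manifolds:
assert 4 * p2 == p1_sq + 8 * e   # 28 == 4 + 24
print("HP^2 satisfies 4*p2 = p1^2 + 8*e")
```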
{"url":"http://mathoverflow.net/questions/52396/are-there-topological-restrictions-to-the-existence-of-almost-quaternionic-struc?sort=oldest","timestamp":"2014-04-19T12:17:06Z","content_type":null,"content_length":"76731","record_id":"<urn:uuid:0f3bb78e-fafc-4979-9b71-1c1da0453f8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Approximation of Wiring Delay in MOSFET LSI

Results 1 - 10 of 52

- Integration, the VLSI Journal, 1996. Cited by 103 (32 self).
"This paper presents a comprehensive survey of existing techniques for interconnect optimization during the VLSI physical design process, with emphasis on recent studies on interconnect design and optimization for high-performance VLSI circuit design under the deep submicron fabrication technologies. First, we present a number of interconnect delay models and driver/gate delay models of various degrees of accuracy and efficiency which are most useful to guide the circuit design and interconnect optimization process. Then, we classify the existing work on optimization of VLSI interconnect into the following three categories and discuss the results in each category in detail: (i) topology optimization for high-performance interconnects, including the algorithms for total wire length minimization, critical path length minimization, and delay minimization; (ii) device and interconnect sizing, including techniques for efficient driver, gate, and transistor sizing, optimal wire sizing, and simultaneous topology construction, buffer insertion, buffer and wire sizing; (iii) high-performance clock routing, including abstract clock net topology generation and embedding, planar clock routing, buffer and wire sizing for clock nets, non-tree clock routing, and clock schedule optimization. For each method, we discuss its effectiveness, its advantages and limitations, as well as its computational efficiency. We group the related techniques according to either their optimization techniques or optimization objectives so that the reader can easily compare the quality and efficiency of different solutions."

- Proceedings of the IEEE/ACM Design Automation Conference, 1998. Cited by 75 (24 self).
"A closed form solution for the output signal of a CMOS inverter driving an RLC transmission line is presented. This solution is based on the alpha power law for deep submicrometer technologies. Two figures of merit are presented that are useful for determining if a section of interconnect should be modeled as either an RLC or an RC impedance. The damping factor of a lumped RLC circuit is shown to be a useful figure of merit. The second useful figure of merit considered in this paper is the ratio of the rise time of the input signal at the driver of an interconnect line to the time of flight of the signals across the line. AS/X circuit simulations of an RLC transmission line and a five-section RC Π circuit based on a 0.25 μm IBM CMOS technology are used to quantify and determine the relative accuracy of an RC model. One primary result of this study is evidence demonstrating that a range for the length of the interconnect exists for which inductance effects are prominent. Furthermore, it ..."

- 1992. Cited by 73 (12 self).
"In the design of high performance VLSI systems, minimization of clock skew is an increasingly important objective. Additionally, wirelength of clock routing trees should be minimized in order to reduce system power requirements and deformation of the clock pulse at the synchronizing elements of the system. In this paper, we first present the Deferred-Merge Embedding (DME) algorithm, which embeds any given connection topology to create a clock tree with zero skew while minimizing total wirelength. The algorithm always yields exact zero skew trees with respect to the appropriate delay model. Experimental results show an 8% to 15% wirelength reduction over previous constructions in [17] [18]. The DME algorithm may be applied to either the Elmore or linear delay model, and yields optimal total wirelength for linear delay. DME is a very fast algorithm, running in time linear in the number of synchronizing elements. We also present a unified BB+DME algorithm, which constructs a clock tree t..."

- 2000. Cited by 69 (16 self).
"A closed-form expression for the propagation delay of a CMOS gate driving a distributed RLC line is introduced that is within 5% of dynamic circuit simulations for a wide range of loads. It is shown that the error in the propagation delay if inductance is neglected and the interconnect is treated as a distributed RC line can be over 35% for current on-chip interconnect. It is also shown that the traditional quadratic dependence of the propagation delay on the length of the interconnect for RC lines approaches a linear dependence as inductance effects increase. On-chip inductance is therefore expected to have a profound effect on traditional high-performance integrated circuit (IC) design methodologies. The closed-form delay model is applied to the problem of repeater insertion in RLC interconnect. Closed-form solutions are presented for inserting repeaters into RLC lines that are highly accurate with respect to numerical solutions. RC models can create errors of up to 30% in the total propagation delay of a repeater system as compared to the optimal delay if inductance is considered. The error between the RC and RLC models increases as the gate parasitic impedances decrease with technology scaling. Thus, the importance of inductance in high-performance very large scale integration (VLSI) design methodologies will increase as technologies scale. Index Terms—CMOS, high-performance, high-speed interconnect, propagation delay, VLSI."

- Proc. IEEE, 2001. Cited by 57 (5 self).
"... this paper, bears separate focus. The paper is organized as follows. In Section II, an overview of the operation of a synchronous system is provided. In Section III, fundamental definitions and the timing characteristics of clock skew are discussed. The timing relationships between a local data path and the clock skew of that path are described in Section IV. The interplay among the aforementioned three subsystems making up a synchronous digital system is described in Section V; particularly, how the timing characteristics of the memory and logic elements constrain the design and synthesis of clock distribution networks. Different forms of clock distribution networks, such as buffered trees and H-trees, are discussed. The automated layout and synthesis of clock distribution networks are described in Section VI. Techniques for making clock distribution networks less sensitive to process parameter variations are discussed in Section VII. Localized scheduling of the clock delays is useful in optimizing the performance of high-speed synchronous circuits. The process for determining the optimal timing characteristics of a clock distribution network is reviewed in Section VIII. The application of clock distribution networks to high-speed circuits has existed for many years. The design of the clock distribution network of certain important VLSI-based systems has been described in the literature, and some examples of these circuits are described in Section IX. In an effort to provide some insight into future and evolving areas of research relevant to high-performance clock distribution networks, some potentially important topics for future research are discussed in Section X. Finally, a summary of this paper with some concluding remarks is provided in Section XI."

- 1998. Cited by 50 (1 self).
"Design, analysis, and verification of the clock hierarchy on a 600-MHz Alpha microprocessor is presented. The clock hierarchy includes a gridded global clock, gridded major clocks, and many local clocks and local conditional clocks, which together improve performance and power at the cost of verification complexity. Performance is increased with a windowpane arrangement of global clock drivers for lowering skew and employing local clocks for time borrowing. Power is reduced by using major clocks and local conditional clocks. Complexity is managed by partitioning the analysis depending on the type of clock. Design and characterization of global and major clocks use both an AWEsim-based computer-aided design (CAD) tool and SPICE. Design verification of local clocks relies on SPICE along with a timing-based methodology CAD tool that includes data-dependent coupling, data-dependent gate loads, and resistance effects."

- IEEE Trans. Comput.-Aided Des., 1997. Cited by 37 (4 self).
"We develop an analytical delay model based on first and second moments to incorporate inductance effects into the delay estimate for interconnection lines. Delay estimates using our analytical model are within 15% of SPICE-computed delay across a wide range of interconnect parameter values. We also extend our delay model for estimation of source-sink delays in arbitrary interconnect trees. For the small tree topology considered, we observe improvements of at least 18% in the accuracy of delay estimates when compared to the Elmore model (which is independent of inductance), even though our estimates are as easy to compute as Elmore delay. The speedup of delay estimation via our analytical model is several orders of magnitude when compared to a simulation methodology such as SPICE."

- Proceedings of the ACM/IEEE Design Automation Conference, 2000.
"... Abstract—Closed-form solutions for the 50% delay, rise time, overshoots, and settling time of signals in an RLC tree are presented. These solutions have the same accuracy characteristics of the Elmore delay for RC trees and preserve the simplicity and recursive characteristics of the Elmore delay. Specif ..."
Cited by 30 (8 self) Add to MetaCart Abstract—Closed-form solutions for the 50 % delay, rise time, overshoots, and settling time of signals in an tree are presented. These solutions have the same accuracy characteristics of the Elmore delay for trees and preserves the simplicity and recursive characteristics of the Elmore delay. Specifically, the complexity of calculating the time domain responses at all the nodes of an tree is linearly proportional to the number of branches in the tree and the solutions are always stable. The closed-form expressions introduced here consider all damping conditions of an circuit including the underdamped response, which is not considered by the Elmore delay due to the nonmonotone nature of the response. The continuous analytical nature of the solutions makes these expressions suitable for design methodologies and optimization techniques. Also, the solutions have significantly improved accuracy as compared to the Elmore delay for an overdamped response. The solutions introduced here for trees can be practically used for the same purposes that the Elmore delay is used for trees. - Operations Research , 2005 "... informs ® doi 10.1287/opre.1050.0254 © 2005 INFORMS This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized geometric program (GGP), which can be transformed to a convex optimization problem and then very efficiently s ..." Cited by 27 (7 self) Add to MetaCart informs ® doi 10.1287/opre.1050.0254 © 2005 INFORMS This paper concerns a method for digital circuit optimization based on formulating the problem as a geometric program (GP) or generalized geometric program (GGP), which can be transformed to a convex optimization problem and then very efficiently solved. 
We start with a basic gate scaling problem, with delay modeled as a simple resistor-capacitor (RC) time constant, and then add various layers of complexity and modeling accuracy, such as accounting for differing signal fall and rise times, and the effects of signal transition times. We then consider more complex formulations such as robust design over corners, multimode design, statistical design, and problems in which threshold and power supply voltage are also variables to be chosen. Finally, we look at the detailed design of gates and interconnect wires, again using a formulation that is compatible with GP or GGP. - Proc. DATE , 2000 "... As the CMOS technology scaled down, the horizontal coupling capacitance between adjacent wires plays dominant part in wire load, crosstalk interference becomes a serious problem for VLSI design. We focused on delay increase caused by crosstalk. On-chip bus delay is maximized by crosstalk effect when ..." Cited by 25 (1 self) Add to MetaCart As the CMOS technology scaled down, the horizontal coupling capacitance between adjacent wires plays dominant part in wire load, crosstalk interference becomes a serious problem for VLSI design. We focused on delay increase caused by crosstalk. On-chip bus delay is maximized by crosstalk effect when adjacent wires simultaneously switch for opposite signal transition directions. This paper proposes a bus delay reduction technique by intentional skewing signal transition timing of adjacent wires. An approximated equation of bus delay shows our delay reduction technique is effective for repeater-inserted bus. The result of SPICE simulation shows that the total bus delay reduction by from 5% to 20% can be achieved.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1054115","timestamp":"2014-04-20T17:19:08Z","content_type":null,"content_length":"42086","record_id":"<urn:uuid:cbd763ce-6ff4-4054-a950-c9d1e5e779db>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Problems with Fixed Effects Regression

From: Maarten Buis <maartenlbuis@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Problems with Fixed Effects Regression
Date: Mon, 30 May 2011 12:32:52 +0200

On Mon, May 30, 2011 at 11:45 AM, Abubakr Saeed wrote:
> I'm performing a panel data analysis (400 firms, 12 years, and 6
> industries). I have two questions regarding my analysis:
> My LM test supports random individual effects. After estimating
> the Hausman test for deciding between fixed or random effects
> regression, the result was in favor of the fixed effects regression. I am
> using industry dummies which are constant over time. While running FE,
> all my industry dummies were dropped because of collinearity. What
> should I do? Should I just neglect the industry dummies or choose random effects?

When you estimate a fixed effects model you are controlling for everything that is constant within a firm. This is why Stata is dropping every variable that is constant within firms: these cannot explain anything after controlling for the fixed effects. If industry is just a control variable and you are not substantively interested in it, then you can just use a fixed effects model and leave industry out. You are not ignoring anything, as the fixed effects are already controlling for all firm-constant variables, including industry. The same is true for all other firm-constant variables, like your other variable for whether or not the firm was listed. Things will obviously get more complicated when you are substantively interested in these effects.

Hope this helps,
Maarten L.
Buis Institut fuer Soziologie Universitaet Tuebingen Wilhelmstrasse 36 72074 Tuebingen * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
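A quick way to see *why* the industry dummies get dropped: the fixed-effects ("within") estimator demeans every variable by firm, and demeaning a firm-constant column leaves exactly zero, so it has no remaining variation and is perfectly collinear with the fixed effects. A small NumPy sketch with synthetic data (not Stata output):

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 5, 4
firm = np.repeat(np.arange(n_firms), n_years)                     # firm id per row
industry = np.repeat(rng.integers(0, 2, n_firms), n_years).astype(float)
x = rng.normal(size=n_firms * n_years)                            # varies within firm

def within(v, ids):
    # subtract each firm's own mean -- the fixed-effects "within" transformation
    means = {i: v[ids == i].mean() for i in np.unique(ids)}
    return v - np.array([means[i] for i in ids])

print(np.allclose(within(industry, firm), 0.0))  # True: industry is wiped out
print(np.allclose(within(x, firm), 0.0))         # False: x still has variation
```

Anything constant within firm (industry, whether the firm is listed, and so on) is absorbed this way, which is exactly Maarten's point.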
{"url":"http://www.stata.com/statalist/archive/2011-05/msg01562.html","timestamp":"2014-04-18T00:40:05Z","content_type":null,"content_length":"9191","record_id":"<urn:uuid:1dc0732c-b820-4ee9-b481-8727e587ea0c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

January 9th 2007, 02:04 PM #1
Junior Member, Jan 2007

Hi, I just did these two problems and the domain was the same for both problems, am I overlooking something?

h(x) = 4x/Square Root(x^2 - 16)
j(x) = Square Root[4x/(x^2 - 16)]

Maybe she's trying to trick me? One more quick thing. I didn't quite know how to approach this one:

The function y = k(x) is a polynomial with x-intercepts 1, 2, 5 and 8. The solution set to k(x) is (1, 2) U (5, 8). Use this information to find the domain of y = Square Root[k(x)]

January 9th 2007, 03:44 PM #3
Global Moderator, Nov 2005, New York City

For h(x): the denominator cannot be zero, and the radical must be positive. Thus, $x>4 \mbox{ or }x<-4$.

j(x) = Square Root[4x/(x^2 - 16)]
Over here we need the term inside the radical to be positive. First, $x^2-16 \neq 0$, thus $x \neq \pm 4$. The numerator and denominator must be both positive or both negative.
1) $4x\geq 0 \to x\geq 0$ and $x^2-16 >0$, thus $x>4 \mbox{ or }x<-4$. The combination of those two leads us to $x > 4$.
2) $4x\leq 0 \to x\leq 0$ and $x^2-16 >0$, thus $x>4 \mbox{ or }x<-4$. The combination of those two leads us to $x < -4$.
Even though they are different functions they have the same solution set.

January 9th 2007, 05:55 PM #4

Typically if you want a polynomial with specific x-intercepts you can simply write them out term by term. If you have the intercepts 1, 2, 5, and 8, a polynomial that fits this is: $f(x) = (x - 1)(x - 2)(x - 5)(x - 8)$

Now we want the function $y = \sqrt{k(x)}$ to have the domain $(1, 2) \cup (5, 8)$, which means that this is the part of the (real) x-axis for which k(x) is positive. So let's graph f(x). (See the attachment below.) Note that in the graph the set $(1, 2) \cup (5, 8)$ is the region where f(x) is negative. So we need to "flip the function over" to make this the region where the function is positive. This is easy. Just define k(x) to be the negative of f(x): $k(x) = -f(x) = -(x - 1)(x - 2)(x - 5)(x - 8)$. Then $y = \sqrt{k(x)}$ has the correct domain.

Note: Technically the domain is $[1, 2] \cup [5, 8]$. I couldn't think of a way to define a finite polynomial that excludes the end-points of the sets. (Unless I'm missing the obvious.) You might wish to point this out to your teacher.
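The sign-flip trick in the last reply is easy to sanity-check numerically. Below, k(x) = -(x-1)(x-2)(x-5)(x-8) is sampled on a grid to confirm it is nonnegative exactly on [1, 2] U [5, 8], which is where sqrt(k(x)) is defined:

```python
import numpy as np

def k(x):
    # the polynomial from the reply, flipped so it is >= 0 on [1,2] U [5,8]
    return -(x - 1) * (x - 2) * (x - 5) * (x - 8)

xs = np.linspace(0.0, 9.0, 9001)
inside = ((xs >= 1) & (xs <= 2)) | ((xs >= 5) & (xs <= 8))

print(np.all(k(xs[inside]) >= 0))   # True: sqrt(k(x)) is defined on [1,2] U [5,8]
print(np.all(k(xs[~inside]) < 0))   # True: and nowhere else on the grid
```

This also shows why the endpoints belong to the domain: k vanishes there, and sqrt(0) is perfectly fine.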
{"url":"http://mathhelpforum.com/pre-calculus/9768-domains.html","timestamp":"2014-04-18T06:00:02Z","content_type":null,"content_length":"45451","record_id":"<urn:uuid:81098a99-3dac-4d06-9d31-8ab41f2f83cb>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
my first program. works perfectly but i dont understand why. few newbie questions.

08-18-2009 #1
Registered User, Join Date Aug 2009

hey there guys. i wrote a program to calculate the roots of a cubic equation using Cardano's formula, and it works great so far. i posted a thread asking for help on using the pow() function and i was having trouble getting a function prototype to work, but that's sorted out now, i think. im really happy i got this working. im so pleased i did it myself!!

now the issues im having.

1. is there a pi function in C or do i have to define PI like that each time?

2. i dont understand how my sign function works (i have no idea how i actually managed to write it with almost no problems). this is what i understand of it: i declare at the beginning of my program

    float sign(float S); /* sign function prototype */

this tells the compiler that i've declared a function to be used in main(), so when it reaches this function it knows that it exists. then at the end of my program i wrote the function itself as

    float sign(float S)
    {
        if (S > 0)
            return 1;
        else if (S < 0)
            return -1;
        return 0;
    }

okay what i understand of this is that float sign(float S) means sign(float S) is expected to produce a floating point result. im more of a mathematician so ill speak in terms more familiar to me: sign(float S) means f(x) where the input x will be a decimal/floating point number. is this correct?

the output of this function will always be 1, -1 or 0, so can i define this function as int sign(float S)? does this save space or make it "better" or something?

furthermore - is S a global variable as i have defined it outside my main body? this is to say that, could i write the sign function as int sign(float n), then when i call the function, use it as sign(S) later in the main body?

3. when i wrote this part of the code

    Q = (b*b - 3.*c)/(9.0);
    S = (2.*b*b*b - 9.*b*c + 27.*d)/(54.0);

my professor explained that the . after the numbers was important, i dont remember why.. infact i have no idea why. as practice in our lab class we wrote a quadratic program and the numbers used in calculating the discriminant were of the same form, eg: 2.*d*d/3.0. how come i need the .'s there?

4. is this program clean and easy to understand? is my constant use of if statements poor programming form? is there anything i can clean up?

this is the code

    #include <stdio.h>
    #include <math.h>

    #define PI (3.141592653589793238462643)

    float Q, S, a, b, c, d, A1, x1, x2, x3, theta;

    float sign(float S); /* sign function prototype */

    int main()
    {
        /* Get coefficients from user */
        printf("Cubic coefficient a? ");
        scanf("%f", &a);
        printf("Cubic coefficient b? ");
        scanf("%f", &b);
        printf("Cubic coefficient c? ");
        scanf("%f", &c);
        printf("Cubic coefficient d? ");
        scanf("%f", &d);

        /* Make sure a = 1; if it doesn't, recalculate the rest of the coefficients */
        if (a != 1)
        {
            b = b/a; /* values of b, c, d must be calculated before a is set to 1 */
            c = c/a;
            d = d/a;
            a = 1;
        }

        /* Now to do some calculations to be used later in the program */
        Q = (b*b - 3.*c)/(9.0);
        S = (2.*b*b*b - 9.*b*c + 27.*d)/(54.0);

        /* temporary printf calls to make sure results are being calculated correctly */
        printf("%f = Q, %f = S, %f = a, %f = b, %f = c %f = d\n", Q, S, a, b, c, d);
        printf("%f = sign(S) %f = A1\n", sign(S), A1);
        printf("%f = Q*Q*Q - S*S\n", Q*Q*Q - S*S);

        if (Q*Q*Q - S*S > 0)
        {
            theta = acos(S/sqrt(Q*Q*Q));
            x1 = -2*sqrt(Q)*cos(theta/3.) - b/3.;
            x2 = -2*sqrt(Q)*cos((theta + 2*PI)/3.) - b/3.;
            x3 = -2*sqrt(Q)*cos((theta + 4*PI)/3.) - b/3.;
            printf("The cubic has three real roots, %f = x1, %f = x2, %f = x3\n", x1, x2, x3);
        }

        if (Q*Q*Q - S*S <= 0)
        {
            A1 = -sign(S)*pow((sqrt(S*S - Q*Q*Q) + fabs(S)), 1/3.);
            if (A1 == 0)
            {
                x1 = A1 - b/3.;
                printf("The cubic has only one real root, %f = x1\n", x1);
            }
            else
            {
                x1 = A1 + Q/A1 - b/3.;
                printf("The cubic has only one real root, %f = x1\n", x1);
            }
        }

        return 0;
    }

    float sign(float S)
    {
        if (S > 0)
            return 1;
        else if (S < 0)
            return -1;
        return 0;
    }

Last edited by jdi; 08-18-2009 at 06:04 AM.

08-18-2009 #2

1) You can use the constant M_PI (defined in math.h) or, if you like, calculate it "on the fly" as the result of 4.0 * atan(1.0).

2) a) It really doesn't matter much. Either way works fine (though I've usually seen it as an int).
b) The name of a function parameter has no binding outside of the function. If you have a global variable by the same name it won't be visible within the scope of that function though, obviously.

3) The '.' simply tells the compiler that it's a floating-point constant (as opposed to an integer).

4) It looks fine, although you could probably optimize things a bit by not doing repetitious calculations (eg: Q*Q*Q is used in several parts of the program whereas it could be calculated just once), and personally I'd be sure to leave space between operators and operands (eg: Q * Q * Q).

08-18-2009 #3

1. No, but just define a constant. You may find the non-standard M_PI in math.h.

2. I'd declare it as int sign(const float s), as it makes more sense. Since floating points are approximations, why approximate on -1, 0 and 1? (Even though at least 0 can be exactly represented as a float.)

3. They are, since it makes them double constants; without them they would be integer constants and the resulting arithmetic would be different. i.e. x = (5 / 2), x is 2 since it's integer division, but x = (5. / 2) results in 2.5 since it's "floating point division".

4. It's mostly fine, but:
- Use more comments.
- Why calculate the same thing twice? i.e. Q*Q*Q*Q, why not store the result in say, q4? Since you use it more than once.
- The variable names are pretty poor and nondescript.

08-18-2009 #4

About the return type of sign(): whether it should be int or float depends on how you use it. Remember that comparing floating point numbers (especially using ==) is tricky, since the decimal representation is not what is internally used by the computer. Hence it may be that a value looking exact is not exact from the computer's point of view. On the other hand, if sign() returns an int, then be careful when you use it in calculations. Mixing floats and integers may get your result floored to the nearest integer.

Obsession to networking and protocols made me cook up these:
NSN - Network Status Notifier
epb - Ethernet Packet Bombardier
T.H.O.N.G.S - Textmode Helper On Network Getting Sniffed
Nibbles - console UDP print listener/filter
+ something else
Feel free to try, comment and improve =)

08-18-2009 #5

Why return 0 if and only if 's' equals 0? Doesn't that obfuscate the concept of signs?

    int sign(const float s)
    {
        if (s >= 0)
            return 1;   /* s is not negative */

        /* s must be negative */
        return -1;
    }
{"url":"http://cboard.cprogramming.com/c-programming/118729-my-first-program-works-perfectly-but-i-dont-understand-why-few-newbie-questions.html","timestamp":"2014-04-18T01:28:39Z","content_type":null,"content_length":"65449","record_id":"<urn:uuid:21d687f5-2513-43c9-896e-01f50aa25c33>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Principle of Computational Equivalence
Kovas Boguta, kovasb at wolfram.com
Mon Nov 19 16:06:50 EST 2007

Bill Taylor wrote:
> As it relates to theoretical computational devices, this is surely
> a rephrasing of the result that "there exist Universal Turing
> Machines",
> (or their equivalents in other computational contexts).

I've seen this line of thinking raised before, so I will try to explain how the PCE goes beyond the Church-Turing thesis (CTT) in its content as well as its implications.

The statement of the PCE is a very compact way of relating three separate ideas. Essentially there are three kinds of equivalences that the PCE asserts or assumes, each of which goes beyond any other theoretical framework or principle:

-- Equivalence between the complexity of the *behavior* of the system, as judged by methods of analysis and perception, and the computational properties of the system

The PCE relates how a system appears to behave to a more abstract quality that Wolfram calls "computational sophistication", which is not quite the same thing as universality but has other concrete implications described below. The CTT doesn't make any comment on the behavior of systems (only on their theoretical capabilities), and is therefore silent on the issue of whether, for example, rule 110 and the 2,3 machine should be universal, given the empirical fact of their complex behavior.

-- Equivalence between different computations

Whereas the CTT talks about the equivalence of systems, the PCE talks about the equivalence of specific computations performed by systems. In effect, it is a much stronger statement. A universal machine may perform a simple computation, or it may perform a sophisticated computation, depending on the program it is running. The PCE is concerned with the sophisticated computations. And it claims that one sophisticated computation is equivalent to the others in its sophistication.

A concrete consequence, not predicted by any other framework, is that these "sophisticated" computations cannot be run more efficiently by other, presumably more sophisticated computations. So the most efficient way to evolve rule 30 is just by running the rule itself, rather than some other system that is being more clever in how it computes (with the exception of, say, linear-time speedups from the use of bigger lookup tables, etc.).

(In the language of computation theory, the PCE says that the most efficient 'effective procedure' for determining the halting of a system engaged in sophisticated computation is to run the system itself.)

This phenomenon of computational irreducibility is important, because it implies that models of nature that seek to reproduce complex phenomena must themselves be capable of doing sophisticated computation. This is a theoretical justification for why computers are fundamentally necessary, as opposed to just compensating for a lack of cleverness. It is also important even in the scientific study of simple programs, since it implies that closed-form solutions to the behavior of these systems are extremely rare, and that experimental methods are really the only way to ascertain their behavior.

-- Equivalence between computational processes and physical processes

The CTT is not a statement about the physical universe, but rather a statement about human ability to come up with "procedures". The idea that the universe is computable does not come for free from the CTT. There could easily exist non-computable processes in the universe, yet we as humans could never have access to them or wrap our minds around them to the extent needed to falsify the CTT. Nevertheless the idea on its own is nowadays commonplace, and so this aspect of the PCE is not original in that respect, though it should be noted that this popular belief does not have an origin in any real principle. The PCE says more than just that the universe is computable, though.

It also claims that the origin of complexity in physical processes is in fact their computational sophistication. The PCE claims, in effect, that natural systems do not have a fundamentally different character than computational systems, and therefore the same phenomena and principles that apply in one world apply to the other. Just as phenomena first discovered in cellular automata are later revealed in Turing machines, so too phenomena from these more abstract systems will appear in their physical cousins.

Clearly many aspects of the PCE are subject to debate, and there is no shortage of people who strongly disagree with certain implications. For example, many in the computational complexity community and the complex natural systems community take issue with the PCE's marginalization of complexity classes. The wide variety of (often contradictory) opinions on the PCE I think is also an indicator that it goes beyond the safe territory of 70 year old
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-November/012305.html","timestamp":"2014-04-17T22:01:14Z","content_type":null,"content_length":"7315","record_id":"<urn:uuid:1181fea3-260a-422b-ab06-0673bd837f28>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove martingale

March 8th 2010, 01:40 PM #1

Hi all, I have been asked to prove that the following process is a martingale:

exp{aX(t) - (1/2) a^2 t}

where X(t) is a standard Brownian motion. I'm stuck on this question and would really appreciate it if anyone could give me any suggestions on how to do it. Thanks very much in advance!

March 9th 2010, 04:02 PM #2

I would start by writing the process as a stochastic differential equation using Ito's lemma. If the process is indeed a martingale, the drift term (dt) in Ito's formula should fall out. You will be left with your diffusion term. From here, you only need to prove that the SDE is integrable.
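An alternative to the SDE route is to verify the martingale property directly from the definition, using that for $s < t$ the increment $X_t - X_s$ is independent of $\mathcal{F}_s$ and normal with variance $t - s$, so its moment generating function is $\mathbb{E}[e^{aZ}] = e^{a^2 (t-s)/2}$. A sketch:

```latex
\begin{aligned}
\mathbb{E}\left[ e^{aX_t - \frac{1}{2}a^2 t} \mid \mathcal{F}_s \right]
  &= e^{aX_s - \frac{1}{2}a^2 t}\, \mathbb{E}\left[ e^{a(X_t - X_s)} \right]
     && \text{(independent increments)} \\
  &= e^{aX_s - \frac{1}{2}a^2 t}\, e^{\frac{1}{2}a^2 (t - s)}
     && (X_t - X_s \sim N(0,\; t-s)) \\
  &= e^{aX_s - \frac{1}{2}a^2 s}.
\end{aligned}
```

Integrability is immediate since $\mathbb{E}[e^{aX_t}] = e^{a^2 t/2} < \infty$, so all three martingale conditions hold.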
{"url":"http://mathhelpforum.com/calculus/132746-prove-martingale.html","timestamp":"2014-04-16T19:22:52Z","content_type":null,"content_length":"30540","record_id":"<urn:uuid:5b4effdd-8a13-4a72-9a97-b90da0f2fe24>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00254-ip-10-147-4-33.ec2.internal.warc.gz"}
Android Forums - View Single Post - .9999...=1

Originally Posted by
thats only true if you assume 0.3333... = 1/3 which (depending on context) may or may not be true

1/3 is exactly equal to .333... repeated forever, and you can easily discover that for yourself by dividing 1 by 3. If you do it manually using the standard long-division algorithm you will constantly get repeating 3's forever, until you decide you've had enough punishment.

I wanted to avoid the use of higher mathematics, but the reason behind the equality is stated right in the wiki article:

The equality of 0.999... and 1 is closely related to the absence of nonzero infinitesimals in the real number system, the most commonly used system in mathematical analysis. The equality 0.999... = 1 has long been accepted by mathematicians and is part of general mathematical education. Nonetheless, some students find it sufficiently counterintuitive that they question or reject it, commonly enough that the difficulty of convincing them of the validity of this identity has been the subject of several studies in mathematics education.

Whether anyone chooses to accept it or not is their choice, but nonetheless, it is 100% true. I often run into non-believers when I teach this topic in the Infinite Series part of Calculus II, granted the proof for that class is different given the context of the class.
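For what it's worth, the identity the thread keeps circling can be written as a one-line geometric series, which is essentially the Calculus II argument mentioned above (and the same series evaluation shows 0.333... = 1/3):

```latex
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \frac{9}{10}\cdot\frac{1}{1 - \tfrac{1}{10}} \;=\; 1,
\qquad
0.333\ldots \;=\; \sum_{k=1}^{\infty} \frac{3}{10^{k}}
\;=\; \frac{3}{10}\cdot\frac{1}{1 - \tfrac{1}{10}} \;=\; \frac{1}{3}.
```

Both use the geometric series formula $\sum_{k=1}^{\infty} r^{k} a = \frac{ar}{1-r}$ for $|r| < 1$, which is where the "no nonzero infinitesimals" property of the reals enters: the partial sums converge to exactly 1, not to something infinitesimally less.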
{"url":"http://androidforums.com/5246304-post21.html","timestamp":"2014-04-24T21:51:33Z","content_type":null,"content_length":"15913","record_id":"<urn:uuid:de9ecb87-aaa6-4c87-9a9f-be11a0eb27f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonferroni for outlier detection?

I am reading a book on time series analysis and I am having problems understanding the section about outlier detection. The authors say that when you want to know whether at a certain time $T$ there was an outlier, you should use a certain test statistic and a test with size less than $\alpha$. But when you don't know where an outlier could be and you have a time series of size $n$, then you should use the same test statistic for each point, but you should use tests of size $\alpha/n$. They say that this is an application of the conservative Bonferroni correction. I just don't understand this. Doesn't this mean that there will be lots of outliers that you detect in short time series but don't detect in large ones? After all, spam filters don't have stronger spam criteria for people with more incoming email, right?

1 Answer

If you do $n$ tests of size $\alpha/n$, then $\alpha$ is the Bonferroni bound on at least one of the tests succeeding. It is conservative because it is the worst possible bound without any further information about dependency between the tests. It is only exact if the tests are disjoint (i.e. at most one can be true at once).

Brendan and mine Bonferroni connection goes a long way... – Gil Kalai Sep 6 '11 at 15:46

Thank you for the clarifications on the Bonferroni bound! But still: would you say it is correct to use a test of size $0.5/n$, say, to find outliers in time series of size $n$? I would say it is wrong to divide by $n$ (as it would be wrong to use stronger spam criteria for people with more email traffic). – Frank Sep 7 '11 at 8:24

If your customers insist that they want to lose at most one valid email per week on average due to false spam positives, you do indeed need to apply a stronger test to those who get more email. If instead they demand that at most some fraction $p$ of their valid emails are rejected on average, then the volume of email is irrelevant. So it depends on what you are trying to achieve. – Brendan McKay Sep 8 '11 at 4:59
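The accepted answer's point — running $n$ tests each at size $\alpha/n$ bounds the chance of *any* false positive by $\alpha$ — is easy to see in a simulation with independent null p-values (a synthetic example, not data from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 20, 20000

# Under H0 (no outlier anywhere), each of the n p-values is Uniform(0, 1).
p = rng.uniform(size=(trials, n))

fwer_naive = np.mean((p < alpha).any(axis=1))      # every test at size alpha
fwer_bonf = np.mean((p < alpha / n).any(axis=1))   # Bonferroni: size alpha/n

print(fwer_naive)  # roughly 1 - 0.95**20, i.e. about 0.64
print(fwer_bonf)   # bounded by alpha; here about 0.05
```

This also answers the "short vs. long series" worry: without the correction, a long series is almost guaranteed to flag spurious outliers just by chance, so the per-point threshold must tighten as $n$ grows to keep the family-wise error at $\alpha$.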
{"url":"http://mathoverflow.net/questions/74653/bonferroni-for-outlier-detection","timestamp":"2014-04-16T22:05:11Z","content_type":null,"content_length":"52822","record_id":"<urn:uuid:9acb845c-d2f8-4182-8315-ad86f4866a3c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Properties Of Rational Numbers

We will discuss different properties of rational numbers in this session. Rational numbers are the numbers which can be expressed in the form p/q, where p and q are integers and q is not equal to zero. Here we will take up the properties of rational numbers:

1. Closure property: By closure we mean that if there are two rational numbers, then:
The closure property of addition holds true, which means that the sum of two rational numbers is also a rational number.
The closure property of subtraction holds true, which means that if there exist two rational numbers, then the difference of the two rational numbers is also a rational number.
The closure property of multiplication holds true, which means that if there exist two rational numbers, then the product of the two rational numbers is also a rational number.
The closure property of division holds true only for a nonzero divisor: if there exist two rational numbers and the second is not zero, then the quotient of the two rational numbers is also a rational number. Division by zero is undefined, so the rational numbers are not closed under division in general.

2. Commutative property: The commutative property of rational numbers holds true for addition and multiplication but does not hold true for subtraction and division. It means that if p1/q1 and p2/q2 are any two rational numbers, then:
p1/q1 + p2/q2 = p2/q2 + p1/q1
p1/q1 × p2/q2 = p2/q2 × p1/q1
but in general
p1/q1 − p2/q2 ≠ p2/q2 − p1/q1
p1/q1 ÷ p2/q2 ≠ p2/q2 ÷ p1/q1

3. Additive identity: According to the additive identity property, if we have a rational number p/q, then there exists a number zero (0) such that if we add zero to any number, the result remains unchanged. So we write it as: p/q + 0 = p/q

4. Multiplicative identity: According to the multiplicative identity property of rational numbers, if we have a rational number p/q, then there exists a number one (1) such that if we multiply any number by one, the result remains unchanged. So we write it as: p/q × 1 = p/q

5. Multiplication by zero: By this property we mean that there exists a number zero such that if we multiply any rational number by zero, the product is zero itself. So if we have p/q as a rational number, then we say: p/q × 0 = 0

6. Associative property: The associative property of rational numbers holds true for addition and multiplication but does not hold true for subtraction and division. It means that if p1/q1, p2/q2, and p3/q3 are any three rational numbers, then:
(p1/q1 + p2/q2) + p3/q3 = p1/q1 + (p2/q2 + p3/q3)
(p1/q1 × p2/q2) × p3/q3 = p1/q1 × (p2/q2 × p3/q3)
but in general
(p1/q1 − p2/q2) − p3/q3 ≠ p1/q1 − (p2/q2 − p3/q3)
(p1/q1 ÷ p2/q2) ÷ p3/q3 ≠ p1/q1 ÷ (p2/q2 ÷ p3/q3)

7. Distributive property: The distributive property of multiplication over addition and subtraction of rational numbers holds true, which states:
p1/q1 × (p2/q2 + p3/q3) = (p1/q1 × p2/q2) + (p1/q1 × p3/q3)
p1/q1 × (p2/q2 − p3/q3) = (p1/q1 × p2/q2) − (p1/q1 × p3/q3)
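All of the properties above can be spot-checked with exact rational arithmetic; the sketch below uses Python's `fractions.Fraction` type, and the particular values of a, b, and c are just illustrative choices.

```python
from fractions import Fraction

a, b, c = Fraction(2, 3), Fraction(-5, 4), Fraction(7, 6)

# Closure: sums, differences, products, and (nonzero) quotients stay rational
assert isinstance(a + b, Fraction) and isinstance(a - b, Fraction)
assert isinstance(a * b, Fraction)
assert isinstance(a / b, Fraction)   # b != 0; division by zero is undefined

# Commutativity holds for + and *, but not for - and /
assert a + b == b + a and a * b == b * a
assert a - b != b - a and a / b != b / a

# Identities and multiplication by zero
assert a + 0 == a and a * 1 == a and a * 0 == 0

# Associativity for + and *; distributivity of * over + and -
assert (a + b) + c == a + (b + c) and (a * b) * c == a * (b * c)
assert a * (b + c) == a * b + a * c
assert a * (b - c) == a * b - a * c
```

Running the block raises no assertion, which is exactly the point: every property listed holds for these exact-fraction values.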
Paying In Advance

October 16th 2009, 09:29 AM
The premiums on an insurance policy are $60 every 3 months, payable at the beginning of each three-month period. If the policy holder wishes to pay 1 year's premiums in advance, how much should be paid, provided that the interest rate is 4.3% compounded quarterly? What formula would I use for this? The present value annuity formula is what I believe it should be...

October 16th 2009, 10:17 AM
The question is worded poorly; normally you'd expect premium payments to stay stable, so it would be $240. However, if you mean the premiums go up 4.3% a quarter, then use the compound interest formula $A(t) = A_0(1+x)^t$.

October 16th 2009, 10:50 AM
Look at it this way to "unconfuse(!)" yourself: assume that the insured left $180 at the beginning (after paying the first $60) in an account that pays 4.3% compounded quarterly. He then withdraws $60 every 3 months to pay the next 3 quarterly premiums. Calculate the interest he will receive in that account. That interest is the amount by which the $240 would be reduced.
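The present-value-of-an-annuity-due reading can be checked numerically. This is a sketch that assumes the usual convention of a quarterly rate equal to the nominal 4.3% divided by four; the function name is mine, not from the thread.

```python
# Present value of an annuity-due: equal payments at the START of each period,
# so the first payment is not discounted at all.
def pv_annuity_due(payment, rate_per_period, n_periods):
    return sum(payment / (1 + rate_per_period) ** k for k in range(n_periods))

premium = 60.0
quarterly_rate = 0.043 / 4        # 4.3% nominal, compounded quarterly
pv = pv_annuity_due(premium, quarterly_rate, 4)
print(round(pv, 2))               # a little under the undiscounted $240
```

The result lands a few dollars below $240, which matches the second reply's point: discounting only trims the three deferred premiums.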
Accurate tools for measuring plant uniformity Seed spacing may vary due to planter errors, poor planter maintenance, or seed germination problems. Average plant spacing and standard deviation of plant spacing are often used to determine plant spacing accuracy. Having only an average of plant to plant spacing is not a good measure of what is present in the field, because spacing between plants is not normally distributed (not uniform). The standard deviation is not a very good measurement either since it is based on squared deviations of the mean and is therefore influenced by a few very large spacings (skips or misses). Because of these problems, Kachman and Smith (1995) concluded that the mean and standard deviation are not appropriate for summarizing distributions of plant spacing. They compared four other measures that were based on theoretical spacing and found that they do a good job of summarizing distributions of plant spacing for single seed planters. Theoretical spacing is the targeted distance between plants, assuming no skips, no multiples, and no variability in seed drop. Theoretical spacing is abbreviated as xref . The existing stand is assessed relative to what the plant to plant spacing should be. The theoretical spacing is used to divide the observed spacings into five divisions: Division I = 0 to 0.5 x[ref]. These are multiple seeds at the same spot or seed spacings that are closer than ½ the theoretical spacing. Division II = 0.5 x[ref] to 1.5 x[ref]. These are single plant spacings that are close to the theoretical spacing. Division III =1.5 x[ref] to 2.5 x[ref]. These are single skips. Division IV = 2.5 to 3.5 x[ref]. These are double skips. Division V = over 3.5 x[ref]. These are triple skips etc. Four measures of plant-spacing accuracy are based on the frequency of spacings that occur in the five divisions. 
They are as follows:

Multiples index, D (doubles, triples, etc.), is the percentage of spacings that are less than or equal to half the theoretical spacing: D = n[I] / N x 100, where n[I] is the number of spacings in region I and N is the total number of spacings. Smaller values of D indicate better performance than larger values.

Miss index, M (skips), is the percentage of spacings greater than 1.5 times the theoretical spacing: M = (n[III] + n[IV] + n[V]) / N x 100, where n[III], n[IV], and n[V] are the numbers of spacings in regions III, IV, and V and N is the total number of spacings. These skips could be due to the failure of the planter to drop a seed or the failure of a seed to produce a seedling. Smaller values of M indicate better performance than larger values.

Quality of feed index, A, is the percentage of spacings that are more than half but no more than 1.5 times the theoretical spacing: A = n[II] / N x 100, where n[II] is the number of spacings in region II and N is the total number of spacings. This is a measure of how close the spacings are to the theoretical spacing. It is another way to look at the information in the other two indices, since 100 - (D + M) = A. Larger values of A indicate better performance than smaller values.

Precision, C, is a measure of the variability in plant spacing after removing the variability due to skips and multiples. Precision is similar to a coefficient of variation for the spacings that are classified as singles (i.e., plants in region II): C = s[II] / x[ref] x 100, where s[II] is the standard deviation of the n[II] observations in region II and x[ref] is the theoretical spacing. It is not affected by outliers, multiples, or skips. A practical upper limit is 29%. Smaller values of C indicate better performance than larger values.

Kachman, S.D. and J.A. Smith. 1995. Alternative measures of accuracy in plant spacing for planters using single seed metering. Transactions of the American Society of Agricultural Engineers.
38.

This text, written by Roger Elmore, is taken from a Crop Watch article (University of Nebraska extension newsletter) written May 17, 2002.
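The four indices can be computed directly from a list of measured plant-to-plant spacings. The sketch below follows the region boundaries defined above and expresses C as a percentage, consistent with the 29% practical upper limit; the example spacings are made up for illustration.

```python
from statistics import stdev

def spacing_indices(spacings, x_ref):
    """Multiples (D), miss (M), quality-of-feed (A), and precision (C)
    indices from observed plant spacings and a theoretical spacing x_ref."""
    n = len(spacings)
    region1 = [s for s in spacings if s <= 0.5 * x_ref]               # multiples
    region2 = [s for s in spacings if 0.5 * x_ref < s <= 1.5 * x_ref]  # singles
    misses = n - len(region1) - len(region2)                           # skips
    d = 100.0 * len(region1) / n
    m = 100.0 * misses / n
    a = 100.0 * len(region2) / n                    # equals 100 - (D + M)
    # C: coefficient of variation of the singles, as a percent of x_ref
    c = 100.0 * stdev(region2) / x_ref if len(region2) > 1 else 0.0
    return d, m, a, c

# Target spacing 10; one double, two skips, four near-target singles
d, m, a, c = spacing_indices([4, 10, 11, 22, 9, 35, 10], x_ref=10)
```

For this sample, D and M pick out the one multiple and two skips, A accounts for the remaining four singles, and C stays well under the 29% limit.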
ApplianceMagazine.com | Web Exclusive: Modified Energy Factor - 61st Annual Laundry Appliances Report
issue: September 2008, APPLIANCE Magazine
61st Annual Laundry Appliances Report
Web Exclusive: Modified Energy Factor
by Lisa Bonnema, Senior Editor

Modified Energy Factor (MEF) is an equation that takes into account the amount of dryer energy used to remove the remaining moisture content in washed items, in addition to the machine energy and water heating energy of the washer. MEF is the energy performance metric for clothes washers qualifying for the Energy Star rating, a joint program of the U.S. Department of Energy and the U.S. Environmental Protection Agency. The higher the MEF, the more efficient the clothes washer is.

MEF is the quotient of the capacity of the clothes container, C, divided by the total clothes washer energy consumption per cycle, with such energy consumption expressed as the sum of the machine electrical energy consumption, M, the hot water energy consumption, E, and the energy required for removal of the remaining moisture in the wash load, D. The equation is:

MEF = C / (M + E + D)

Water Factor (WF) is the number of gallons per cycle per cubic foot that the clothes washer uses. The lower the water factor, the more efficient the washer is. WF is the quotient of the total weighted per-cycle water consumption, Q, divided by the capacity of the clothes washer, C. If a clothes washer uses 30 gallons per cycle and has a tub volume of 3.0 cubic feet, then the water factor is 10.0. The equation is:

WF = Q / C

Currently, all Energy Star qualified clothes washers must have a minimum MEF of 1.72 and a maximum water factor of 8.0.
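The two formulas are one-line ratios, so they are easy to sketch in code. The water-factor numbers below are the article's own example; the energy figures fed to MEF are purely illustrative, not measurements from any real washer.

```python
def modified_energy_factor(capacity_cuft, machine_kwh, water_heat_kwh, dryer_kwh):
    # MEF = C / (M + E + D): higher means a more efficient washer
    return capacity_cuft / (machine_kwh + water_heat_kwh + dryer_kwh)

def water_factor(gallons_per_cycle, capacity_cuft):
    # WF = Q / C: lower means a more water-efficient washer
    return gallons_per_cycle / capacity_cuft

wf = water_factor(30.0, 3.0)                        # the article's example
mef = modified_energy_factor(3.0, 0.2, 0.8, 0.5)    # illustrative energy figures

# Energy Star thresholds quoted in the article (as of 2008)
qualifies = mef >= 1.72 and wf <= 8.0
```

With the example inputs the washer meets the MEF floor but fails the water-factor ceiling, so it would not qualify.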
Pahlavi Texts, Part I (SBE05), E.W. West, tr. [1880], at sacred-texts.com

CHAPTER XXI 3.

1. I write the indication of the midday shadow; may it be fortunate! 2. Should the sun come 4 into Cancer the shadow is one foot of the man, at the fifteenth degree of Cancer it is one foot; when the sun is at Leo it is one foot and a half, at the fifteenth of Leo it is two feet; when the sun is at Virgo it is two feet and a half, at the fifteenth of Virgo it is three feet and a half; at Libra it is four 1 feet and a half, at the fifteenth of Libra it is five feet and a half 2; at Scorpio it is six feet and a half, at the fifteenth of Scorpio it is seven 3 feet and a half; at Sagittarius it is eight feet and a half, at the fifteenth of Sagittarius it is nine feet and a half; at Capricornus it is ten feet, at the fifteenth of Capricornus it is nine 4 feet and a half; at Aquarius it is eight 5 feet and a half, at the fifteenth of Aquarius it is seven feet and a half; at Pisces it is six feet and a half, at the fifteenth of Pisces it is five feet and a half; at Aries it is four feet and a half, at the fifteenth of Aries it is three feet and a half; at Taurus it is two feet and a half, at the fifteenth of Taurus it is two feet; at Gemini it is one foot and a half, at the fifteenth of Gemini it is one foot 6.

3. The midday shadow is written 1, may its end be good!

4. I write the indication of the Aûzêrîn (afternoon) 2 period of the day; may it be well and fortunate by the help of God (yazdân)! 5. When the day is at a maximum (pavan afzûnŏ), and the sun comes unto the head 3 of Cancer, and one's shadow becomes six feet and two parts 4, he makes it the Aûzêrîn period (gâs). 6. Every thirty days it always increases one foot and one-third, therefore about every ten days the reckoning is always half a foot 1, and when the sun is at the head of Leo the shadow is seven 2 feet and a half. 7.
In this series every zodiacal constellation is treated alike, and the months alike, until the sun comes unto the head of Capricornus, and the shadow becomes fourteen feet and two parts. 8. In Capricornus it diminishes again a foot and one-third 3; and from there where it turns back, because of the decrease of the night and increase of the day, it always diminishes one foot and one-third every one of the months, and about every ten days the reckoning is always half a foot, until it comes back to six feet and two parts; every zodiacal constellation being treated alike, and the months alike 4. 397:3 The contents of this chapter, regarding the lengths of midday and afternoon shadows, immediately follow a tale of Gôst-i Fryânô, which is appended to the book of Ardâ-Vîrâf's journey to the other world, both in M6 and K20. As will be seen from the notes, these details about shadows were probably compiled at Yazd in Persia, as they are suitable only for that latitude. 397:4 Reading âyad-ae (a very rare form), or it may be intended for hômanâe, 'should it be,' but it is written in both MSS. exactly like the two ciphers for the numeral 5. Mullâ Fîrûz in his Avîgeh Dîn, p. 279 seq., takes 5 khadûk pâî as implying that the shadow is under the sole of the foot, or the sun overhead; but neither this reading, nor the more literal 'one-fifth of a foot,' can be reconciled with the other measures; though if we take 5 as standing for pangak, 'the five toes or sole,' we might translate as follows: 'When the sun is at Cancer, the shadow is the sole of one foot of the man.' 398:1 K20 has 'three' by mistake. 398:2 M6 omits 'and a half' by mistake. 398:3 K20 has 'six' by mistake. 398:4 Both MSS. omit one cipher, and have only 'six,' but the shadow must be the same here as at the fifteenth of Sagittarius. 398:5 Both MSS. have 'seven,' which is clearly wrong. 
398:6 It is obvious that, as the length of a man's shadow depends upon the height of the sun, each of these observations of his noonday shadow determines the altitude of the sun at noon, and is, therefore, a rude observation for finding the latitude of the place, provided we know the ratio of a man's foot to his stature. According to Bund. XXVI, 3 a man's stature is eight spans (vitast), and according to Farh. Okh. p. 41 a vitast is twelve finger-breadths, and a foot is fourteen (see Bund. XXVI, 3, note), so that a man's stature of eight spans is equivalent to 6 6/7 feet. Assuming this to have been the ratio adopted by the Observer, supposing the obliquity of the ecliptic to have been 23° 35´ (as it p. 399 was about A.D. 1000), and calculating the latitude from each of the thirteen different lengths of shadow, the mean result is 32° 1´ north latitude, which is precisely the position assigned to Yazd (the head-quarters of the small remnant of Zoroastrians in Persia) on some English maps, though some foreign maps place it 15´ or 20´ farther south. With regard to the rough nature of this mode of observation it may be remarked that, as the lengths of the shadows are noted only to half a foot, there is a possible error of a quarter-foot in any of them; this would produce a possible error of 2° 4´ in the midsummer observation of latitude, and of 39´ in the midwinter one; or a mean possible error of 1° 22´ in any of the observations; so that the possible error in the mean of thirteen observations is probably not more than 6´, and the probable error is even less, provided the data have been assumed correctly. 399:1 Reading nipist, but only the first and last letters are legible in M6, and the middle letter is omitted in K20. 399:2 See Bund. XXV, 9. 399:3 The word sar, 'head,' usually means 'the end,' but it must be here taken as 'the beginning;' perhaps, because the zodiacal signs are supposed to come head-foremost. 
399:4 What portion of a foot is meant by bâhar, 'part,' is doubtful. It can hardly be a quarter, because 'two quarters' would be too clumsy a term for 'a half.' But it appears from § 5-7 that the shadow, necessary to constitute the Aûzêrîn period, is taken as increasing uniformly from six feet and two parts to fourteen feet and two parts, an increase of eight feet in six months, or exactly one foot and one-third per month, as stated in the text. And, deducting this monthly increase of one foot and one-third from the seven and a half feet shadow at the end of the first month, we have six feet and one-sixth remaining for the shadow at the p. 400 beginning of the month. Hence we may conclude that the 'two parts' are equal to one-sixth, and each 'part' is one-twelfth of a foot.

400:1 Meaning that the increase of shadow is to be taken into account as soon as it amounts to half a foot, that is, about every ten days. Practically, half a foot would be added on the tenth and twentieth days, and the remaining one-third of a foot at the end of the month.

400:2 Both MSS. have 'eight,' but this would be inconsistent with the context, as it is impossible that 'six feet and two parts' can become 'eight feet and a half' by the addition of 'one foot and one-third,' whatever may be the value of the 'two parts' of a foot.

400:3 Both MSS. have 3 yak-1 pâî, instead of pâî 3 yak-1.

400:4 This mode of determining the beginning of the afternoon period is not so clumsy as it appears, as it keeps the length of that period exceedingly uniform for the six winter months with some increase in the summer time. In latitude 32° north, where the longest day is about 13 hours 56 minutes, and the shortest is 10 hours 4 minutes, these observations of a man's shadow make the afternoon period begin about 3¾ hours before sunset at mid-summer, p. 401 diminishing to 2¾ hours at the autumnal equinox, and then remaining very nearly constant till the vernal equinox.
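The latitude reckoning in note 398:6 can be replayed numerically. This sketch assumes, as the note does, a stature of eight 12-finger spans against a 14-finger foot (48/7 feet) and an obliquity of 23° 35′; both solstice shadows then place the observer near 32° north.

```python
import math

STATURE_FT = 48 / 7        # eight 12-finger spans measured in 14-finger feet
OBLIQUITY = 23 + 35 / 60   # obliquity of the ecliptic, about A.D. 1000

def latitude_from_noon_shadow(shadow_ft, summer=True):
    # Noon solar altitude from the shadow a standing man casts
    altitude = math.degrees(math.atan(STATURE_FT / shadow_ft))
    # At the solstices: altitude = 90 - latitude +/- obliquity
    return 90.0 - altitude + (OBLIQUITY if summer else -OBLIQUITY)

summer = latitude_from_noon_shadow(1.0, summer=True)    # one-foot shadow, Cancer
winter = latitude_from_noon_shadow(10.0, summer=False)  # ten-foot shadow, Capricornus
```

Both values come out within a degree of 32° N, consistent with the note's mean of 32° 1′ over thirteen observations.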
A silly question?

l bb:
When I played with the "Rules Round-up" game, there was one question I couldn't understand. Sorry, I can't remember exactly what the question was.
Q: What is the result if we do 270 >> 33?
1) 277 >> 1
2) 0
3) -1
I thought that if you right-shift by more than the number of bits the number has, you always get 0, but the correct answer is "277 >> 1". I don't know how this comes out and what it means.

Sandeep:
Please refer to Chapter 3 of the K & B book. It is explained in the Exam Watches.

l bb:
I just can't understand the explanation, and I can't understand why the result is not a number but an expression, 277 >> 1.

Sheriff:
"l bb" - Welcome to the JavaRanch! Please adjust your displayed name to meet the JavaRanch Naming Policy. Thanks! and welcome to the JavaRanch!

l bb:
I've changed it. Is there someone who can explain that problem for me? Thank you in advance.

Steven:
The >> operator only works on the number of bits in the primitive type. Because 270 is an int, it only has 32 bits, so the shift value is modded by 32 (shift % 32). If you did 270L it would be modded by 64, but in this case that would not change the result.

lauren bai:
Steven, thank you. I understand a little bit now. Using int as an example, does this mean that any shift distance of 32 or larger will be modded? I.e., 64 will be equal to 0, so don't shift a bit. Why doesn't it give a compile error to say that you can't shift more bits than the primitive type has? Who does this mod for us, the JVM?

Ranch Hand:
In the shift operators section of the JLS it says this: "If the promoted type of the left-hand operand is int, only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x1f. The shift distance actually used is therefore always in the range 0 to 31, inclusive."
The JVM may not actually be performing a mod (%) operation as it normally would, but the end result is the same, assuming the right operand is a positive integer: the right-hand operand will effectively be modded by 32 before being used by the shift operator. So 64 acts like 0, and 32 acts like 0. Yes, the JVM does the mod for us. The compiler doesn't give a compile-time error because the shift operator only uses the low-order bits of the right-hand operand, so there can be no problem with shifting more bits than the left-hand operand's bit depth. Incidentally, the result is particularly difficult to calculate by hand when the right-hand operand is negative.

Raghu:
Here is a simple way to find the value. An int has 32 bits, so x = 8 is stored like 0000 0000 0000 0000 0000 0000 0000 1000. If you do x >> 33, then 33 is greater than the number of bits in an int. In this situation you find the remainder, i.e., 33 % 32 = 1, so the bits are shifted 1 bit right, and the answer is 4.

Paulo:
Is that supposed to be 270 >> 1?

lauren bai:
Thanks, Raghu! Your explanation finally made me understand. The five lowest-order bits of the right-hand operand and the & 0x1f mask are the point. Yes, Paulo, it should be 270 >> 1. Thanks to everyone who gave me help. I like this place now.
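The JLS masking rule can be demonstrated with a short script. Note that Python's own `>>` does NOT mask the shift distance (its integers are arbitrary-precision, so 270 >> 33 really is 0), which makes it a handy foil: the Java behavior for a 32-bit int is recovered by AND-ing the distance with 0x1f exactly as the spec describes.

```python
# Emulate Java's 32-bit int right shift for values that fit in 32 bits.
# The JLS keeps only the five low-order bits of the shift distance.
def java_int_shr(value, distance):
    return value >> (distance & 0x1F)

assert 270 >> 33 == 0                              # plain Python: no masking
assert java_int_shr(270, 33) == 270 >> 1 == 135    # Java: 33 & 0x1F == 1
assert java_int_shr(8, 33) == 4                    # Raghu's x = 8 example
assert java_int_shr(8, 32) == 8                    # a shift of 32 acts like 0
```

So in Java, `270 >> 33` and `270 >> 1` compile to the same effective shift, which is why the answer is given as an equivalent expression rather than a surprise constant.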
The Discrete-Time Sliding Mode Control with Computation Time Delay for Repeatable Run-Out Compensation of Hard Disk Drives

Mathematical Problems in Engineering, Volume 2013 (2013), Article ID 505846, 13 pages
Research Article

^1College of Mechanical and Electrical Engineering, China Jiliang University, 528 Xueyuan Road, Xiasha Univ Park, Hangzhou 310018, China
^2School of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
^3State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China

Received 22 August 2012; Accepted 9 January 2013
Academic Editor: Kwok-Wo Wong

Copyright © 2013 T. H. Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Presented is a study of the problem of disturbance rejection, specifically of periodic disturbance, by applying the discrete-time sliding mode control method. For perturbations such as modeling errors and external disturbances, their compensation is formulated using the designed sliding mode control. To eliminate the effect of these perturbations, the convergence rate between the disturbance and its compensation is shaped by an additional parameter. Decoupling of the resultant perturbation estimation dynamics from the closed-loop dynamics is achieved. Computation time delay is also accounted for in addressing the perturbation effects. The approach developed ensures the robustness of the sliding mode dynamics to parameter uncertainties and exogenous disturbances, in addition to the complete rejection of the periodic disturbance component.
Satisfactory simulation results as well as experimental ones have been achieved based on a fast servo system of a modern hard disk drive to illustrate the validity of the controller for repeatable run-out (RRO) compensation.

1. Introduction

One of the most attractive features of the variable structure system (VSS) with sliding mode is its invariance and robustness to perturbations, including modeling errors and external disturbances. Originating in the late 1950s, sliding mode theory has been developed mostly in the continuous-time domain, especially after the earliest works published by Itkis [1] and Utkin [2]. Thereafter, research on this topic increased rapidly: for example, Hui and Żak studied discrete-time variable structure sliding mode control [3], and Kalsi et al. presented a high-gain approach [4]. Employing describing-function techniques, sliding mode control of DC servo mechanisms was analyzed in the presence of unmodeled stator and sensor dynamics by Xu et al. [5]. A sliding-mode-based learning controller for track-following in hard disk drives was presented by Wu and Liu [6], and the computation time delay was treated as a fault to be detected, with an appropriate controller used to minimize its effects, by Garcia et al. [7]. Sliding mode requires high-speed discontinuous action to steer the states of a system onto a sliding surface and to maintain the subsequent motion on that surface. Nowadays, digital computers are widely used in all kinds of control systems. However, the limited sampling frequency results in control inputs that are constant between two sampling instants, making it difficult to implement instantaneous actions in a sampled-data system.
This means that when the system dynamics cross the sliding surface between sampling instants, the control input cannot immediately act to keep the system on that sliding surface, so VSS controllers implemented in discrete time do not possess those desirable properties, due to the finite sampling time [8]. Therefore, implementation of continuous VSS controllers on digital computers presents difficulties due to the limited sampling rate, sample/hold effects, and discretization errors, which can possibly lead to unacceptable results [9]. In order to achieve the required robustness, much research on discrete-time variable structure control (DVSC) and discrete-time sliding mode control (DSMC) has been presented recently. Sarpturk et al. [10] and Kotta [11] showed the necessary but not sufficient conditions for achieving a sliding motion of discrete variable structure systems, and proposed a new and stricter condition to be met. Hung et al. [12] proposed a reaching law for discrete-time variable structure control. Gao et al. [13] formulated the concept of discrete sliding mode by making use of the reaching law proposed in [12], ensuring that the reaching conditions are satisfied for stability and convergence of the sliding variable. Bartolini et al. [14] incorporated an adaptive control strategy into the modeling of system uncertainties and designed a control law in terms of a discrete-time equivalent control. Tesfaye and Tomizuka [15] proposed the concept of time-delay control (TDC) and a robust discrete-time sliding mode control using the delta (δ) operator. The behavior of the discrete-time sliding mode controller for linear, nonlinear, and stochastic systems was investigated by Su et al. [16], who conducted a detailed analysis of sampling and hold effects. Koshkouei and Zinober [17] introduced the notion of the sliding lattice and clarified the concept of discrete-time sliding mode control (DSMC).
Corradini and Orlando [18] employed the concept of TDC to eliminate the effects of system perturbation within the switching region. With a proper sampling period T, Su et al. [19] verified that the thickness of the boundary layer can be reduced to O(T²) for a continuous smooth disturbance. Eun et al. [20] proposed a decoupled disturbance compensator (DDC) by directly using the variable structure framework and combined it with DVSC. In this DVSC-with-DDC structure, however, the system response may become slow, and controller implementation can become rather complex. To enable the tracking error vector to remain close to zero without using any additional disturbance estimation scheme, Kim and Cho [21] proposed a DVSC method with a recursive switching function (R-DVSC). After that, an R-DVSC with an additional parameter to tune the transient response was proposed by Furuta [9]. Furthermore, a sliding surface that allows the transient response to be shaped by introducing information about past states into the sliding surface was proposed by S. M. Lee and B. H. Lee [22]. Besides the limited sampling rate, there is an inherent computation time delay, caused by the measurement delays of feedback signals as well as the execution time of instructions when a control algorithm is implemented on a digital computer. S. M. Lee and B. H. Lee [22] summarized some of the previous works. Kondo and Furuta [23] examined the computation time delay for an optimal full state-feedback regulator problem. Ha and Ly [24] formulated the computation time delay in the W-domain. Misawa [25] showed that the presence of computation time delay not only reduces relative stability and robustness but also degrades performance, and proposed that a controller embracing the concept of TDC proposed by Tesfaye and Tomizuka [15] should be used to compensate for the effects of perturbations. For the repetitive control of discrete-time systems, one of the earlier classical works was presented by Tomizuka et al. [26].
Recently, discrete-time sliding-mode congestion control with time-varying delay for multisource communication networks was presented by Ignaciuk and Bartoszewicz [27]. Sliding mode control for time-varying delayed systems based on a reduced-order observer was considered by Yan et al. [28]. Due to its importance and wide use, many researchers have focused on high-performance controllers for real systems; see, for example, [29–33]. Here, a high-performance servo system for HDDs is investigated via a discrete-time sliding mode approach. Since discrete-time sliding mode control has been developed for guaranteeing the asymptotic stability of uncertain sampled-data systems, as well as for reducing the chattering phenomenon that arises when the controller is implemented on a digital computer, the computation time delay should also be considered in the discrete-time sliding mode structure to improve system performance. This paper presents a discrete-time sliding mode controller with an additional parameter to adjust the convergence rate for improving the perturbation compensation; then, in the spirit of [22], the effects of computation time delay are considered for complete disturbance compensation. Experiments have been implemented based on a modern hard disk drive (HDD). It is assumed that the computation time delay is constant and smaller than the sampling time. For clarity of presentation, a controller with an additional adjustable parameter that does not take computation time delay into account is presented first. Then the controller that considers computation time delay is developed. By introducing the additional parameter, the difference between the perturbation and its compensation converges to zero asymptotically. The controller design law makes use of the concept of TDC to compensate for the effects of perturbations.
In the presence of perturbations caused by unknown external disturbances and parametric uncertainties, the developed controller generates a compensation signal to cancel their influence through the time-delay mechanism. The DSMC with the decoupled estimator presented in this paper is intrinsically robust to parameter uncertainties and exogenous disturbances, and is especially effective against periodic disturbances. In real servo systems, for example, hard disk drives have strong repeatable runouts (RROs) due to the eccentricity of the rotating disk and shaft, while the high-speed, high-precision multiaxis stages in wire bonders or IC (integrated circuit) exposure lithography equipment experience strong periodic disturbances during stepping. The periods of the disturbances differ when the operating modes change, and these can be estimated automatically within the control algorithm. The positioning accuracy of such control systems can be improved significantly by the DSMC with the decoupled estimator when computation time delay is taken into account. The remainder of the paper is organized as follows. Discrete-time models with and without computation time delay are briefly outlined in Section 2. In Section 3, a discrete-time sliding mode controller with shaped perturbation rejection that does not consider the computation time delay is presented. Afterwards, the controller that accounts for the effects of computation time delay is developed. In Section 4, numerical simulations to verify the proposed control methods are conducted based on the servo-control model of a commercial HDD. In Section 5, an experimental digital servo system for track-to-track seeking of an HDD is implemented, and the measured results are provided to show the validity of the proposed method. Finally, conclusions are drawn in Section 6. 2. 
Preliminaries Consider the following SISO LTI system with parametric uncertainties and exogenous disturbances, described by the continuous-time model: where is the -dimensional state vector, is the system input, is the measured output, and is the perturbation, which represents the effects of external disturbances and parametric uncertainties. , , , , and are constant matrices of appropriate dimensions. The continuous-time matching condition is assumed to hold; that is, there exist a row vector and a scalar such that and . 2.1. Discrete-Time Model without Computation Time Delay The discrete-time model with sampling period is , denoting with the index the variable evaluated at . In the above system, the pair is assumed to be controllable, and as shown by Tesfaye and Tomizuka [15], the matching condition can be enforced in the discrete system model if a suitable sampling time is chosen by means of the assumption below. Assumption 1 (see [15]). Let the sampling interval be small enough that the first two terms in the Taylor expansion of a function give an acceptably close approximation. Hence, the following discrete-time model can be obtained [18]: where , , , and are obtained as follows: In (4), the disturbance is constructed to include the parametric uncertainty and the external disturbance vector . Assumption 2. The matching condition holds and the pair is completely controllable; both are assumed to hold throughout the paper. The switching function is then defined as follows: 2.2. Discrete-Time Model with Computation Time Delay Assume there exists a delay , caused mainly by the execution time of the instructions that generate the control input after the sampling instant when a control algorithm is implemented on a digital computer. Following S. M. Lee and B. H. Lee [22], it is assumed here that the delay is constant and smaller than one sampling interval , and this condition is assumed to hold throughout the paper. 
Therefore, the control input should be chosen as follows: The discrete-time model of (1) with control input (12) is given as follows [22, 23, 34]: where , , and . In spite of the computation time delay, the controllability and observability assumptions are preserved in this model. Although the discrete-time matching condition is not satisfied in system (8), the effects of the perturbation can be sufficiently suppressed under the assumption that the perturbation varies slowly relative to the sampling frequency . 3. Discrete-Time Sliding Mode Control Design The concept of TDC [15] consists of estimating the uncertain dynamics of the system through past observations of the system response, so that a control function can be generated (with some delay) to approximately counterbalance their effects. In order to cancel the effects of perturbation inside the switching region, Corradini and Orlando [18] proposed a discrete-time VSC in which the perturbation estimator was formulated using the concept of TDC. Furthermore, a sliding surface was selected that allows the transient response to be shaped by introducing the past states into the sliding surface. In discrete-time systems, instead of a hyperplane as in the continuous-time case, a countable set of points is defined, comprising the so-called lattice, and the surface on which these sliding points lie is named the latticewise hyperplane [17, 35]. Here, using the concept of discrete-time sliding mode illustrated by the sliding lattice, the DSMC with a decoupled perturbation compensator (DPC) that does not consider computation time delay is presented first. Then, regarding the inherent computation time delay, an improved DSMC with DPC is presented. 3.1. 
Discrete-Time Sliding Mode Control Design without Computation Time Delay Considering the discrete-time linear time-invariant system given by (10), the discrete-time equivalent control for the nominal plant without perturbation can be obtained by solving the equation as The control objective of the discrete-time sliding mode control is to achieve . When , that is, when the state vector remains on the sliding surface, the closed-loop dynamics becomes The state vector remains on the sliding surface defined in (11) and converges to the origin of the state space if is a contractive matrix. In the spirit of the reaching law proposed by Gao et al. [13], the transient response can be shaped by introducing an additional adjustable parameter into the desired sliding function dynamics such that . Solving this equation, the control input that can shape the transient response is obtained as follows: This control law for the nominal plant can be modified for the system with unknown perturbations. If denotes the perturbation compensation input, the following modified control input is obtained: Then, substituting (11) into the expression of defined by (6), the following equation can be obtained: From (13), we have The additive input term ensures the stability and robustness of the closed-loop system under the presence of perturbations [18, 36]. However, the control cannot actually be implemented because of a lack of knowledge about . If is bounded and varies considerably more slowly than the sampling frequency , the difference between and is of [36]. If the disturbance is a smooth function, the difference between and is of [16], so can be used as an estimate of . From (14), a one-step backward shifted perturbation becomes The compensation input is defined as an estimate of : . So, from (15), the perturbation compensation law can be obtained as Obviously, (16) is the separate disturbance estimator, and the convergence rate of depends only on . 
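The equations of this subsection were lost in extraction, so the following is only a hedged numerical sketch of its two ideas: a control input chosen so that the sliding function contracts as s_{k+1} = (1 − α)s_k, and a time-delay estimate that recovers the one-step-old perturbation from the measured sliding variable and uses it as the current estimate. All numerical values (a discretized double-integrator plant, C = [1, 1], α = 0.5) are illustrative assumptions, not the paper's identified parameters.

```python
def simulate_dsmc_tdc(d_seq, alpha=0.5, T=0.1):
    """Hedged sketch: shaped reaching law s_{k+1} = (1-alpha)*s_k plus a
    time-delay (one-step-back) perturbation estimate, on a ZOH-discretized
    double integrator. Illustrative values only."""
    Phi = [[1.0, T], [0.0, 1.0]]          # exp(A*T) for xdd = u
    Gamma = [T * T / 2.0, T]              # ZOH input matrix
    C = [1.0, 1.0]                        # sliding function s = C x
    CG = C[0] * Gamma[0] + C[1] * Gamma[1]
    x, s_prev, dhat = [1.0, 0.0], None, 0.0
    s_hist, err_hist = [], []
    for d in d_seq:
        s = C[0] * x[0] + C[1] * x[1]
        if s_prev is not None:
            # The closed loop gives s_k = (1-alpha)*s_{k-1} + CG*(d_{k-1}
            # - dhat_{k-1}), so the one-step-old perturbation is exactly
            # recoverable from measurements (the TDC idea):
            dhat = dhat + (s - (1 - alpha) * s_prev) / CG
        err_hist.append(d - dhat)         # compensation error d_k - dhat_k
        CPhi = [C[0] * Phi[0][j] + C[1] * Phi[1][j] for j in range(2)]
        # equivalent control shaped so that s_{k+1} = (1-alpha)*s_k,
        # minus the perturbation estimate:
        u = ((1 - alpha) * s - CPhi[0] * x[0] - CPhi[1] * x[1]) / CG - dhat
        x = [Phi[i][0] * x[0] + Phi[i][1] * x[1] + Gamma[i] * (u + d)
             for i in range(2)]
        s_prev = s
        s_hist.append(s)
    return s_hist, err_hist
```

For a constant perturbation the recovered estimate matches the true value after one step, so the compensation error vanishes and the sliding variable then decays geometrically by the factor (1 − α), which mirrors the decoupling of estimator and sliding dynamics stated in Theorem 1.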
If the perturbation is constant or slowly varying, an adjustable parameter is added to the term that contains the switching function in (16) so that converges asymptotically to zero under the desired sliding function dynamics, leading to the following equations: Equation (18) is the separate disturbance estimator to be used to compensate system perturbations. The transient response has been shaped by introducing the past state into the sliding surface. It can easily be shown that this control law forces the system to reach the discrete-time sliding mode, and the perturbation can be compensated in a finite number of time instants, if is a decreasing sequence for [37, 38]. Theorem 1. For the system described in (4), if one chooses the control law (11) and (12) and the disturbance compensation law (18), with being the compensation error, the following closed-loop sliding mode dynamics and compensation error dynamics are satisfied: Proof. From (13), (14) can easily be proven. By substituting (16) and (19) into the equation , (20) can be derived as follows: Theorem 1 implies that the disturbance estimation dynamics and the sliding mode dynamics are decoupled. This is similar to the separation principle, allowing the two dynamics to be tuned separately. Robustness results for the proposed method are presented in the following theorems. Theorem 2. If (20) holds for with , and for some constant , then for all , there exists some integer such that Proof. It follows from (20) that Since for all , the compensation error satisfies Iterating the above inequality gives This implies It is shown in Theorem 2 that the compensation error in (20) converges asymptotically to zero if the disturbance is constant or slowly varying. Theorem 3. If with and for (19) and (20), for all k and , then there exists some integer such that for all we have Proof. 
It follows from (20) that The robustness of the closed-loop sliding mode dynamics to the disturbance is guaranteed by Theorem 3, and the asymptotic bound on the switching function is proportional to the bound on . By Theorem 3, the switching function is ensured to converge to a bound proportional to the reciprocal of the adjustable parameters and . Similar theorems can be stated and proved for periodic perturbation compensation. For a periodic disturbance with known period , assuming varies slowly relative to the sampling frequency , the compensation input is defined as an estimate of . From (18), the compensation for the periodic disturbance is obtained as It can easily be shown that this control law forces the system to reach the discrete-time sliding mode, and the periodic disturbance can be compensated in a finite number of time instants, if is a decreasing sequence for [37, 39]. 3.2. Discrete-Time Sliding Mode Control Design with Computation Time Delay The discrete-time sliding mode controller that accounts for the effects of computation time delay is developed here. Similar to the control law of the previous section, the new control law uses the concept of TDC to formulate the perturbation estimation process [36]. Considering the discrete-time model with computation time delay described by (13), the following equivalent control can be obtained for the nominal plant without perturbation: For the desired sliding function dynamics , the control input that can shape the transient response is obtained as For the compensation of the unknown perturbation, the perturbation compensation input is added to the above control input, leading to the control law Note that the control input is included in the equivalent control input . Since contains the time delay control input , the input defined for the nominal plant should be modified to exclude the control term . 
From (32), we have and redefine the equivalent control input for the nominal system as follows: So, the control law with compensation for the unknown perturbation can be obtained as In fact, the control cannot actually be implemented because of a lack of knowledge of the current . Under the assumption that the perturbation is bounded and varies much more slowly than the sampling frequency , can be used as an estimate of . Substituting (34) into the expression of , the following equation is obtained: Based on (35), a one-step backward shifted perturbation becomes The compensation input is defined as an estimate of : . From (36), the following perturbation compensation law can be obtained: Obviously, (37) is the decoupled perturbation compensator, and the convergence rate of depends only on . If the perturbation is constant or slowly varying, another adjustable parameter is added to the term that contains the switching function in (37) so that converges asymptotically to zero under the desired sliding function dynamics, and hence the following equations can be obtained: For a periodic disturbance with known period , the current disturbance can actually be obtained as a one-period shifted disturbance as The compensation input is defined as an estimate of ; thus, Obviously, (40) is the decoupled compensator for the periodic disturbance. In theory, the control signal can steer the errors from any finite value to the sliding surface in one step if its magnitude can be arbitrarily large. In a real control implementation, however, there is always a limit on the actual control signal, whose amplitude must remain within a certain range. If the control input signal is too large, some components of the system could be damaged. To avoid this, the range must be chosen to ensure that the errors are steered to the sliding surface in a finite number of steps instead of a single step. Similar to Bartolini et al. 
[14], in this study, the control input is selected as follows: where is the actual control limit. This control law also forces the errors to reach the discrete-time sliding mode with a decreasing sequence [39]. Remark 4. If there is no computation time delay, that is, if , then the matrix becomes and the term disappears. When the time delay is present, the forms of the control law and the disturbance estimator change due to and , and complete disturbance rejection can still be achieved via (40). 4. Application to a Hard Disk Drive 4.1. Modeling the Plant In the hard disk drive (HDD) model, the rigid-body model of the voice coil motor (VCM) and the resonance dynamics are the most important for servo control. The block diagram of the HDD servomechanism with DSMC is shown in Figure 1. The nominal model, that is, the rigid-body dynamics of the VCM, is formulated as a double-integrator model as follows: where (tracks) is the position of the read/write head, is the VCM input current, is the position measurement gain, and is the acceleration constant. Several resonance modes exist due to the flexibility of the actuator assembly. The transfer function for the resonance dynamics [34] can be expressed as follows: During controller design, the resonance dynamics are treated as model uncertainty and only the double-integrator model of the voice coil motor is considered; as a result, the corresponding state-space model of the system becomes where and are the head displacement and velocity, respectively, and is the VCM coil current input. 
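The discrete matrices derived next were lost in extraction; as a hedged illustration, the standard closed-form zero-order-hold discretization of the rigid-body VCM model above (a double integrator with an assumed acceleration constant b) is sketched below. The matrices follow from Φ = e^{AT} and Γ = (∫₀ᵀ e^{As} ds)B, whose series terminate because A² = 0 for a double integrator.

```python
def zoh_double_integrator(T, b=1.0):
    """Closed-form zero-order-hold discretization of the rigid-body VCM
    model xdd = b*u. For A = [[0, 1], [0, 0]] and B = [[0], [b]] the
    matrix-exponential series terminates, giving the familiar result
    Phi = [[1, T], [0, 1]], Gamma = [b*T^2/2, b*T]."""
    Phi = [[1.0, T],
           [0.0, 1.0]]
    Gamma = [b * T * T / 2.0,
             b * T]
    return Phi, Gamma
```

With this model, one sampling step from rest under a constant input u advances the head position by b·u·T²/2 and the velocity by b·u·T, which is the continuous-time double-integrator response evaluated at t = T.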
Let , ; if the sampling time is assumed to be , then using (8) and the series expansion expressions in [34], the matrices of the discrete-time system can be obtained as ( in this case ) and the discrete-time model of the nominal plant becomes The measured frequency response function of the system is shown in Figure 2, and the system parameters are identified as track·m^−1, m·s^−2·A^−1, A, = 1/12e3 s, and a spindle motor speed of 4200 rpm. The periodic disturbance (RRO) in the HDD is assumed to be + + . Let the computation time delay be ; using the expansion equation in [34] and following the above procedure, the discrete-time model becomes 4.2. Controller Design The DSMC and the compensator for the periodic disturbance are developed as (14) and (20). Based on the nominal plant model (47), that is, the double-integrator model, the switching function is designed by defining . The other parameters in the closed-loop model are chosen as and . The saturation for the control effort is selected as . Simulation results are shown in Figure 3. The periodic disturbance can be estimated accurately after three periods, as shown in Figure 3(a); the proposed method also reduces the position error signal (PES) effectively thanks to the resulting periodic compensation control input, as shown in Figure 3(b). For parametric uncertainties and nonperiodic disturbances, the resonant modes of the head actuator are taken into account; that is, with the first four resonance modes added to the Simulink model of the nominal plant, the system with the proposed DSMC and compensator has been found to be just as stable and to exhibit the same performance. This demonstrates that the DSMC effectively suppresses the uncertainties due to the resonance modes of the system with different control effort, as shown in Figure 3(c). 
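The delayed discrete-time model referred to above is also lost to extraction. Following the standard treatment in Franklin, Powell, and Workman (cited here as [34]), a computation delay λ with 0 ≤ λ < T splits the input matrix into Γ1, which multiplies the previous input u_{k−1} (still applied during the first λ seconds of the interval), and Γ0, which multiplies the current input u_k. The closed forms below are a hedged sketch for the double-integrator VCM model only; the paper's actual matrices are not reproduced here.

```python
def zoh_with_delay(T, lam, b=1.0):
    """Double integrator xdd = b*u discretized with computation delay lam:
    x_{k+1} = Phi x_k + Gamma1 u_{k-1} + Gamma0 u_k  (cf. [34]).
    Gamma1 accounts for the old input held over [0, lam), Gamma0 for the
    new input held over [lam, T)."""
    if not 0.0 <= lam < T:
        raise ValueError("delay must satisfy 0 <= lam < T")
    Phi = [[1.0, T], [0.0, 1.0]]
    Gamma0 = [b * (T - lam) ** 2 / 2.0, b * (T - lam)]
    Gamma1 = [b * lam * (T - lam / 2.0), b * lam]
    return Phi, Gamma0, Gamma1
```

A quick consistency check on this split: Γ0 + Γ1 must equal the delay-free input matrix [b·T²/2, b·T], and λ = 0 must make Γ1 vanish, which matches Remark 4's observation that the extra term disappears without delay.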
To examine the robust stability against parameter variations, the plant model parameters are assumed to be estimated with errors from their nominal values; here the resonant dynamics are still taken as part of the model uncertainty. Figure 3(d) compares the simulation results using the same control parameters for the following four cases: (1) no estimation error; (2) 10% estimation error; (3) 20% estimation error; (4) 30% estimation error. Figure 3(d) clearly exhibits the robust performance of the designed controller. 5. Experimental Results Figure 4 shows the experimental setup used for this investigation. An HP signal analyzer (HP 35670A) was used for modal testing. The control desk consists of a laser Doppler vibrometer (LDV), an HDD, and a PC installed with the dSPACE DSP system (DSPACE 1103) and the MATLAB software packages. The detailed configuration of the control system is shown in Figure 5. A sampling time of μs was used during the control implementation. Experimental results for track-to-track seeking in different cases are shown in Figures 6–10. Figure 6 shows the closed-loop response of the system with the proposed DSMC after the parameters were adjusted to , , and with as the saturation for the control effort. The steady-state error of the closed-loop system is zero; that is, no PES remains in track-to-track following with the DSMC when there is no external disturbance in the servo system. The settling time in the time-domain step response is only about 1 ms for one track. The overshoot is also lower than half of the reference signal. When external disturbances with frequency 420 Hz (period Hz), as shown in Figure 7, were added to the servo system, the PES of the closed-loop system with the proposed controller increased significantly in the case where the decoupled disturbance compensator was not incorporated into the control system. 
The PES of such a closed-loop system for track following of an HDD with RRO is very significant, as shown in Figure 8. On the other hand, the PES measurement of the closed-loop system with DSMC and the decoupled periodic disturbance compensator for track-to-track seeking is shown in Figure 9. The amplitude of the PES was greatly decreased, from 25 mV to 6.2 mV, after optimizing the parameters of the compensator, a reduction of more than 75% compared with Figure 7. The PES of the closed-loop system for track following of the HDD with RRO was also significantly reduced when the disturbance compensator was incorporated, as shown in Figure 10. Compared with Figure 8, it can be seen clearly that the PES has been reduced by more than 75%. As shown in Figure 10, however, the disturbance could not be compensated completely, likely due to the computation time delay, as will be shown next. Experimental PES measurements of the DSMC with the disturbance compensator that considers the computation time delay are shown in Figures 11 and 12. After the computation time delay was adjusted to μs, the responses of the DSMC with disturbance compensation for track-to-track seeking were measured and are shown in Figure 11. Evidently, the periodic disturbance has been almost completely compensated, and there is virtually no difference between the reference and the steady-state response. The effectiveness of the DSMC and disturbance compensator with computation time delay can also be seen from the experimental results obtained for track-following control, as shown in Figure 12, from which it is clear that the PES has been suppressed considerably. 6. Conclusions A method of achieving robust perturbation rejection in discrete-time sliding mode control has been presented. The developed DSMC with DPC, which considers computation time delay, achieves robust tracking in the presence of unknown disturbances, which include external disturbances as well as parameter uncertainties. 
The presented method decouples the disturbance estimation dynamics from the closed-loop sliding mode dynamics. An additional parameter has been introduced to allow asymptotic convergence under the desired sliding dynamics in both tracking and regulation problems. Satisfactory simulation and experimental results based on the servomechanism of a modern 2.5″ HDD have been achieved, demonstrating the practicality of the proposed method. The simulation results show the satisfactory feature of complete rejection of the periodic component. In the experimental implementation of the control based on the DSP1102, the performance of the closed-loop system using the DSMC and the perturbation compensator was improved greatly when computation time delay was considered. This demonstrates the robustness of the sliding mode dynamics to parameter uncertainties and exogenous disturbances containing periodic components, and such robustness is always guaranteed for the developed control scheme. This work was supported by the Natural Science Foundation of China (51075377, 41176076, 51121002, and 51175486), the Natural Science Foundation of Zhejiang Province (R1100015, Y7100512), and the Open Foundation of the State Key Laboratory of Fluid Power and Mechatronic Systems at Zhejiang University (GZKF-201017). 1. U. Itkis, Control Systems of Variable Structure, Wiley, New York, NY, USA, 1976. 2. V. I. Utkin, “Variable structure systems with sliding modes,” IEEE Transactions on Automatic Control, vol. 22, no. 2, pp. 212–222, 1977. 3. S. Hui and S. H. Żak, “On discrete-time variable structure sliding mode control,” Systems & Control Letters, vol. 38, no. 4-5, pp. 283–288, 1999. 4. K. Kalsi, J. Lian, S. Hui, and S. H. 
Żak, “Sliding-mode observers for systems with unknown inputs: a high-gain approach,” Automatica, vol. 46, no. 2, pp. 347–353, 2010. 5. J. X. Xu, T. H. Lee, and Y. J. Pan, “On the sliding mode control for DC servo mechanisms in the presence of unmodeled dynamics,” Mechatronics, vol. 13, no. 7, pp. 755–770, 2003. 6. W. C. Wu and T. S. Liu, “Sliding mode based learning control for track-following in hard disk drives,” Mechatronics, vol. 14, no. 8, pp. 861–876, 2004. 7. J. P. F. Garcia, L. M. C. F. Garcia, G. C. Apolinário, and F. B. Rodrigues, “Sliding mode for detection and accommodation of computation time delay fault,” Mathematics and Computers in Simulation, vol. 80, no. 2, pp. 449–465, 2009. 8. J. H. Kim, S. H. Oh, D. I. Cho, and J. K. Hedrick, “Robust discrete-time variable structure control methods,” Transactions of the ASME Journal of Dynamic Systems, Measurement and Control, vol. 122, no. 4, pp. 766–775, 2000. 9. K. Furuta, “Sliding mode control of a discrete system,” Systems & Control Letters, vol. 14, no. 2, pp. 145–152, 1990. 10. S. Z. Sarpturk, Y. Istefanopulos, and O. Kaynak, “On the stability of discrete-time sliding mode control systems,” IEEE Transactions on Automatic Control, vol. 32, pp. 930–932, 1987. 11. U. Kotta, “Comments on ‘On the stability of discrete-time sliding mode control systems’,” IEEE Transactions on Automatic Control, vol. 34, no. 9, pp. 1021–1022, 1989. 12. J. Y. Hung, W. Gao, and J. C. Hung, “Variable structure control: a survey,” IEEE Transactions on Industrial Electronics, vol. 
40, pp. 2–22, 1993. 13. W. Gao, Y. Wang, and A. Homaifa, “Discrete-time variable structure control systems,” IEEE Transactions on Industrial Electronics, vol. 42, no. 2, pp. 117–122, 1995. 14. G. Bartolini, A. Ferrara, and V. I. Utkin, “Adaptive sliding mode control in discrete-time systems,” Automatica, vol. 31, no. 5, pp. 769–773, 1995. 15. A. Tesfaye and M. Tomizuka, “Robust control of discretized continuous systems using the theory of sliding modes,” International Journal of Control, vol. 62, no. 1, pp. 209–226, 1995. 16. W.-C. Su, S. V. Drakunov, and Ü. Özgüner, “Implementation of variable structure control for sampled-data systems,” in Robust Control via Variable Structure and Lyapunov Techniques, F. Garofalo and L. Glielmo, Eds., vol. 217 of Lecture Notes in Control and Information Sciences, pp. 87–106, Springer, Berlin, Germany, 1996. 17. A. J. Koshkouei and A. S. I. Zinober, “Discrete-time sliding mode control design,” in Proceedings of the 13th IFAC World Congress, pp. 481–486, 1996. 18. M. L. Corradini and G. Orlando, “Variable structure control for uncertain sampled data systems,” in Proceedings of the IEEE International Workshop on Variable Structure Systems (VSS '96), pp. 117–121, December 1996. 19. W.-C. Su, S. V. Drakunov, and Ü. Özgüner, “An O(T^2) boundary layer in sliding mode for sampled-data systems,” IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 482–485, 2000. 20. Y. Eun, J. H. Kim, K. Kim, and D. I. 
Cho, “Discrete-time variable structure controller with a decoupled disturbance compensator and its application to a CNC servomechanism,” IEEE Transactions on Control Systems Technology, vol. 7, no. 4, pp. 414–423, 1999. 21. J. Kim and D. Cho, “Discrete-time variable structure control using recursive switching function,” in Proceedings of the American Control Conference, pp. 1113–1117, 2000. 22. S. M. Lee and B. H. Lee, “A discrete-time sliding mode controller and observer with computation time delay,” Control Engineering Practice, vol. 7, no. 8, pp. 943–955, 1999. 23. R. Kondo and K. Furuta, “Sampled-data optimal control of continuous systems for quadratic criterion function taking account of delayed control action,” International Journal of Control, vol. 41, no. 4, pp. 1051–1060, 1985. 24. C. Ha and U. L. Ly, “Sampled-data system with computation time delay: optimal W-synthesis method,” Journal of Guidance, Control, and Dynamics, vol. 19, no. 3, pp. 584–591, 1996. 25. E. A. Misawa, “Observer-based discrete-time sliding mode control with computational time delay: the linear case,” in Proceedings of the American Control Conference, pp. 1323–1327, June 1995. 26. M. Tomizuka, T. C. Tsao, and K. K. Chew, “Analysis and synthesis of discrete-time repetitive controllers,” Transactions of the ASME Journal of Dynamic Systems, Measurement and Control, vol. 111, no. 3, pp. 353–358, 1989. 27. P. Ignaciuk and A. Bartoszewicz, “Discrete-time sliding-mode congestion control in multisource communication networks with time-varying delay,” IEEE Transactions on Control Systems Technology, vol. 19, no. 4, pp. 852–867, 2011. 28. X.-G. Yan, S. K. Spurgeon, and C. 
Edwards, “Sliding mode control for time-varying delayed systems based on a reduced-order observer,” Automatica, vol. 46, no. 8, pp. 1354–1362, 2010. 29. Q. Hu, C. Du, L. Xie, and Y. Wang, “Discrete-time sliding mode control with time-varying surface for hard disk drives,” IEEE Transactions on Control Systems Technology, vol. 17, no. 1, pp. 175–183, 2009. 30. E. Azadi Yazdi, M. Sepasi, F. Sassani, and R. Nagamune, “Automated multiple robust track-following control system design in hard disk drives,” IEEE Transactions on Control Systems Technology, vol. 19, no. 4, pp. 920–928, 2011. 31. J. Nie and R. Horowitz, “Control design of hard disk drive concentric self-servo track writing via H2 and H∞ synthesis,” IEEE Transactions on Magnetics, vol. 47, no. 7, pp. 1951–1957, 2011. 32. T. Semba, M. T. White, and F. Y. Huang, “Adaptive cancellation of self-induced vibration,” IEEE Transactions on Magnetics, vol. 47, no. 7, pp. 1958–1963, 2011. 33. J. T. Fei, T. H. Li, F. Wang, and W. R. Juan, “A novel sliding mode control technique for indirect current controlled active power filter,” Mathematical Problems in Engineering, vol. 2012, Article ID 549782, 18 pages, 2012. 34. G. F. Franklin, J. D. Powell, and M. L. Workman, Digital Control of Dynamic Systems, Addison-Wesley, Reading, Mass, USA, 3rd edition, 1998. 35. A. J. Koshkouei and A. S. I. Zinober, “Sliding mode state observers for SISO linear discrete-time systems,” in Proceedings of the UKACC International Conference on Control, vol. 96, pp. 837–842, September 1996. 36. K. D. Young, V. I. Utkin, and Ü. 
Özgüner, “Control engineer's guide to sliding mode control,” in Proceedings of the 1996 International Workshop on Variable Structure Systems (VSS '96), pp. 1–14, December 1996. 37. C. Y. Chan, “Robust discrete-time sliding mode controller,” Systems & Control Letters, vol. 23, no. 5, pp. 371–374, 1994. 38. A. J. Koshkouei and A. S. I. Zinober, “Sliding lattice design for discrete-time linear multivariable systems,” in Proceedings of the 35th IEEE Conference on Decision and Control, pp. 1497–1502, December 1996. 39. M. X. Sun and Y. Y. Wang, “Periodic disturbance compensation in discrete-time sliding mode control,” in Proceedings of the International Conference on Control and Automation (ICCA '02), Xiamen, China, 2002.
- The Fortran95 Computer Code for Finite-Difference Numerical Generation and Simulation of a 1D Seismic Wavefield in a 1D Heterogeneous Viscoelastic Medium Using the Displacement-Stress Staggered-Grid Finite-Difference Scheme
- The Fortran95 Computer Code for Finite-Difference Numerical Generation and Simulation of a 1D Seismic Wavefield in a 1D Heterogeneous Viscoelastic Medium Using the Displacement-Velocity-Stress Staggered-Grid Finite-Difference Scheme
- The Fortran95 Computer Code for Finite-Difference Numerical Generation and Simulation of a 1D Seismic Wavefield in a 1D Heterogeneous Viscoelastic Medium Using the Velocity-Stress Staggered-Grid Finite-Difference Scheme
- A parallel/serial 2D spectral element code for wave propagation and rupture dynamics
- A program designed for computation of seismic wavefields in 3D heterogeneous surface geological structures with a planar free surface, due to surface and near-surface point double-couple sources
- An analytical solution to anelastic wave propagation in a homogeneous medium
- An analytical solution to poroelastic wave propagation in a homogeneous medium
- Modelling of propagation of surface waves in 3D structures by the mode coupling method
- Direct Solution Method (DSM) software for calculating full-wave synthetics in a laterally homogeneous, spherically symmetric Earth model
- Exact 2D response for a VACUUM/ELASTIC interface, directional source (Lamb's problem)
- Exact 2D response for an ACOUSTIC/ELASTIC or ELASTIC/ELASTIC interface, compressional source
- Exact 2D response for a VACUUM/ELASTIC interface, compressional source (Garvin's problem)
- FD3S: a finite-difference solver of the elastic wave equation in a spherical section. FD3S allows modelling of seismic wave attenuation as well as anisotropy with a radial symmetry axis. The finite-difference scheme is of fourth order in space and of second order in time. Arbitrarily complex Earth models can easily be included.
- FD3S(AD): an extension of FD3S (see above) that allows computing the derivative of an objective functional, defined on a recorded seismic wavefield, with respect to the model parameters. The computation of the derivative is based on the adjoint method.
- A very simple training code for finite differences (FD) in 2D
- Calculation of synthetic seismograms for global, spherically symmetric media based on direct evaluation of Green's functions
- Gar6more 3D: analytical solutions of wave propagation problems in stratified media in 3D
- Gar6more 2D: analytical solutions of wave propagation problems in stratified media in 2D
- Fault zone waves: analytical solutions
- ISOLA moment tensor retrieval software package: Fortran code ISOLA to retrieve isolated asperities from regional or local waveforms, based on multiple-point-source representation and iterative deconvolution
- A code that calculates the two components of the 2D wavefield generated by a line dislocation at the interface between two different elastic solids
- Normal modes: normal-mode-based computation of seismograms for spherically symmetric Earth models
- A code that calculates the three-component wavefield generated by a point dislocation at the interface between two different elastic solids
- A very simple code for elastic wave simulation in 2D using a pseudo-spectral Fourier method
- Ray theory: a ray-theoretical approach to the calculation of synthetic seismograms in global Earth models
- Reflectivity method: code for calculating the response of a stratified set of uniform solid layers to excitation by a point moment-tensor source, using the reflectivity method (RM)
- SHaxi: performs elastic and anelastic global SH wave propagation in the Earth's mantle for axisymmetric geometries
- A package of spherical harmonics analysis and synthesis tools for UNIX/GMT environments; required to plot model coefficients downloaded from Becker and Boschi's database
- An interactive application for the pre- and post-processing of data for seismic wave modelling. Three integrated modules (GeomodVi, MeshVi, DataVi) allow for easy preparation, processing and visualisation of such data.
- Surface wave ray tracing with azimuthal anisotropy, as described by Boschi and Woodhouse (GJI, 2006)
- Analytical solution for a 3D anelastic (transversely isotropic) homogeneous material due to a point force source
- Time-frequency misfits: the Fortran95 program TF-MISFITS is designed for computation of time-frequency misfits between tested and reference seismograms
- Understanding seismic radiation patterns (tutorial): the Matlab routine anaseis.m shows the 3D radiation pattern and the corresponding seismograms for various source types
- Volpis: a Fortran code for source inversion in volcano seismology. The code can be used to retrieve the time-dependent source components (moment tensor and single forces) of volcanic events.
- Calculation of phase velocity dispersion curves
- membraneSphere: a software package that simulates membrane waves over the whole globe
- A spectral element package for 2D wave propagation and earthquake rupture dynamics
Careers in Mathematics, DVD

A co-publication of the AMS, Society for Industrial and Applied Mathematics, and Mathematical Association of America, with funding by the Sloan Foundation.

2001; DVD

The "Careers in Mathematics" DVD contains interviews with mathematicians working in industry, business and government. The purpose of the DVD is to allow the viewer to hear from people working outside academia what their day-to-day work life is like and how their background in mathematics contributes to their ability to do their job. Interviews were conducted on site, showing the work environment and some of the projects mathematicians contributed to as part of multidisciplinary teams. People interviewed come from industry-based firms such as Kodak and Boeing, business and financial firms such as Price Waterhouse and D. E. Shaw & Co., and government agencies such as the National Institute of Standards and Technology and the Naval Sea Systems Command.

Careers in Mathematics was developed jointly by the American Mathematical Society (AMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA).

This DVD is also available as a video. See Careers in Mathematics (VHS).

ISBN-10: 0-8218-4081-9
ISBN-13: 978-0-8218-4081-8
List Price: US$15
Order Code: CMDVD
Rowlett Prealgebra Tutor
Find a Rowlett Prealgebra Tutor
...This involves everything from how to use an agenda, time management, organizing their study area, and effective communication techniques to understand what the teacher really wants, to how to approach the subjects necessary to study that evening. I also teach my students to recognize their learning st...
21 Subjects: including prealgebra, reading, English, writing
I hold a BS in biochemistry and a PhD in cell biology, so I have extensive education in science and math. I like to adapt my tutoring style to the student and work with him/her to figure out the method that they respond to best. I like to provide structured tutoring experiences, so please send as much detail as possible about the subject matter that requires tutoring.
9 Subjects: including prealgebra, chemistry, geometry, biology
...I was a tutor in college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years, where math is important to performing my job. I hold a Master's degree in Education with an emphasis on instruction in math and science for grades 4th through 8th.
11 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...My research at SMU focused on genetic pathways. On one of my projects I had to construct a very specific fly strain, a fly strain that didn't already exist. Through the use of classic genetics and selection, I was able to design the fly.
30 Subjects: including prealgebra, reading, chemistry, English
...I taught all the subjects in 3rd grade to bilingual students whose first language was Spanish. I also taught Spanish to Dual-Language students whose first language was English. I taught Spanish to adults in after-school programs.
14 Subjects: including prealgebra, Spanish, geometry, ESL/ESOL
Math Help

October 5th 2008, 03:38 PM — #1

(2.1) Examples of parametrised cubics. Some plane cubic curves can be parametrised, just as the conics:

Nodal cubic: $C: (y^2=x^3+x^2) \subset \mathbb{R}^2$ is the image of the map $\varphi: \mathbb{R}^1 \to \mathbb{R}^2$ given by $t\mapsto (t^2-1, t^3-t)$.

Cuspidal cubic: $C: (y^2 = x^3)\subset \mathbb{R}^2$ is the image of $\varphi:\mathbb{R}^1 \to \mathbb{R}^2$ given by $t\mapsto (t^2,t^3)$.

(1) Let $C: (y^2 = x^3 + x^2) \subset \mathbb{R}^2.$ Show that a variable line through $(0,0)$ meets $C$ at one further point, and hence deduce the parametrisation of $C$ given in (2.1). Do the same for $(y^2 = x^3)$ and $(x^3 = y^3-y^4).$

If anyone could show me how to do any of those in (1), I would really appreciate it. I have no idea how to tackle this problem. Thanks!

October 5th 2008, 03:59 PM — #2

The first step is to determine what a line through the origin looks like. The equation of this line will be $y = kx$ for some $k$. Then:

$y^2 = x^3 + x^2$

Substituting for $y$:

$(kx)^2 = x^3 + x^2$

$x^3 = (kx)^2 - x^2$

$x^3 = k^2x^2 - x^2$

$x^3 = (k^2 - 1)x^2$

For $x \neq 0$:

$x = k^2 - 1$

And since $y = kx$:

$y = k(k^2 - 1)$
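As a quick numerical sanity check (not part of the original thread), the point found on the line $y = kx$ does reproduce the parametrisation in (2.1), with the slope $k$ playing the role of the parameter $t$. A small Python sketch:

```python
# Verify that t -> (t^2 - 1, t^3 - t) parametrises the nodal cubic
# y^2 = x^3 + x^2. The slope k of the line y = kx through (0, 0) plays
# the role of t: the "further point" is (k^2 - 1, k(k^2 - 1)).

def on_nodal_cubic(x, y, tol=1e-9):
    """True if (x, y) satisfies y^2 = x^3 + x^2 up to rounding."""
    return abs(y**2 - (x**3 + x**2)) < tol

for k in [-2.0, -0.5, 0.0, 1.5, 3.0]:
    x, y = k**2 - 1, k * (k**2 - 1)
    assert on_nodal_cubic(x, y)

# The cuspidal cubic works the same way: y = kx meets y^2 = x^3 again
# at (k^2, k^3), recovering t -> (t^2, t^3).
for k in [-2.0, 0.5, 4.0]:
    assert abs((k**3) ** 2 - (k**2) ** 3) < 1e-9
```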
Polynesians May Have Invented Binary Math

How old is the binary number system? Perhaps far older than the invention of computers, or even the invention of binary math in the West. The residents of a tiny Polynesian island may have been doing calculations in binary—a number system with only two digits—centuries before it was described by Gottfried Leibniz, the co-inventor of calculus, in 1703.

If you’re reading this article, you are almost certainly a user of the decimal system. That system is also known as base-10 because of its repeating pattern of 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 is followed by 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, and so forth. But the decimal system is not the only counting system available. The Babylonians used base-60. The Maya used base-20. Some Australian aboriginal groups may have used base-5. And of course, today most counting and calculation is done by computers not in decimal but in binary, the base-2 system of zeros and ones.

Each system has subtle advantages depending on what sort of counting and calculations are needed. The decimal system is handy considering that people have 10 fingers. But when it comes to division, other systems are better. Because 10 has only two prime factors (2 and 5), dividing by thirds results in an annoyingly infinite approximation (0.3333 …), whereas the base-12 counting system produces a nice finite solution. (Indeed, some mathematicians have advocated for a worldwide switch to base-12.)

Binary, meanwhile, has a leg up on decimal when it comes to calculation, as Leibniz discovered 300 years ago. For example, although numbers in binary become much longer, multiplying them is easier because the only basic facts one must remember are 1 x 1 = 1 and 0 x 0 = 1 x 0 = 0 x 1 = 0. But Leibniz may have been scooped centuries earlier by the people of Mangareva, a tiny island in French Polynesia about 5000 kilometers south of Hawaii.
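To make the multiplication point concrete, here is an illustrative Python sketch (not from the article) of schoolbook long multiplication in base 2. Every partial product follows from just those four basic facts, so each row is either zero or a shifted copy of the first factor:

```python
def binary_multiply(a, b):
    """Schoolbook long multiplication on binary digit strings.

    Since 1*1 = 1 and 0*0 = 1*0 = 0*1 = 0, each partial product is
    either zero or a left-shifted copy of `a`; the result is the sum
    of the shifted copies selected by the 1-bits of `b`.
    """
    result = 0
    for shift, bit in enumerate(reversed(b)):
        if bit == "1":                     # partial product: a shifted left
            result += int(a, 2) << shift
    return bin(result)[2:]

# 6 * 5 = 30  ->  110 * 101 = 11110
print(binary_multiply("110", "101"))  # 11110
```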
While studying their language and culture, Andrea Bender and Sieghard Beller, anthropologists at the University of Bergen in Norway, were astonished to find a mathematical system that seems to mix base-10 and base-2. “I was so thrilled that I couldn't sleep that night,” Bender says. It could be not only the first new indigenous arithmetic system discovered in decades, but also the first known example of binary arithmetic developed outside the West.

Like all Polynesians, the people who first settled on Mangareva more than 1000 years ago had a decimal counting system. But, according to Bender and Beller, the islanders added a binary twist over the ensuing centuries. Just as English has a few special words like a dozen for 12 and a score for 20, the Mangarevan language has special words for large groups. But their special counting words are all decimal numbers multiplied by powers of two, which are 1, 2, 4, 8 … . Specifically, takau equals 10; paua equals 20; tataua, 40; and varu, 80. Those big numbers are useful for keeping track of collections of valuable items, such as coconuts, that come in large numbers.

Bender and Beller realized that the Mangarevan counting system makes it possible to use binary arithmetic for calculations with large numbers, they report today in the Proceedings of the National Academy of Sciences in a paper that even nonexperts will enjoy reading.

But here’s the catch. Even if the native mathematical system of Mangareva employed binary arithmetic, the current residents of the island no longer use that system. Two centuries of contact with the West have resulted in a complete switch to decimal calculation. Even the Mangarevan language itself is now threatened with extinction. Bender and Beller are relying on their analysis of the language and an account of the traditional counting words written by ethnographers in 1938.
They acknowledge that it is impossible to prove exactly when Mangareva developed the system, but the entrenchment of the number terms in the language suggests a far-reaching origin. Unfortunately, the anthropologists may have made their discovery just one generation too late to see Mangarevan math in action. “The hypothesis advanced by the authors is indeed plausible,” says Rafael Núñez, an anthropologist at the University of California, San Diego, “but the absence of original Mangarevan written records constitutes a real challenge.” However, Núñez notes that ironically, “it is the absence of written practices in this culture that makes the hypothesis plausible.” Keeping track of all those calculations in their heads would have been so much easier with the binary math built into the Mangarevan language, he says.
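The group words reported in the article (takau = 10, paua = 20, tataua = 40, varu = 80) step by powers of two, so a large count decomposes much like a binary numeral with a decimal remainder. The Python sketch below is purely illustrative—a toy greedy decomposition, not an attested Mangarevan counting procedure:

```python
# Toy decomposition of a count into the reported Mangarevan group words.
# The values come from the article; the greedy scheme is an illustration.
GROUPS = [("varu", 80), ("tataua", 40), ("paua", 20), ("takau", 10)]

def decompose(n):
    """Break n into group words, largest first, plus a decimal remainder."""
    parts = []
    for name, value in GROUPS:
        count, n = divmod(n, value)
        parts.extend([name] * count)
    if n:
        parts.append(str(n))  # remainder counted in ordinary decimal words
    return parts

print(decompose(157))  # ['varu', 'tataua', 'paua', 'takau', '7']
```

Note that tataua, paua and takau can each appear at most once in such a decomposition—they behave like binary digits of the tens count—which is the binary aspect Bender and Beller describe.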
the first resource for mathematics

On unique continuation for nonlinear Schrödinger equations. (English) Zbl 1041.35072

In this paper, uniqueness properties of solutions to nonlinear Schrödinger equations of the form
$$\partial_t u + \Delta u + F(u, u^*) = 0, \qquad (1)$$
with $(x,t) \in \mathbb{R}^n \times \mathbb{R}$, are investigated. Precisely, the authors consider the following problem: if $u_1$ and $u_2$ are solutions of (1) with $(x,t) \in \mathbb{R}^n \times [0,1]$, belonging to an appropriate class $X$, and such that for some domain $D \subset \mathbb{R}^n$, $D \neq \mathbb{R}^n$, one has $u_1(x,0) = u_2(x,0)$ and $u_1(x,1) = u_2(x,1)$ for all $x \in D$, does it follow that $u_1 \equiv u_2$? The authors answer the question in the affirmative under very general assumptions on the nonlinearity expressed by the function $F(u, u^*)$. The main result is Theorem 1.1, whose proof is divided into a few steps. In this regard, an exponential decay estimate is proved in Lemma 2.1; Corollaries 2.2, 2.3 and 2.4, Theorem 2.5, and Corollaries 2.6 and 2.7 are then important steps in the proof of Theorem 1.1. The paper is technically sound and should be appreciated by researchers in nonlinear analysis and mathematical physics.

35Q55 NLS-like (nonlinear Schrödinger) equations
35B60 Continuation of solutions of PDE
Patent US4829577 - Speech recognition method

The present invention relates to a speech recognition method using Markov models, and more particularly to a speech recognition method wherein the adaptation of Markov model statistics to speaker input can be easily performed. In speech recognition using Markov models, a Markov model is established for each word. Generally, for each Markov model, a plurality of states and transitions between the states are defined. To the transitions, occurrence probabilities and output probabilities of labels or symbols are assigned. Unknown speech is then converted into a label string, and the probability of each word Markov model outputting the label string is determined based on the transition occurrence probabilities and the label output probabilities assigned to each respective word Markov model. The word Markov model having the highest probability of producing the label string is determined, and the recognition is performed according to this result. In speech recognition using Markov models, the occurrence probabilities and label output probabilities (i.e., "parameters") can be estimated statistically. The details of the above recognition technique are described in the following articles.

(1) "A Maximum Likelihood Approach to Continuous Speech Recognition" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, pp. 179-190, 1983, Lalit R. Bahl, Frederick Jelinek and Robert L. Mercer)

(2) "Continuous Speech Recognition by Statistical Methods" (Proceedings of the IEEE, vol. 64, 1976, pp. 532-556, Frederick Jelinek)

(3) "An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition" (The Bell System Technical Journal, vol. 62, No. 4, April 1983, pp. 1035-1074, S. E. Levinson, L. R. Rabiner and M. M. Sondhi)

Speech recognition using Markov models generally needs a tremendous amount of speech data, and the training thereof requires much time.
Furthermore, a system trained with a certain speaker often does not achieve sufficient recognition scores for other speakers. And even for the same speaker, when a long time elapses between the training and the recognition, the difference between the two events may result in poor recognition. As a consequence of the foregoing difficulties in the prior art, it is an object of the present invention to provide a speech recognition method wherein a trained system can be adapted for different circumstances (e.g., different speakers, or the same speaker at different times), and wherein the adaptation is easily performed. In order to accomplish the above object, the present invention stores frequencies of events, which frequencies have been used for estimating parameters of Markov models during initial training. Frequencies of events are next determined based on adaptation data, referring to the parameters of the Markov models. Then new parameters are estimated utilizing the frequencies derived during training and during adaptation. FIG. 2 shows an example of a trellis diagram illustrating label generation between states in a Markov model. In FIG. 2, the abscissa indicates passing time--in terms of generated labels--and the ordinate indicates the states of the Markov model. An inputted label string is shown as w[1], w[2] . . . w[l] along the time axis. The state of the Markov model, as time passes, changes from an initial state I to a final state F along a path. The broken lines show all paths. In FIG. 2, the frequency of passing from the i-th state to the j-th state while outputting a label k is represented by a "count" C^*(i,j,k). C^*(i,j,k), the frequency of passing through the multiple paths indicated by the arrows in FIG. 2 and outputting a label k, is determined from the parameter probabilities P(i,j,k). A probability P(i,j,k) is defined as the probability of passing from state i to state j and outputting label k.
In this regard, the frequency of the Markov model being at the state i, S^*(i), as shown by the arc, is obtained by summing up the frequencies C^*(i,j,k) over each j and each k. From the definitions of the frequencies C^*(i,j,k) and S^*(i), a new parameter P'(i,j,k)--indicating the probability of traversing from state i to state j while producing a specific label k, given that the process is at state i--can be obtained according to the following estimation:

P'(i,j,k) = C^*(i,j,k) / S^*(i)

Iterating the above estimation for various j and k values can result in probabilities P[0](i,j,k) accurately reflecting the P' probabilities derived from the training data. Here the subscript zero indicates that the value is obtained after training; similarly, S[0]^* and C[0]^* are values derived from training. According to the present invention, frequencies C[1]^*(i,j,k) and S[1]^*(i) derived from adaptation speech data are obtained using the training probabilities P[0](i,j,k). The new "after adaptation" probability P[1](i,j,k) is determined from the training and adaptation data as follows:

P[1](i,j,k) = {λ C[0]^*(i,j,k) + (1-λ) C[1]^*(i,j,k)} / {λ S[0]^*(i) + (1-λ) S[1]^*(i)}

where 0 ≦ λ ≦ 1.

The frequencies needed for estimation are thus determined by interpolation. This interpolation renders the probability parameters P[0](i,j,k), obtained by the initial training, adaptable to different recognition circumstances. According to the present invention, C[0]^*(i,j,k) = P[0](i,j,k) × S[0]^*(i). From this relationship, the following estimation is derived. ##EQU1## Hence, the frequency C[0]^*(i,j,k) need not be stored. When the initial training data is significantly different from the adaptation data, it is desirable to make use of the following term instead of P[0](i,j,k):

(1-μ) P[0](i,j,k) + μe, 0 ≦ μ ≦ 1

Here e is a certain small constant number, preferably 1/((the number of labels) × (the number of branches)).
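A minimal sketch of this interpolated estimation (hypothetical function and variable names, not from the patent):

```python
def adapt_parameter(p0, s0, c1, s1, lam):
    """Re-estimate one parameter P1(i,j,k) from stored training
    statistics and newly collected adaptation statistics.

    p0  : P0(i,j,k), the parameter estimated from initial training
    s0  : S0(i), the stored state frequency from training
    c1  : C1(i,j,k), the count observed on the adaptation data
    s1  : S1(i), the state frequency observed on the adaptation data
    lam : interior division ratio, 0 <= lam <= 1

    The training count C0(i,j,k) is recovered via the identity
    C0(i,j,k) = P0(i,j,k) * S0(i), so it never has to be stored.
    """
    c0 = p0 * s0
    return (lam * c0 + (1 - lam) * c1) / (lam * s0 + (1 - lam) * s1)

# With equal state frequencies, lam = 0.5 simply averages the two
# relative frequencies: (0.5*20 + 0.5*30) / (0.5*100 + 0.5*100) = 0.25
print(adapt_parameter(0.2, 100, 30, 100, 0.5))  # 0.25
```

Setting lam = 1 recovers the pure training estimate, while lam = 0 discards it in favor of the adaptation data.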
In the preferred embodiment to be hereinafter described, probabilities of proceeding from one state to another state while outputting a label are used as the probabilistic parameters of the Markov models, though transition occurrence probabilities and label output probabilities may alternatively be defined separately and used as parameters.

FIG. 1 is a block diagram illustrating one embodiment of the invention. FIG. 2 is a diagram for describing the invention. FIG. 3 is a flow chart describing the operation of the labelling block 5 of the example shown in FIG. 1. FIG. 4 is a flow chart describing the operation of the training block 8 of the example shown in FIG. 1. FIG. 5, FIG. 6 and FIG. 7 are diagrams describing the flow of the operation shown in FIG. 4. FIG. 8 is a diagram for describing the operation of the adaptation block of the example shown in FIG. 1. FIG. 9 is a flow chart describing a modified version of the example shown in FIG. 1.

In FIG. 1, inputted speech data is supplied to an analog/digital (A/D) converter 3 through a microphone 1 and an amplifier 2. The A/D converter 3 transforms the uttered speech into digital data, which is then supplied to a feature extracting block 4. The feature extracting block 4 can be an array processor made by Floating Point Systems Inc. In the feature extracting block 4, the speech data is first discrete-Fourier-transformed every ten milliseconds using a twenty-millisecond-wide window. The Fourier-transformed data is then outputted at each channel of a 20-channel band pass filter and is subsequently provided to a labelling block 5. The labelling block 5 performs labelling by referring to a label prototype dictionary 6. In particular, speech is, by known methods, characterized as a plurality of clusters, where each cluster is represented by a respective prototype. The number of prototypes in the present embodiment is 128. The labelling is for example performed as shown in FIG.
3, in which X is the inputted feature from block 4, Y[i] is the feature of the i-th prototype in the dictionary 6, N is the number of all prototypes (=128), dist(X, Y[i]) is the Euclidean distance between X and Y[i], and m is the minimum value among the previous dist(X, Y[i])'s. m is initialized to a very large number. As shown in the figure, inputted features X are compared in turn with each feature prototype, and for each inputted feature the most similar prototype (that is, the prototype having the shortest distance) is outputted as an observed label, or label number, P. The labelling block 5 outputs a label string with an interval of ten milliseconds between consecutive labels.

The label string from the labelling block 5 is selectively provided to either a training block 8, an adaptation block 9, or a recognition block 10 through a switching block 7. Details regarding the operation of the training block 8 and the adaptation block 9 will be given later with reference to FIG. 4 and the figures thereafter. During initial training, the switching block 7 is switched to the training block 8 to direct the label string thereto. The training block 8 determines the parameter values of a parameter table 11 by training Markov models using the label string. During adaptation, the switching block 7 is switched to the adaptation block 9, which adapts the parameter values of the parameter table 11 based on the label string. During recognition, the switching block 7 is switched to the recognition block 10, which recognizes inputted speech based on the label string and the parameter table. The recognition block 10 can be designed according to forward-backward algorithm calculations or Viterbi algorithms, which are discussed in detail in the above article (2). The output of the recognition block 10 is provided to a workstation 12 and is displayed on its monitor. In FIG. 1, the blocks surrounded by the broken line are implemented in software on a host computer.
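The labelling loop of FIG. 3 can be sketched as follows (a simplified Python illustration; the actual system compares 20-channel spectral features against 128 prototypes):

```python
import math

def label(feature, prototypes):
    """Nearest-prototype labelling, mirroring FIG. 3: m holds the
    smallest Euclidean distance seen so far (initialized very large),
    and P the number of the closest prototype."""
    m, P = float("inf"), None
    for i, proto in enumerate(prototypes):
        d = math.dist(feature, proto)   # Euclidean distance dist(X, Yi)
        if d < m:
            m, P = d, i
    return P

# Toy dictionary of three 2-dimensional prototypes.
dictionary = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
print(label((0.9, 1.2), dictionary))  # 1
```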
An IBM 3083 processor is used as the host computer, and the IBM Conversational Monitor System (CMS) and PL/1 are used as the operating system and the language, respectively. The above blocks can also be implemented in hardware.

The operation of the training block 8 will next be described in detail. In FIG. 4, showing the procedure of the initial training, each word Markov model is first defined, step 13. In this embodiment the number of words is 200. A word Markov model is shown in FIG. 5. In this figure, small solid circles indicate states and arrows show transitions. The number of states, including the initial state I and the final state F, is 8. There are three types of transitions: transitions to the next state, tN; transitions skipping one state, tS; and transitions looping back to the same state, tL. The number of labels in a label string corresponding to one word is about 40 to 50. The label string of the word is related to the word Markov model from its initial state to its final state, sometimes looping and sometimes skipping. In training, the probabilities for a given word model are determined based on the string(s) of labels generated when the given word is uttered. Defining the Markov models involves establishing the parameter table of FIG. 1 tentatively. In particular, for each word a table of the format shown in FIG. 6 is assigned and the probability parameters P(i,j,k) are initialized. The parameter P(i,j,k) refers to the probability that a transition from the state i to the state j occurs in a Markov model, and that a label k is produced at that i→j transition. Furthermore, in this initialization, the parameters are set so that (a) a transition to the next state, (b) a looping transition, and (c) a skipping transition have respective probabilities of 0.9, 0.05 and 0.05. On each transition, in accordance with the initialization, all labels are produced with equal probability, that is, 1/128.
After defining the word Markov models, initial training data is inputted, step 14, which data has been obtained by speaking the 200 words to be recognized five times each. The five utterances of a word have been put together, and each utterance has been prepared so as to show which word it corresponds to and in which order it was spoken. Here let U = (u[1], u[2], . . . , u[5]) indicate the set of five utterances of one specified word. Each utterance u[n] corresponds to a respective string of labels. After completing the inputting of the initial training data, the forward calculation and backward calculation in accordance with the forward-backward algorithm are performed, step 15. Though the following procedure is performed for each word, for convenience of description consideration is given only to the set of utterances of one word. In the forward and backward calculations, the following forward value f(i,x,n) and backward value b(i,x,n) are calculated:

f(i,x,n): the frequency that the model reaches the state i at time x after starting from the initial state at time 0, for the label string u[n].

b(i,x,n): the frequency that the model extends back to the state i at time x after starting from the final state at time r[n], for the label string u[n].

The forward and backward calculations are easily performed sequentially using the following equations.

Forward calculation: ##EQU2## wherein P[t-1] is the parameter stored in the parameter table at that time, and k is determined depending on the Markov model; in this embodiment k = 0, 1, or 2.

Backward calculation: ##EQU3## wherein E is the number of states of the Markov model.

After completing the forward and backward calculations, the frequency that the model passes from the state i to the state j outputting the label k for the label string u[n], namely the count(i,j,k,n), is then determined.
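The recursions behind the ##EQU2## and ##EQU3## placeholders are the standard forward-backward ones. A generic sketch of the forward pass (illustrative Python, not the patent's exact equations) for a left-to-right model like FIG. 5, where a transition from state i can reach states i, i+1, or i+2 (k = 0, 1, or 2):

```python
def forward(labels, P, E):
    """f[x][j]: frequency of being in state j after emitting the first
    x labels, starting from state 0 at time 0.

    P[(i, j, k)] is the probability of moving i -> j while emitting
    label k; only loop (j = i), next (j = i + 1) and skip (j = i + 2)
    transitions are allowed, as in the word model of FIG. 5.
    """
    f = [[0.0] * E for _ in range(len(labels) + 1)]
    f[0][0] = 1.0
    for x, lab in enumerate(labels, start=1):
        for j in range(E):
            f[x][j] = sum(f[x - 1][i] * P.get((i, j, lab), 0.0)
                          for i in (j, j - 1, j - 2) if i >= 0)
    return f

# Tiny 3-state model that must emit 'a' then 'b' to reach the final state.
P = {(0, 1, "a"): 1.0, (1, 2, "b"): 1.0}
f = forward(["a", "b"], P, E=3)
print(f[2][2])  # 1.0 -- total probability of the string
```

The backward pass is symmetric: it starts from the final state at the last label and accumulates the same products in reverse.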
The count(i,j,k,n) is determined based on the forward value f(i,x,n) and the backward value b(j,x,n) for the label string u[n], step 16, according to the following expression: ##EQU4##

The above expression can be interpreted with reference to FIG. 7, which shows a trellis diagram for matching a word Markov model against the label string u[n] (= w[n1] w[n2] . . . w[nr[n]]). (For simplicity, the n subscript has not been included in FIG. 7.) u[n] (w[nx]) is depicted along the time axis. When w[nx] = k, that is, delta(w[nx],k) = 1, the w[nx] is circled. The condition w[nx] = k occurs when the label output considered in the word Markov model and the w[nx] label in the generated string are the same. Reference is now made to the path accompanied by an arrow and extending from the state i (state 3 in FIG. 7) to the state j (state 4 in FIG. 7) at the observing time x, at which the label w[nx] occurs. Both ends of the path are indicated by small solid circles in FIG. 7. In this case, the probability that the Markov model produces k = w[nx] is P[t-1](i,j,w[nx]). Furthermore, the frequency that the Markov model extends from the initial state I to the solid lattice point of the state i at the time x-1 (as shown by the broken line f) is represented by the forward value f(i,x-1,n), while the frequency that it extends back from the final state F to the solid lattice point of the state j at the time x (as shown by the broken line b) is represented by the backward value b(j,x,n). The frequency that k = w[nx] is outputted on the path p is therefore:

f(i,x-1,n) · b(j,x,n) · P[t-1](i,j,w[nx])

Count(i,j,k,n) is obtained by summing up the frequencies of the circled labels, i.e., those for which delta(w[nx],k) yields 1. The count(i,j,k,n) for an utterance is thus expressed using the aforementioned expression. At this point it is noted that, in arriving at state i (e.g., state 3) after x-1 previous labels, there may be a variety of possible paths.
The count calculation accounts for these various paths. For example, if x = 5, four labels in the string were generated before state 3 was reached. This situation could result from three self-loop transitions tL at state I followed by a (horizontal) tN transition (see transition f in FIG. 7), or from a tL transition at state I followed by a tN transition between states I and 3, which is in turn followed by two tL transitions at state 3. The paths are limited by the structure of the word Markov model. After obtaining count(i,j,k,n) for each label string u[n] (n = 1 to 5), the frequency C[t](i,j,k) over the set of label strings U is obtained, step 17. It should be noted that the label strings u[n] are different from each other, and that the frequencies of the label strings u[n], or the total probabilities T[n] of the label strings, are different from each other. The frequency count(i,j,k,n) should therefore be normalized by the total probability T[n]. Here T[n] = f(E, r[n], n), with E = 8 in this specific embodiment. The frequency over the whole training data of the word to be recognized, C[t](i,j,k), is determined as follows. ##EQU5## Next, the frequency that the Markov model is at the state i over the training data for the word to be recognized, S[t](i), is determined likewise based on count(i,j,k,n), step 18. ##EQU6## Based on the frequencies C[t](i,j,k) and S[t](i), the next parameter P[t+1](i,j,k) is estimated as follows, step 19:

P[t+1](i,j,k) = C[t](i,j,k) / S[t](i)

The above estimation process, i.e., the procedures of steps 14 through 19, is repeated a predetermined number of times (for example, five times) to complete the training for the target word, step 20. The same training is performed for the other words. After completing the training, the final "after training" parameter P[0](i,j,k)--i.e., P[t] after the predetermined number of processing cycles--is stored in the parameter table, FIG. 1, to be used for the speech recognition which follows.
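Steps 17 through 19 can be sketched as follows (illustrative Python with hypothetical names; the per-utterance counts would in practice come from the forward-backward pass, and T_n is the total path probability of utterance n):

```python
def reestimate(counts_per_utterance, totals):
    """Accumulate normalized counts over a word's utterances and form
    the re-estimated parameters P(i,j,k) = C(i,j,k) / S(i).

    counts_per_utterance : one dict {(i, j, k): count(i,j,k,n)} per u_n
    totals               : the total path probabilities T_n
    """
    C, S = {}, {}
    for counts, T in zip(counts_per_utterance, totals):
        for (i, j, k), c in counts.items():
            C[(i, j, k)] = C.get((i, j, k), 0.0) + c / T
            S[i] = S.get(i, 0.0) + c / T
    return {(i, j, k): C[(i, j, k)] / S[i] for (i, j, k) in C}

# One utterance: state 0 looped twice emitting 'a' and advanced once
# emitting 'b', so P(0,0,'a') should come out as 2/3.
params = reestimate([{(0, 0, "a"): 2.0, (0, 1, "b"): 1.0}], [1.0])
print(params[(0, 0, "a")])  # 0.6666666666666666
```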
The frequency which has been used for the last round of the estimation in step 18, S[0](i), is also stored. This frequency S[0](i) is to be used for the adaptation, which will be hereinafter described. The operation of the adaptation block 9 will next be described referring to FIG. 8. In FIG. 8 parts having counterparts in FIG. 4 are given the corresponding reference numbers, and a detailed description thereof will not be given. In FIG. 8 adaptation data is inputted (step 14A) by a speaker whose speech is to be recognized later. For purposes of adaptation, the speaker utters each word once. After this, the operations shown in steps 15A through 18A are performed in the same way as in the case of the above-mentioned training, yielding the adaptation frequencies C[1](i,j,k) and S[1](i). Then the two frequencies which are to be used for the estimation are obtained by interpolation, and the new parameter P[1](i,j,k) is obtained as follows, step 21:

P[1](i,j,k) = (λ · C[0](i,j,k) + (1-λ) · C[1](i,j,k)) / (λ · S[0](i) + (1-λ) · S[1](i)), wherein 0 ≤ λ ≤ 1

In this example the adaptation process is performed only once, though the process may be repeated. It is observed that C[0](i,j,k) is equal to P[0](i,j,k) · S[0](i), so the following expression is used for the estimation of P[1](i,j,k):

P[1](i,j,k) = (λ · P[0](i,j,k) · S[0](i) + (1-λ) · C[1](i,j,k)) / (λ · S[0](i) + (1-λ) · S[1](i))

The "a" of count(i,j,k,a) in FIG. 8 indicates that this frequency corresponds to the label string for the adaptation data. The P[1] values are now available for storage in the parameter table for use in recognizing the speaker's subsequent utterances. After performing the steps mentioned above the adaptation is completed. From now on the speech of the speaker for whom the adaptation has been done is better recognized. According to this embodiment the system can be adapted to a different circumstance with little data and a short training time. Additional optimization of the system can be achieved by adjusting the interior division ratio, λ, according to the quality of the adaptation data, such as its reliability.
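Step 21 can be sketched as below. Which side of the interior division λ weights is a convention; in this sketch it weights the stored initial-training statistics, and all names and table layouts are assumptions, not the patent's implementation. Note how storing S[0](i) lets the sketch reconstruct C[0](i,j,k) as P[0]·S[0] instead of keeping a second full table.

```python
# Sketch of the interpolated estimate:
# P1(i,j,k) = (lam*P0(i,j,k)*S0(i) + (1-lam)*C1(i,j,k))
#           / (lam*S0(i)          + (1-lam)*S1(i))
# using C0(i,j,k) = P0(i,j,k) * S0(i), so C0 need not be stored.

def adapt(P0, S0, C1, S1, lam):
    P1 = {}
    for (i, j, k), p in P0.items():
        c1 = C1.get((i, j, k), 0.0)
        num = lam * p * S0[i] + (1.0 - lam) * c1
        den = lam * S0[i] + (1.0 - lam) * S1.get(i, 0.0)
        P1[(i, j, k)] = num / den
    return P1
```

With lam=1 the stored training parameters are returned unchanged; with lam=0 the result is the pure adaptation estimate C1/S1.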
Assuming that the numbers of Markov models, branches, and labels are respectively X, Y, and Z, the amount of data added by storing S[0](i) is on the order of X. On the other hand, the amount of data occupied by P[0](i,j,k) is on the order of XYZ. Therefore the amount of data added by this adaptation is very small. This embodiment also has the advantage that a part of the software or hardware can be made common to the adaptation process and the initial training process, because both processes have many identical steps. Furthermore, adaptation can be repeated for a word that is wrongly recognized, because adaptation is performed on a word-wise basis. Needless to say, adaptation for a word might not be performed until the word is wrongly recognized. A modification of the above-mentioned embodiment will next be described. Adaptation can be performed well through this modification when the quality of the adaptation data is quite different from that of the initial training data. FIG. 9 shows the adaptation process of this modified example. In FIG. 9, parts having counterparts in FIG. 8 are given the corresponding reference numbers, and a detailed description thereof will not be given. In the modified example shown in FIG. 9, an interpolation using P[0](i,j,k) is performed as follows, step 22, before the new frequencies C[1](i,j,k) and S[1](i) are obtained from the adaptation data:

P[0]'(i,j,k) = (1-μ) · P[0](i,j,k) + μ · e

The value obtained by interpolating the parameter P[0](i,j,k) with a small value e and an interior division ratio μ is utilized as the new starting parameter. In the training process during the adaptation process, how well the parameters converge to their actual values also depends heavily on the initial values. Some paths which occurred rarely in the initial training data may occur frequently in the adaptation data. In this case adding a small number e to the parameter P[0](i,j,k) provides better convergence.
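The pre-adaptation smoothing of step 22 is a one-line transform; a sketch follows, where the particular μ and e values are arbitrary illustrations, not the patent's choices:

```python
# Step-22 style smoothing: pull every stored parameter slightly toward a
# small floor e, so transitions rarely seen in training keep some
# probability mass and can still be re-learned from adaptation data.

def smooth(P0, mu, e):
    return {key: (1.0 - mu) * p + mu * e for key, p in P0.items()}

P0 = {('i', 'j', 'k'): 0.0, ('i', 'j', 'm'): 1.0}
P0_prime = smooth(P0, mu=0.1, e=1e-3)
```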
As described hereinbefore, according to the present invention, adaptation of a speech recognition system can be done with a small amount of additional data and in a short time. The required storage capacity, the increase in the number of program steps, and the required hardware components are likewise very small. Additionally, optimization of the system can be achieved by adjusting the interior division ratio according to the quality of the adaptation data.
Paul Linsay's Poisson Fit

Paul Linsay contributes the following: Using Landsea's data from here, plus counts of 15 and 5 hurricanes in 2005 and 2006 respectively, I plotted the yearly North Atlantic hurricane counts from 1945 to 2004 and added error bars equal to $\pm \sqrt{count}$, as is appropriate for counting statistics. The result is in Figure 1.

Figure 1. Annual hurricane counts with statistical errors indicated by the red bars. The dashed line is the average number of hurricanes per year, 6.1.

There is no obvious long-term trend anywhere in the plot. There is enough noise that a lot of very different curves could be fit well to this data, especially data as noisy as the SST data. I next histogrammed the counts and overlaid the histogram with a Poisson distribution computed with an average of 6.1 hurricanes per year. The Poisson distribution was multiplied by 63, the number of data points, so that its area would match the area of the histogrammed data. The results are shown in Figure 2. The Poisson distribution is an excellent match to the hurricane distribution given the very small number of data points available. I should also point out that I did no fitting to get this result.

Figure 2. Histogram of the annual hurricane counts (red line) overlaid with a Poisson distribution (blue line) with an average of 6.1 hurricanes per year.

I conclude from these two plots that:

1. The annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process with a mean of 6.1 hurricanes per year. The trends and groupings seen in Figure 1 are due to random fluctuations and nothing more.

2. The trend in Judith Curry's plot at the top of this thread is a spurious result of the 11-year moving average, an edge effect, and some random upward (barely one standard deviation) fluctuations following 1998.

188 Comments

1.
Solow and Moore 2000, cited by Roger Pielke, also fitted a Poisson model to hurricane data and concluded that there was no trend in the hurricane data that had then accrued. The 2005 hurricane count does appear loud to me in Poisson terms, but then so would 1933 and 1886, so the process may be a bit long-tailed.

2. Steve, nice job! I question the meaning of "error bars" in the first figure. Ignoring the undercount issues, we know the values exactly. The second graph makes the point that the data could have come from a Poisson population. Steve: TAC, you mean, "nice job, Paul"

3. "added error bars equal to +- sqrt(count) as is appropriate for counting statistics" Not very familiar with this, any reference for a layman?

4. #3 — The error bars in Figure 1 assume a completely random process. That would be the null assumption (no deterministic drivers). The Poisson plot shows that the system has a driver, but is a random process within the bounds determined by the driver. It's a lovely result, Paul, congratulations. You must have laughed with delight when you saw that correlation spontaneously emerge, and with no fitting at all. That feeling is the true reward of doing science. I expect Steve M. has experienced that, too, now. In a strategic sense, your result, Paul, shows that a large fraction of the population of climatologists have a pre-determined mental paradigm, namely AGW, and are looking for trends confirming that paradigm. They have gravitated toward analyses — an 11-year smoothing that produces autocorrelation, for example — that produce likely trends in the data. These are getting published by editors who also accept the paradigm and so accept unquestioningly as correct the analyses that support it. Ralph Cicerone's recent shameful accommodation of Hansen's splice at PNAS is an especially obvious example of that.
These are otherwise good scientists who have decided they know the answer without actually (objectively) knowing, and end up enforcing only their personal views. Honestly, your result deserves a letter to the same journal where Emanuel published his trendy (in both senses) hurricane analysis. Why not write it up? It's clearly going to take outside analysts to bring analytical modesty back to the field. Being shown wrong is one thing in science. Being shown foolishly wrong is quite another. Actually, now that I think about it, does the pre-1945 count produce a Poisson distribution with a different median? If so, you could show that, and then include Margo's correction of the pre-1945 count, add the corrected count to your data set and see if the Poisson relationship extends over the whole set. Co-publish with Margo. It will set the whole field on its ear. :-) Plus, you'll have a really great time.

5. #4 "Emanuel" — that should have been Holland and Webster (Phil. Trans. Roy. Soc. A) — but then, your result deserves a wider readership than that.

6. Nice job, Paul! re #4: "Actually, now that I think about it, does the pre-1945 count produce a Poisson distribution with a different median? If so, you could show that, and then include Margo's correction of the pre-1945 count, add the corrected count to your data set and see if the Poisson relationship extends over the whole set. Co-publish with Margo. It will set the whole field on its ear. :-) Plus, you'll have a really great time." For that, one could use, e.g., the test I referred to here. Since people seem to have both interest and time (unfortunately I'm lacking both right now), just a small hint ;) : I think the SST data was available somewhere here. Additionally, R users look here: and Matlab users here:

7. Too funny! I was just reading Jean S's suggestion on the other thread. My precise thought was: I should correct my analysis for Poisson distributions! :)

8. Paul, ha, like yesterday you beat me to it again.
I plotted the data from Judith Curry's link from yesterday, and it is not quite such a good fit to a Poisson distribution as the graph you show above, but pretty good. To look at the correlation to a Poisson distribution, I plot the fractional probability from the hurricanes per year (the number of years out of 63 that have a particular number of hurricanes in them, divided by the total 63 years) against the probability for that particular number of hurricanes from a Poisson distribution with that mean value. A perfect correlation would then be a straight line of gradient 1. An analysis of the error bars on that graph shows a good correlation to a Poisson distribution with R2 ~ 0.7, but the error bars are so large (a consequence of the small numbers) that it is very difficult to rule anything in or out.

#2 TAC, the words 'error bars' here are not 'errors' in the sense of measurement errors. The assumption here is that the number of hurricanes is the actual true number, with no undercounting or any other artifacts distorting the data (as if you had god-like powers and could count every one infallibly). But if you have a stochastic process (something that is generated randomly), then just because you have 5 hurricanes in one year, even if in the next year all the physical parameters are exactly the same, you may get 3, or 8. Observed over many years you will get a spread of numbers of hurricanes per year with a certain standard deviation. That perfectly natural spread is what has been referred to as the 'error bar'. It is a property of a Poisson distribution that if you have N discrete counts of something, then the standard deviation of the distribution is equal to the square root of N. Try Poisson distribution on Wikipedia. As N becomes larger and larger, the asymmetry reduces and it looks much more like a normal distribution.
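The sqrt(N) property just stated, and the scaling Paul used for Figure 2, can both be checked directly from the Poisson pmf. A minimal sketch (the 6.1 mean and the 63-year scaling are taken from the post at the top; everything else is illustrative):

```python
import math

# Poisson pmf: P(k) = exp(-lam) * lam^k / k!
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, n_years = 6.1, 63
ks = range(60)  # the tail beyond 60 carries negligible mass
mean = sum(k * poisson_pmf(k, lam) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)
# mean == lam and var == lam, so the one-sigma spread is sqrt(lam) ~ 2.5

# Expected bin heights for a Figure-2 style overlay: pmf scaled by the
# 63 data points.
expected = [n_years * poisson_pmf(k, lam) for k in range(16)]
```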
But where you have a small number of discrete 'things' (as here, with the small numbers of hurricanes each year), the distribution is asymmetric (because you can't have fewer than 0 storms per year). I would say that the excellent fit to a Poisson distribution shows that hurricanes are essentially randomly produced, and what is more, the small number of hurricanes per year (in a statistical sense) makes determining trends statistical nonsense unless you have vastly more data (vastly more years). As I noted in the other thread yesterday, concluding that hurricanes are randomly produced does NOT preclude a correlation with SST, AGW, the Stock Market, marriages in the Church of England or anything else. If there is a correlation with SST over time, for the sake of argument, what you would see is a Poisson distribution in later years with a higher average than in the earlier years. What always seems to have been omitted in these arguments previously, like in the moving 11-year average by Judith Curry, is that the error bars (natural limitations on confidence) are very large. How certain you can be about whether that average really has gone up or not is almost non-existent on this data. You might suspect a trend, but you cannot, on these numbers, show a significant increase with any level of confidence. By the same token, Landsea's correction is way down in the noise.

9. #3 Steve and Paul, I apologize for mixing up your names. Oops! #4 Re error bars: By convention these are used to indicate uncertainty corresponding to a plotted point. In this case, there is no uncertainty (again, of course, ignoring the fact that there is uncertainty because of the undercount!). Thus the error bars should be omitted from the first figure. Going out on a limb, I venture to say that the error bars were computed to show something else: that each observation is individually "consistent with" the assumption of a Poisson df (perhaps with lambda equal to about 6).
Anyway, it appears that each error bar is computed based only on a single datapoint (N=1). This procedure results in 62 distinct interval estimates of lambda. However, it is not clear at all why we would want these 62 estimates. The null hypothesis is that the data are iid Poisson, so we can safely use all the data to come up with the "best" single estimate of lambda and then test H0 by considering how likely it is that we would have observed the 62 observations if H0 were true (e.g. with a Kolmogorov-Smirnov test). Finally, I agree with you that it is a "lovely result," as you say :-)

10. Argh, sorry TAC, just realised I misread the comment numbers and should have addressed my comments on Poisson statistics to UC in #3, not you.

11. There was a link to hurricane data that was inadvertently dropped. #3. UC, the best I can do for you is "Radiation Detection and Measurement, 2nd ed", G. F. Knoll, John Wiley, 1989. Chapter 3 is a good discussion of counting statistics. Suppose you have a single measurement of N counts and assume a Poisson process. What is the best estimate of the mean? N. What is the best estimate of the variance? N. #4, Pat: "The Poisson plot shows that the system has a driver" How does it do that?

12. The analyses all around here at CA on TC frequencies and intensities have been most revealing to me, but the results have not been all that surprising once I understood that the potential for cherry picking (along with the evidence of how poorly the picking process is understood by those doing it) was significantly greater than I would have initially imagined. What will be more revealing to me will be the reactions to these analyses. The analyses say: be very careful how you use the data. But I fear the inclination is, as Pat Frank indicates, here is what we suspect is true and here is the analysis from these selected data to substantiate it.

13.
#9 TAC (really this time), I have to disagree with you about the 'error bars' on the graph: they shouldn't be omitted, they are vital. There may be no uncertainty in the measured number of storms in a year, but what should be plotted is a 'confidence limit' – error bar is perhaps a loaded term. If you could run the year over and over again then you would get a spread of values; that is the meaning of the uncertainty or 'error bar' plotted in Figure 1.

14. #13 IL, do you agree that the same value of the standard deviation should apply to every observation? If not — and the graph clearly indicates that it does not — could you explain to me why?

15. There is a scientific response to this: "Ack!" The RealClimate response: "*sigh* so how much was Paul Linsay paid through proxies some laundered money from Exxon-Mobil to confuse the public with regard to the scientific consensus on global warming" My response: "Holy crap, we've been trying to predict a random process all of this time. Someone should call Hansen and Trenberth and recommend a ouija board for their next forecast – unless they're already using one"

16. #14, no, because the count in each year is an independent measurement and the years are not necessarily measuring the same thing. It is conceivable, for example, that a later year has been affected by a positive correlation with SST (or any other mechanism), so the two periods would not have the same average. If you have measured a count of N then the variance of that in Poisson statistics is N. If you hypothesize that a number of years are all the same, so that you add all the counts together, then you have a larger value of N, but the fractional uncertainty is lower, since it is the standard deviation divided by the value N; because the standard deviation is the square root of N, the fractional uncertainty is (root N)/N = 1/(root N), and thus as N increases, the relative uncertainty decreases.
If you treat each year as separate then I believe that what Paul has done in Figure 1 is correct.

17. John A – just because it is a random process does not mean that there cannot be a correlation with SST, AGW, the Stock Market, marriages in the Church of England or anything else. It's quite reasonable to look to see if there is a correlation with time, SST or whatever. I have no problems with that. What I do think, though, is that the statistics of these small numbers make the uncertainties huge, so that the amount of data you need to be able to confidently say that there is a real trend is far larger than is available. You would need many years' more data to reduce the uncertainties – or, if there was a real underlying correlation with SST or whatever, it would have to be much more pronounced to stick up above the natural scatter.

18. IL, "each year is an independent measurement and the years are not necessarily measuring the same thing" Well, under the null hypothesis the "years" are measuring the same thing. Each one is an iid variate from the same population. The variates (not the observations) have the same mean, variance, skew, kurtosis, etc. Honest! I'll try to find a good reference on this and post it.

19. Elsner is using Poisson regression for hurricane prediction: Elsner & Jagger: Prediction models for annual U.S. hurricane counts, Journal of Climate, v19, 2935-2952, June 2006. He has some other interesting-looking publications here: and a (recently updated) blog here:

20. Re 17: IL, thanks for making me laugh. We're now generating our own statistics-based humor on this blog. Re #11: Paul, what happens if you apply the same analysis to the global hurricane data? Now I'm curious. Because if the global data follows the same Poisson distribution then we're looking at an even bigger delusion in climate science than temperatures in tree-rings.

21.
What type of curve would arise with a Poisson constant gradually rising, for instance from alpha=5 in 1950 to alpha=7 in 2000? Would that curve be distinguishable from a Poisson graph with an intermediate alpha, given the coarseness resulting from the fact that N = 63 only?

22. #20 John A – glad I have some positive effect. #18 TAC, no, I don't think so (although I am always aware of my own fallibilities and am willing to be educated). I think I understand the point you are making, but I am not sure it is correct here in the way you imply. There are a lot of examples given in http://en.wikibooks.org/wiki/Statistics:Distributions: one example is going for a walk and finding pennies in the street. Suppose I go for the same walk each day. Many days I find 0 pennies, a few days I find 1, a few days I find 2, etc. I can average the number of pennies per day and come up with a mean value that tells me something about the population of pennies 'out there', and it will follow a Poisson distribution. If I walk for many more days I can be more and more confident of the mean value (assuming the rate of my neighbours losing pennies is constant), but I then cannot say anything about whether there is any trend with time – e.g. are my neighbours being more careless with their small change as time goes by? In order to test whether there is some trend with time I need to look at each individual observation and treat that as the mean value, which is what Paul did in the original graph. Yes, if I assume that there is some constant rate of my neighbours losing pennies, some of which I find, I can look at the total counts and I can then get a standard deviation, but I would then not have 63 data points all with the same 'error bar'; I would have one data point with the 'error bar' in the time axis spanning 63 years.
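The question in #21 above lends itself to a quick simulation. A sketch, where the linear 5-to-7 ramp and the Knuth sampler are illustrative choices, not anything proposed in the thread:

```python
import math
import random

random.seed(0)

def poisson_draw(lam):
    # Knuth's algorithm: count uniform draws until their running product
    # falls below exp(-lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

years = 63
# Rate ramping linearly from 5 to 7, versus a constant intermediate rate 6.
ramp = [poisson_draw(5.0 + 2.0 * t / (years - 1)) for t in range(years)]
flat = [poisson_draw(6.0) for _ in range(years)]
# With only 63 counts, histograms of the two samples are hard to tell
# apart: the ramp widens the distribution only slightly relative to
# Poisson(6), well inside the sampling noise.
```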
Yes, you can look at (for the sake of argument) pre-1945 hurricane numbers and post-1945 hurricanes, get a mean and standard deviation from Poisson statistics, and infer whether there has been any change between those two periods with some sort of confidence limit, but then you only have 2 data points.

23. I would like to announce my official "John A Atlantic Hurricane Prediction" for 2007. After extensive modelling of all of the variables inside of a computer model costing millions of dollars (courtesy of the US taxpayer) and staffed by a team of PhD scientists and computer programmers, I can announce: For 2007, the number of hurricanes forming in the Atlantic will be 6, plus or minus 3.

24. Hey everybody. Speaking of "error bars", what do you actually mean by that? Is it a definite limit of values which is acceptable as long as they stay within some given boundaries, or is it some definite "borderline" whose crossing will cancel the veracity, or whatever you might call it, of your calculations? In Danish I think we call it "margin of error", but I am not sure you mean the same by "error bars", so will somebody please inform me! It makes it easier for me as a layman to follow your discussion on this thread. Thank you.

25. #22 IL, I think I understand the purpose that the "bars" in the first graphic were intended to serve. My concern had to do with whether use of bars for this purpose deviates from convention. I spent some time looking on the web, expecting to find a clear statement on error bars from either Tukey or Tufte. Unfortunately, such a statement does not seem to exist. I did find one statement which can, I think, be interpreted to support your position: "Note that there really isn't a standard meaning for the size of an error bar. Common choices are: 1 $\sigma$ (the range would include about 68% of normal data), 2 $\sigma$ (which is basically the same as 95% limits), and 0.674 $\sigma$ (which would include 50% of normal data).
The above may be a population standard deviation or a standard deviation of the mean. Because of this lack of standard practice it is critical for the text or figure caption to report the meaning of the error bar. (In my above example, I mean the error bar to be 1 $\sigma$ for the" However, this appears in a discussion of plotting large samples, and it seems likely that the word "population" was intended to refer to the sample, not the fitted distribution based on a sample of size N=1. Where does that leave things? Well, I continue to believe that we should reserve error bars for the purpose of displaying uncertainty in data. For the second purpose, to show how well a dataset conforms to a specific population, there are lots of good graphical methods (I usually use side-by-side boxplots, admittedly non-standard but easily interpreted; I've also seen lots of K-S plots, overlain histograms, etc.). However, returning to error bars for the moment, perhaps the important point is already stated in the quote above: "Because of this lack of standard practice it is critical for the text or figure caption to report the meaning of the error bar."

26. #11, thanks for the reference, I'll try to find it. Like I said, I'm not very familiar with counting processes. However, let's still write some thoughts down: "Suppose you have a single measurement of N counts and assume a Poisson process. What is the best estimate of the mean? N. What is the best estimate of the variance? N." Yes, the mean of observations from a Poisson process is an MVB estimator of the intensity (lambda, tau=1), and the variance of this estimator is lambda/n, where n is the sample size. And I guess that the mean is the best estimate for the process variance as well. But I think you assume that each year we have a different process, which confuses me. How about thinking of the whole set (n=60 or something) as realizations of one Poisson process, and testing whether it is a good model (i.e.
Poisson process, constant lambda, estimate of lambda is 6.1). Plot this constant mean, add 95% and 99% bars using the Poisson distribution, and plot the data in the same figure.

27. If Figures 1 and 2 were presented the other way around, the meaning of the "error bars" in Figure 1 could be presented more logically. From Figure 2 one can deduce that the data are from a Poisson distribution. Each annual count then is an estimate of the Poisson mean, with the one-sigma confidence values on that mean as shown. The time series then can be examined to see if there is evidence of a change in the mean of the distribution.

28. There are three types of error here. Sampling error: early samples were taken from land and shipping lanes, leaving large areas unsampled or under-sampled; the size of this error has gone down with time. Methodological error: this includes everything from indirect methods of estimating location, wind speed and pressure, to the accuracy and precision of instruments; this also has gone down with time but is still non-zero. Process error: we assume that even if the "climate" doesn't change from year to year, the number of storms will. Any meaningful "error bars" would have to include (estimates of) the above.

29. "I conclude from these two plots that (2) The trend in Judith Curry's plot at the top of this thread is a spurious result of the 11 year moving average, an edge effect, and some random upward (barely one standard deviation) fluctuations following 1998." I still don't understand why the nice fit in the second plot implies the absence of a trend. Suppose that there was a very clear trend, with 1 hurricane in 1945, 2 hurricanes in 1946, … , and finally 12 hurricanes in 2005 and 15 hurricanes in 2006. Figure 2 would remain completely unaltered. So the fact that the Poisson distribution fits the histogram seems irrelevant as far as the existence of a trend is concerned.
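#29's point, that a histogram carries no information about time order, can be shown in a couple of lines. A sketch (the trending sequence is the one #29 describes, extended with the 15-count year):

```python
from collections import Counter
import random

# A strongly trending sequence and a random shuffle of it have identical
# histograms, so no histogram-based (distribution-only) test can detect
# a trend.
trend = list(range(1, 13)) + [15]   # 1, 2, ..., 12, then 15
shuffled = trend[:]
random.shuffle(shuffled)
hist_trend, hist_shuffled = Counter(trend), Counter(shuffled)
```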
It is possible to obtain a Poisson distribution with one global rate just by adding smaller distributions with different rates.

30. #26 UC: I completely agree with what you've written.

31. #29, true, the histogram doesn't care about the order. If Google didn't lie, the 0.01, 0.05, 0.95 and 0.99 quantiles for Poisson(6) are 1, 2, 10 and 12, respectively. So, to me, the only problem with a Poisson(6) model in this case is the 10 consecutive below-average years in the 70s. "So the fact that the Poisson distribution fits the histogram seems irrelevant as far as the existence of a trend is concerned" Just checked, the term 'trend' is not in Kendall's ATS subject index. What are we actually looking for? IMO we should look for possible changes in the intensity parameter.

32. #29 James, your point is well taken. You could have a strong trend and still obey a Poisson distribution. However, it appears that is not the case here; there is no trend in the data. Incidentally, landfalling hurricanes were considered (here), and it seemed that the data were almost too consistent with a simple Poisson process. It made me wonder what was going on.

33. TAC, re #9: search on "ergodicity" at CA (or "count 'em, five"). This was the subject of argument between myself and "tarbaby" Bloom. The counts are known with high (but not 100%) accuracy, but counts are not the issue; it's the behavior of the climate system that's the issue, and your desire to make an inference *among* years. If the climate system were to replay itself over 1950-2006, you'd get a different suite of counts. That's the sense in which "error" is meaningful for a yearly count. This is going to sound fanciful to anyone who has not analysed time-series data from a stochastic system. However, it is epistemologically and inferentially correct.

34. #32 What test have you used to establish that there is no trend? A GAM fitted to these data, with Poisson variance, finds significant changes with time (p=0.027).
This is only an approximate test, but a second-order GLM, again with Poisson variance, is also significant (p=0.039).

35. #23 John A: "After extensive modelling of all of the variables inside of a computer model costing millions of dollars (courtesy of the US taxpayer) and staffed by a team of PhD scientists and computer programmers, I can announce: For 2007, the number of hurricanes forming in the Atlantic will be 6 plus or minus 3" I'm very disappointed that, as a fellow UK taxpayer, you do not appreciate the fact that in order to justify the significant sums of money we spend on funding this vital (to saving the planet) climate research, your supercomputer can only calculate to one significant figure. As a concerned UK taxpayer I have taken the liberty to once more fire up my retired backofthefagpacket supercomputer (which was retired from AERE Harwell some years ago after it was no longer required to solve the Navier-Stokes equations), and based on the results it has output, my prediction (endorsed by the NERC due to its high degree of precision) is 6.234245638939393 (+/- n/a, as this calculation has been performed by a supercomputer that can calculate pi to at least 22514 decimal places, as memorised by Daniel Tammet). As a UK taxpayer I feel that it is important that such calculations must be highly precise and certainly not subject to any uncertainty. As a Church of England vicar I also appreciate that my mortality has already been determined (something which sadly people like Yule did not understand). I do confess, however, to be puzzled as to why inflation appears to have remained relatively constant and low since 1997, yet as a result of AGW it is now much rainier in the UK?

36. #33 Bender, I'm not sure I understand your point. FWIW, I have a bit of familiarity with time series. However, the question here has to do with the graphical display of information, and specifically the use of error bars.
At the risk of repeating myself, where the plotted points are known without error, by convention (i.e., what I was taught, but it does seem to be accepted by the overwhelming majority of practitioners) one does not employ error bars. Of course I understand your point about ergodicity. I agree there is a perfectly appropriate question about how the observations correspond to the hypothesized stochastic process, and clearly the variance of the process plays a role. As I think we both know, there are plenty of graphical techniques for communicating this information, some of which are mentioned above. But I do not see how this has anything to do with how one plots original data. It is ironic that this debate about proper graphics is occurring in the context of a debate about uncertainty in hurricane count data. For example, I thought Willis (here) presented an elegant way to display the uncertainty of the hurricane count data using both error bars and semicircles. That’s what error bars are for: to communicate the uncertainty in the data (which could be measured values, model results, or whatever). Climate scientists need to get used to thinking this way, and, as with other statistical activities, it is important to employ consistent and defensible methods. In a nutshell, plotting the 2005 hurricane count as 15 +/- 3.8 suggests that there might have been 18 hurricanes in 2005. That’s simply wrong. Said differently, the probability of an 18 in 2005 is zero; the number was 15. That number will never change (unless…). Data are data, data come first, and the properties of the data, including uncertainty, do not depend on the characteristics of some subsequently hypothesized stochastic process (at least in the classical world, where I spend most of my time). Finally, to be clear: I am raising an issue of graphical presentation. If the graphics were done differently — UC had it right in #26 — there would not be a problem. 
The problem with Figure 1 is that it overloads “error bars” in a way that’s bound to cause confusion. That’s my $0.02.

37. #36. TAC, that makes sense to me as well.

38. #34 RichardT, I may have made a mistake in keying the data, but here are my results showing no significant trend:

% Year
[1] 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963
[21] 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983
[41] 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003
[61] 2004 2005 2006
% Hc
[1] 7 5 3 5 6 7 11 8 6 6 8 9 4 3 7 7 4 8 3 7 6 4 7 6 4 12 5 6 3 4 4 6 6
[34] 5 5 5 9 7 2 3 5 7 4 3 5 7 8 4 4 4 3 11 9 3 10 8 8 9 4 7 9 15 5
% cor.test(Year,Hc,method="pearson")

Pearson's product-moment correlation

data: Year and Hc
t = 0.9513, df = 61, p-value = 0.3452
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.1307706 0.3579536
sample estimates:

% cor.test(Year,Hc,method="kendall")

Kendall's rank correlation tau

data: Year and Hc
z = 0.3654, p-value = 0.7148
alternative hypothesis: true tau is not equal to 0
sample estimates:

% cor.test(Year,Hc,method="spearman")

Spearman's rank correlation rho

data: Year and Hc
S = 38969.32, p-value = 0.6145
alternative hypothesis: true rho is not equal to 0
sample estimates:

Warning message:
Cannot compute exact p-values with ties in: cor.test.default(Year, Hc, method = "spearman")

39. #36 & 37 What are the errors for the time series of hurricane counts? Suppose you could re-run 1945-2006 over and over like in Groundhog Day. The number of hurricanes observed each year would change from repetition to repetition. You could then take averages and get a good measurement of the mean and variance of the number of hurricanes in each year. But you can’t.
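For readers without R, the Pearson test in comment 38 can be checked with nothing but a standard library. The sketch below (Python, with the counts keyed from the listing above) reproduces the headline numbers; the nonparametric tests would need more machinery, so only the Pearson statistic is shown.

```python
import math

# Pearson correlation of hurricane counts vs. year, stdlib only,
# using the 1944-2006 data keyed in comment 38.
years = list(range(1944, 2007))          # 63 seasons
counts = [7,5,3,5,6,7,11,8,6,6,8,9,4,3,7,7,4,8,3,7,6,4,7,6,4,12,5,6,3,4,4,6,6,
          5,5,5,9,7,2,3,5,7,4,3,5,7,8,4,4,4,3,11,9,3,10,8,8,9,4,7,9,15,5]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson(years, counts)
t = r * math.sqrt((len(years) - 2) / (1 - r * r))   # t statistic on 61 df
print(round(r, 3), round(t, 3))   # about r=0.12, t=0.95: no significant trend
```

The t value matches the `t = 0.9513` in the R output, well short of significance at the 5% level.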
So the best you can do is assume each year’s count is drawn from a Poisson process with mean and variance equal to the number of observed hurricanes. The same applies to the histogram. The error on the height of each bin is given by Poisson statistics too. If you fit a function to a series of counts these are the errors that should be used in the fit. This has been standard practice in nuclear physics and its descendants for close to 100 years. As an example, here’s a link from a nuclear engineering department. Notice the statement that “Counts should always be reported as A +- a.”

40. Re 36: … where the plotted points are known without error, by convention (i.e., what I was taught, but it does seem to be accepted by the overwhelming majority of practitioners) one does not employ error bars.

The plotted points are not known without error.

Direct measurements of intensity in the form of wind and pressure observations are seldom available. The eye and area of maximum winds cover a very small area and are unlikely to affect a station directly, especially for a ship whose captain is intent on avoiding the opportunity to observe the most severe part of the tropical cyclone. Observations from anywhere within the circulation are helpful (see Section 2.4) but alone reveal little about intensity (Holland, 1981c; Weatherford and Gray, 1988). The area of destructive winds can be very concentrated, especially in the case of a rapidly developing tropical cyclone. The most common estimates of intensity are those inferred from satellite imagery using Dvorak analysis. It is also possible to monitor the upper-tropospheric warm anomaly directly using passive microwave observations from satellites. Thermodynamic sounding retrievals from the NOAA Microwave Sounding Unit (MSU) have been statistically related to central pressure reduction and maximum winds for tropical cyclones in the North Atlantic (Velden, 1989) and western North Pacific (Velden et al., 1991).
The technique has not been used operationally but is expected to have errors similar to the Dvorak analysis. It also performs less effectively on rapidly developing tropical cyclones, but does have the advantage that each estimate is independent of earlier analyses. Thus the Velden technique does not suffer from an accumulation of errors that may occur with a Dvorak analysis. (Observing Methods)

As has been discussed here before.

41. #39 Paul, thanks for the link to the interesting document. You have to read it pretty carefully, and it deals with a slightly different problem (estimating the true mean, a parameter) but it does shed a bit of light on the topic. One particular thing to note: It specifies estimating sigma as the square root of the true mean, m, not the count, n. Thus, following the document’s prescription, all of the error bars in figure 1 should be exactly the same length, as implied by UC (#26) and also in #14. However, if you try to apply this to the hurricane dataset — for a small value of lambda where some of the observed counts are less than the standard deviation — you’ll find it doesn’t work very well (some of your observations will have error bars that go negative, for example). Does this clear anything up or just create more confusion? Let me try a different approach: In 2005, we agree there were n=15 hurricanes. We also agree that the expected count (assuming iid Poisson) was approximately m=lambda=6.1, and therefore the standard deviation of the expected count is approximately 2.5. So here’s the question: Is the n=15 a datapoint or an estimator for lambda? I expect your answer will be “both” — but I’m not sure. Anyway, I’d be interested in your response.

42. #40 James, I concede the point. However, in the interest of resolving the issue about error bars, can we, just for the moment, pretend that things are measured without error? Thanks! ;-)

43. #41. A question – hurricanes are defined here as counts with a wind speed GT 65 knots.
There’s nothing physical about 65 knots. Counts based on other cutoffs have different distributions – for example, cat4 hurricanes don’t have a Poisson distribution, but more like a negative exponential or some other tail distribution. Hurricanes are a subset of cyclones, which in turn are a subset of something else. If hurricanes have a Poisson distribution, can cyclones also have a Poisson distribution? I would have thought that if cyclones had a Poisson distribution, then hurricanes would have a tail distribution. Or if hurricanes have a Poisson distribution, then what distribution would cyclones have? Just wondering – there’s probably a very simple answer.

44. re: #43 …if hurricanes have a Poisson distribution, then what distribution would cyclones have? Just wondering – there’s probably a very simple answer.

I think it’d be similar to a pass-fail class. You might set an arbitrary value for pass on each individual test during the semester and then give a number of tests, some harder and some easier, randomly. You’d have a Poisson distribution, possibly. But you might also, on adding up the various test results for each individual in the class, decide you want to set things for 85% of students passing, and this could be set at whatever value gives you that ratio. This could be regarded as a physical result while the arbitrary value for passing an individual test would not be. As for cyclones, they’d be everything which passes the “cyclone” test. There’d be a finite number of such cyclones each season, so they should distribute just like hurricanes do. Just with larger numbers. I think someone here was saying that as the numbers get larger, the curve gets more like a normal distribution, which makes sense, so I’d expect the distribution of cyclones to be more normal than the distribution of hurricanes.
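There is in fact a simple answer to #43's question: the thinning property of the Poisson distribution. If seasonal cyclone counts are Poisson(lambda) and each cyclone independently exceeds the 65-knot cutoff with probability p, the resulting hurricane counts are exactly Poisson(p*lambda), so both the subset and the superset can be Poisson. A hedged stdlib simulation (the rates are chosen only for illustration, not taken from any record):

```python
import math, random, statistics

def poisson_sample(lam, rng):
    # Knuth's multiplicative method, fine for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
lam_cyclones, p_hurricane = 10.0, 0.6   # illustrative numbers only
seasons = []
for _ in range(20000):
    n_cyclones = poisson_sample(lam_cyclones, rng)
    # each cyclone independently passes the 65-knot cutoff with prob p
    seasons.append(sum(rng.random() < p_hurricane for _ in range(n_cyclones)))

m = statistics.fmean(seasons)
v = statistics.pvariance(seasons)
print(round(m, 2), round(v, 2))   # both near p*lambda = 6.0: still Poisson-like
```

The equality of simulated mean and variance is the Poisson signature; a hard deterministic cutoff applied to *sorted* intensities would behave differently, which is presumably why cat4 counts look like a tail distribution instead.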
This being the case, it would seem that the distribution of a tail is simply a Poisson distribution with the pool of “passing” candidates starting where the tail was chopped off.

45. Re #42 Wait a sec, TAC. Don’t concede too much here. Measurement error and sampling error are different things. This issue is all about sampling error. [Linsay's #39 clarifies my ergodicity argument.] The hypothesis we want to test is whether the observed counts are likely to be drawn from a random Poisson process (or truncated Poisson, whatever) with fixed mean = fixed variance. Each year is a random sample from a stochastic process. Alternative hypothesis: there is a trend in the mean. Paul Linsay has plotted the variance around each year’s observation as though that observation *were* that year’s mean. That’s wrong, and that’s why your complaint about the difference count-variance dropping below zero for low counts is valid. The reason it’s wrong is that the null hypothesis is that there is only one fixed mean. If any obs fall outside the interval, such as 2005, there’s a chance we’re wrong. If, further, the proportion of observations falling outside the 95% confidence interval increases with time, then there’s a trend and the mean is not fixed. All this assumes that the variance is constant with time. But there is no reason this must be true. If the process is nonstationary, it is not ergodic. Then inferences about trends start to get dicey. That’s where the statistical approach breaks down and the physical models start to play a role.

46. re: 44 cyclones … and by extension, tornados exhibit a Poisson distribution? In an AGW theory of everything, shouldn’t F3-F5s be increasing in frequency? Or is it because they do not tap directly into the catastrophic global increase of SSTs?

47. Paul, a very interesting post.
I disagree, however, when you say:

(1) The annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process with a mean of 6.1 hurricanes per year. The trends and groupings seen in Figure 1 are due to random fluctuations and nothing more.

When I looked at the distribution, it reminded me of a kind of distribution I have seen before in sea surface temperatures, which has two peaks instead of one. I’ve been investigating the properties of this kind of distribution, which seems to be a combination of a classical distribution (e.g. Poisson, Gaussian) with what (because I don’t know the name for it) I call a “sinusoidal distribution.” (This is one of the joys of not knowing a whole lot about a subject, that I can discover things. They’ve probably been discovered before … but I can come at them without preconception, and get the joy of discovery.) A sinusoidal distribution arises when there is a sinusoidal component in a stochastic process. The underlying sinusoidal distribution is the distribution of the y-values (equal to sin(x)) in a cycle. The distribution is given by d(arcsin(x))/dx. This is equal to 1/sqrt(1-x^2), where x varies from -1 to 1. The distribution looks like this: As you can see, the sine wave spends most of its time at the extremes, and very little time in the middle of the distribution. Note the “U” shape of the distribution. I looked at combinations of the sinusoidal with the Poisson distribution. Here is a typical example: Next, here is the distribution of the detrended cyclone count, 1851-2006. Note the “U-shape” of the peak of the histogram. Also, note that the theoretical Poisson curve is to the left of the sides of the actual data. This is another sign that we are dealing with a sinusoidal Poisson distribution, and is visible in Figure 2 at the head of this thread. One of the curiosities of the sinusoidal distribution is that the width (between the peaks) is approximately equal to the amplitude of the sine wave.
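Willis's 1/sqrt(1-x^2) density is easy to reproduce by simulation: take the sine of a uniformly distributed phase and histogram the results. A Python sketch (the sample size and bin count are arbitrary choices):

```python
import math, random

# Distribution of y = sin(theta) for a uniformly distributed phase theta.
# The density is proportional to 1/sqrt(1 - y^2): mass piles up near y = ±1.
rng = random.Random(0)
samples = [math.sin(rng.uniform(0, 2 * math.pi)) for _ in range(100000)]

bins = [0] * 10                        # 10 equal-width bins over [-1, 1]
for y in samples:
    bins[min(int((y + 1) / 0.2), 9)] += 1
print(bins[0], bins[4], bins[9])       # edge bins dwarf the middle: the "U" shape
```

Analytically the edge bins [-1, -0.8] and [0.8, 1] each carry about 20% of the mass while a central bin carries about 6%, which is the "spends most of its time at the extremes" behaviour described above.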
From the figure above, we can see that we are looking for an underlying sinusoidal component with an amplitude of ~ 4 cyclones peak to peak. A periodicity analysis of the detrended cyclone data indicates a strong peak at about 60 years. A fit for a sinusoidal wave shows the following: Clearly, before doing any kind of statistical analysis of the cyclone data, it is first necessary to remove the major sinusoidal signal. Once the main sinusoidal signal is removed, the reduced dataset looks like this: As you can see, the fit to a Poisson distribution is much better once we remove the underlying sinusoidal signal. As Jean S. pointed out somewhere, before we set about any statistical analysis, it is crucial to first determine the underlying distribution. In this case (and perhaps in many others in climate science), the existence of an underlying sinusoidal cycle can distort the underlying distribution in a significant way. While it might be possible to determine the statistical features (mean, standard deviation, etc.) for a particular combined distribution, it seems simpler to me to remove the sinusoidal component before doing the analysis. My best to everyone, and special thanks to Steve M. for providing this statistical wonderland wherein we can discuss these matters.

48. TAC plus bender #45. I see we are arguing about 2 different things here, the first perhaps a little more to do with semantics, the second more substantial. In #36 TAC argued that if the counts for 2005 were 15 then (assuming perfect recording capability) that was an exact number and so should have no error bar. He (? sorry, shouldn’t make assumptions) in #36 says that if you put 15+/-3.6 then that implies that the count could have been 18 which is wrong.

I disagree. I think those are confidence limits – estimators if you prefer.
In a physics experiment, if I measure something numerous times with a measurement error, what the error bar tells me is the probability that, if I measure it again, I will get a value within a certain range of that value. In a literal sense we can’t ‘run’ 2005 again, but the graph as plotted by Paul Linsay is meaningful to me as the confidence limits for each count – if we were to have another year with the same physical conditions as 2005, what is the likelihood that we would get 15 storms again? To me the correct answer is 15+/-3.6 (1 sigma). It’s like the example from the physics web page linked by Paul: suppose I measure the number of radioactive decays per minute and I record the decays per minute for an hour. After the hour I have 60 measurements, each of an exact number, but that does not say to me that there are no error bars on those numbers. If I have N counts in the first minute then the standard deviation (the expectation of what I might get in the second minute) is root N. If I were to plot all of those 60 measurements against time I would plot each count value with its own individual confidence limit of root N (N being the particular count in that particular minute). Yes, if I take all the counts for the whole 60 minutes I can get a mean value for the hour. The confidence limit on that mean value will be the root of all the counts in the hour (approximately 60N, and the standard deviation will be root 60N). But that is not the confidence limit on an individual measurement. I could plot that mean value with its confidence limit root 60N but I would then only have a single point on the graph. Paul says he comes from a physics background; so do I, and maybe this is where we are differing with TAC, Bender etc (maybe, again don’t want to make assumptions). In #45 Bender says that it is wrong to plot each point with the variance given by the mean of that point. Sorry Bender, I disagree with you, I believe it is correct.
I am looking at it from the perspective of the radioactive decay counting experiment described above. You can reduce the uncertainty by summing together different years (although your uncertainty only decreases by the root of the number of years that you sum) but then you have reduced the number of data points that you have! If you have the 63 years’ worth of individual year measurements then each individual one has a standard deviation which should be plotted; they are not all the same. If you want to address hypotheses such as ‘are the counts changing with time’ then you have to address the uncertainty in each data point if you retain all 63 data points and look for a trend that is significant above that noise level. Yes, you can reduce the uncertainty by summing years but then you have a lot less data points on your graph. No, the variance count does NOT drop below zero because this is Poisson statistics; it is asymmetrical and doesn’t go below zero.

49. Let’s see if I clarify or confuse:

What are the errors for the time series of hurricane counts? Suppose you could re-run 1945-2006 over and over like in Groundhog Day. The number of hurricanes observed each year would change from repetition to repetition. You could then take averages and get a good measurement of the mean and variance of the number of hurricanes in each year. But you can’t. So the best you can do is assume each year’s count is drawn from a Poisson process with mean and variance equal to the number of observed hurricanes.

Yes, we are testing H0: the data are samples from a Poisson(lambda) process. We don’t know lambda. We need to estimate it using the data. We want to find a function of the observations (a statistic) that tells us something about this unknown lambda. Now, assuming H0 is true, we have quite a lot of theory behind us telling us that the average of the observations (say $\hat{x}$) is the best estimate of lambda (MVB, for example).
We also know that the variance of this estimator is $\hat{x}/n$, where n is the number of independent observations. You cannot find an unbiased estimator that has smaller variance than this. An estimate obtained this way will be distributed more closely round the true lambda than any other estimator. And the number of observations in this case is 60 (?), so we can see that the sampling variance is small (I think I could say ‘sampling error is very small’ in this context as well). You can try this with simulated Poisson processes: take 60 samples of Poisson(6), take the average, and see how often it is less than 5 or more than 7. If you take only one sample, the sampling error is much larger. This is actually shown in Figure 1.

Completely another business is the testing of H0. Knowing that lambda is close to 6, we can compute [0.01 0.05 0.95 0.99] quantiles from Poisson(6) – if, for example, the 0.99 point is exceeded 10 times in the data, we can suspect that our distribution model is not OK. In addition, there should be no serial correlation in the data if H0 is true. And, I didn’t mention measurement errors in the above. Measurement errors affect both the estimate of lambda and the testing of H0. And finally, if Figure 1’s bars represent sampling error, there is basically nothing wrong with it. Sampling variance is $\hat{x}/n$, n=1. But I think that approach doesn’t give a very powerful test for H0.

50. Re #47 If sinusoidal, why not “multistable”? You see the problem? Detection & attribution. (What is the frequency of your sinusoid? Is there more than one frequency? Or maybe the process is “multistable”? What are the attracting states?) Problem specification is a mess. Stationary random Poisson is just a starting point. The biggest reason arguing against Linsay’s Poisson is the high bias in the 1-2 counts and the low bias in 3-4 counts. Systematic bias usually implies error in model specification.
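UC's suggested experiment, averaging 60 samples of Poisson(6) and checking against [5, 7], takes only a few lines. This Python sketch (seed and trial count are arbitrary) shows how rarely the mean of 60 samples strays that far, since its standard deviation is sqrt(6/60), about 0.32:

```python
import math, random, statistics

def poisson_sample(lam, rng):
    # Knuth's multiplicative method, fine for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(7)
trials, outside = 3000, 0
for _ in range(trials):
    xbar = statistics.fmean(poisson_sample(6, rng) for _ in range(60))
    if xbar < 5 or xbar > 7:
        outside += 1
print(outside, "of", trials)   # only a handful of means fall outside [5, 7]
```

Being outside [5, 7] is a deviation of more than three standard deviations of the mean, so the observed fraction should be a small fraction of a percent, in line with UC's point that the sampling error on lambda is very small with 60 observations but large with one.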
Eschenbach’s cycle-removed Poisson, interestingly, removes this systematic bias.

51. Re #48 In #45 Bender says that it is wrong to plot each point with the variance given by the mean of that point. Sorry Bender, I disagree with you,

Unfortunately, I am correct (in terms of the inferences that are being attempted with these data, which assume that you are observing a single stochastic process). If you suppose that the mean fluctuates from year to year (hence plotting variance around each observation, not the series mean), then you cannot suppose that the variance is fixed. Either you are observing one stochastic process (one mean, one variance), or you are observing more than one (multiple means, multiple variances) which has been stitched together to appear as one. Why would you ever plot a variance around an observation (as opposed to a mean)? What would that tell you?

52. Re #49 Let’s see if I clarify or confuse

You clarify, at least up until the very end:

if Figure 1’s bars represent sampling error, there is basically nothing wrong with it.

What’s right about plotting error around observations? Error is to be plotted on means; individual observations fall inside or outside those error limits.

53. Re #48 In #36 TAC argued that if the counts for 2005 were 15 then (assuming perfect recording capability) that was an exact number and so should have no error bar. He (? sorry, shouldn’t make assumptions) in #36 says that if you put 15+/-3.6 then that implies that the count could have been 18 which is wrong.

It’s not 15 +/- 3.6 that should be plotted, it’s 6.1 +/- 3.6. If the series is stationary then the obs. 15 should be compared to that. In which case 15 is extreme. Eschenbach shows that the mean and variance might not be stationary but sinusoidal. Now the obs 15 must be compared to the potential range calculated during 2005 to determine if it’s extreme. In which case 15 is still extreme.

54. What’s right about plotting error around observations?
Error is to be plotted on means; individual observations fall inside or outside those error limits.

Around the observation. The mean of one sample is the sample itself. Observe only one sample from a Poisson with unknown lambda: the best estimate of lambda is the observation itself, and the sampling variance is the observation itself.

55. #38 The Pearson Correlation coefficient you are using to find trends assumes that the data have a Gaussian distribution. Given the discussion of the Poisson nature of the data on this page, this choice needs justifying. A more appropriate test for a linear trend when the data have Poisson variance is to use a generalised linear model with a Poisson error distribution. I’ve done this, and you are correct, there is no linear trend. The absence of a linear trend does not imply that the mean is constant – there may be a more complex relationship with time. This might be sinusoidal (#47) but an alternative exploratory model is a generalized additive model. A GAM finds significant changes in the mean. (This test is only approximate.) Everybody (except #47) seems to be happy with the statement that the “annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process” without any goodness of fit test. Are they? Doesn’t anyone want to test this assertion?

56. #55 RichardT

The Pearson Correlation coefficient you are using to find trends assumes that the data have a Gaussian distribution. Given the discussion of the Poisson nature of the data on this page, this choice needs justifying.

This point is well taken. The Pearson version, as you correctly note, requires an assumption about normality. Given that we are looking at Poisson (?) data, there is reason to wonder about the robustness of the test. In #38 I also provided results from two nonparametric tests: Kendall’s tau and Spearman’s rho. They do not require a distributional assumption.
Also, because they are relatively powerful tests even when errors are normal, they are attractive alternatives to Pearson. However, while these tests are robust against mis-specification of the distribution, they are not robust against, for example, “red noise”. SteveM has written a lot on this topic; search CA for “red noise” and “trend”. Among other things, “red noise” can lead to a very high type 1 error rate — you find too many trends in trend-free stochastic processes. Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)). I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability.

57. #56 These non-parametric tests, which will cope with the Poisson variance, can still only detect monotonic trends. If the relationship between hurricane counts and year is not monotonic, these tests are unsuitable, and there will be a high Type-2 error. Consider this code. Even though there is an obvious relationship between x and y, the linear correlation test fails to find it. You correctly state that these non-parametric tests are not robust against red noise. If the hurricane counts are autocorrelated, then they are not from an iid Poisson process, and the claim that the mean is constant is incorrect.

58. The code: cor.test(x,y) # p=0.959 for my simulation

59. First, I want to join Willis (#47) in offering “special thanks to Steve M. for providing this statistical wonderland wherein we can discuss these matters.” CA really is an amazing place. I would also call attention to Willis’s elegant and expressive graphics.
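Comment 57's R snippet did not survive in this archive, but the point is easy to reconstruct: any strong but symmetric relationship, a pure quadratic for instance, has essentially zero linear correlation. The example below is a hypothetical stand-in for whatever simulation produced the p=0.959 in comment 58, not the original code.

```python
import math

# A deterministic, perfectly predictable relationship (y = x^2 on a grid
# symmetric about zero) that the linear correlation coefficient cannot see.
x = [i / 50 for i in range(-50, 51)]   # evenly spaced on [-1, 1]
y = [xi * xi for xi in x]              # y is an exact function of x

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)
print(round(r, 6))   # essentially 0 despite perfect dependence
```

By symmetry every cross-product at +x cancels the one at -x, so r vanishes even though y is completely determined by x, which is exactly the high-Type-2-error scenario comment 57 warns about.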
Graphics are not a trivial thing; they are a critical component of statistical analyses (for those who disagree, Edward Tufte (author of “The Visual Display of Quantitative Information,” one of the most beautiful books ever written) presents excellent counter-arguments, including an utterly convincing analysis of the 1986 Challenger disaster — it resulted from bad graphics!). WRT #44 – #56, there have been a lot of comments, almost all of them interesting. I think bender has it right, but the arguments on both sides deserve careful consideration. The reason for the disagreement, as I see it, has to do with ambiguity about rules for graphical presentation, specifically what is “conventional,” and some confusion about what we are trying to represent with the figure. There is also a subtlety here that I am not sure I fully understand myself, but here goes: When you plot parameter estimates, error bars mean one thing (some sort of measure of the distance between the estimated value and the true parameter); when you plot data, they mean another (corresponding to the distance between the observed value and that single realization of the process). These two types of uncertainty are recognized, respectively, as epistemic and aleatory uncertainty (aka parameter uncertainty and natural variability). That’s why I asked the question in #41, “Is the n=15 a datapoint or an estimator for lambda?”, which IL answered in #48: “I think those are confidence limits – estimators if you prefer.” When looking at a graphic, the clue about which type of uncertainty is presented — i.e. what the error bars refer to — is whether or not we are looking at data or estimates. When estimates are based on the mean of samples of size N=1, as is the case here, there is an obvious problem: The viewer may assume that the plotted points are data. However, you could argue, as some have, that these are not data but estimates based on samples of size N=1.
(IMHO, it makes no sense to estimate from samples of size N=1 when you have 63 observations available; but that’s another discussion). Unless care is taken in how the graphic is constructed, the ambiguity is bound to cause confusion.

60. Oops! Please note that in the last paragraph of #59, the “aleatory/natural variability” is incorrectly defined.

61. #59 Now that I look at it, the whole section containing the word “aleatory” is a mess. Best just to ignore it. ;-)

62. #51 Bender, sorry to prolong an argument, but

Unfortunately, I am correct (in terms of the inferences that are being attempted with these data, which assume that you are observing a single stochastic process). If you suppose that the mean fluctuates from year to year (hence plotting variance around each observation, not the series mean), then you cannot suppose that the variance is fixed. Either you are observing one stochastic process (one mean, one variance), or you are observing more than one (multiple means, multiple variances) which has been stitched together to appear as one. Why would you ever plot a variance around an observation (as opposed to a mean)? What would that tell you?

No, I don’t think you are, and, based on your response in #53 as well, I’m not sure you have understood what I was arguing. This is not a case where we have some large population with a defined population mean and standard deviation, where by repeatedly sampling that population we can determine the mean and standard deviation of the sample and from that characterize the whole data set, which is what you seem to be arguing when you say that each data point is representative of that mean and should have that same variance. When you have a few random, independent processes like radioactive decay (and, I submit, the storm counts per year) you do not have the situation that you describe.
Have you ever done an experiment where you have small counting statistics, like a radioactive decay counting experiment for example? The process is described well in that link that Paul Linsay gave, and it additionally describes the standard deviation on each individual data point as root N. Exactly the same situation applies to photons recorded by a photomultiplier, but also to more commonplace situations such as calls to a call centre, admissions to a hospital, etc. Each summed count within a time interval gives a number – the mean for that time interval. If the count is N in that time interval, the variance is N and the standard deviation is root N. Each individual observation – each time period in the radioactive counting experiment or each year in the counting storms ‘experiment’ – is an individual number subject to counting statistics. Where you have small probabilities of things happening but a large number of trials, so that probability x trials = a significant number, you are subject to counting statistics. Then each data point (each year in the storms’ case that started everything off) has its own variance based on the number of storms counted in that year. Here I am making no assumptions about time stationarity or anything else; I am just looking at a sequence of small numbers generated by a process subject to counting statistics. This whole debate started because Paul Linsay put counting statistics confidence limits on the data points in Figure 1. These are different for each point, and I agree with him: you do not have the same confidence limit on each data point because each one is an independent measurement. If you want to compare them – to look for anomalous values or correlations with time or anything else – then we must look at the confidence limits for a particular year if we wish to test whether a particular year is anomalous, or test for changes in the mean if we wish to test whether there is some correlation with time.
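The "is a particular year anomalous?" question raised here has an exact answer under the stationary null discussed earlier in the thread (for example comment 53's claim that 15 is extreme against lambda = 6.1): sum the Poisson probability mass up to 14 and look at what is left. A stdlib sketch:

```python
import math

# Exact upper tail of Poisson(6.1): how likely is a season with >= 15
# hurricanes if the stationary-Poisson null holds?
lam = 6.1
p_le_14 = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(15))
p_tail = 1.0 - p_le_14
print(round(p_tail, 4))   # ≈ 0.0016, i.e. roughly one season in 600
```

So under the fixed-mean null a 15-hurricane season is indeed a roughly 3.6-sigma event, which is why the choice of what the error bars represent matters so much in Figure 1.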
The mean value for all of the 63 years is 6.1 – suppose a particular year records for example 15. You might want to know what the probability is that that year is anomalous, and you would look at the standard deviation of that year, which is root 15, and compare it with other years. If you want to compare with neighbouring single years then you would be comparing with the mean and standard deviation of each of those individual years; if compared with the remaining 62 years then you would sum up all 62 years and derive a mean value and standard deviation which is the root of all the storms in the 62 years.

63. #62 IL: Please take another look at the article that Paul provided and that you cite. It does not say that the standard deviation on each individual data point is root N. Rather — and this is important — it specifies root M, where M is the true process mean (i.e. the expected value of N), which is not the observed value N.

64. Since this has become a forum on measurement error, let’s continue. It’s an interesting topic all by itself. When you make a measurement there are two sources of error: one due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. As an example, consider measuring a current, I. It is subject to a natural fluctuation known as shot noise, with a variance proportional to I. I can build a current meter that has an intrinsic error well below shot noise, so that the error in any measurement is entirely due to shot noise. Now I take one measurement of the current. Its value happens to be I_1, hence the assigned error is +-sqrt(I_1) at the one sigma level. The experimental parameters change and a measurement of the current gives I_2, this time with an assigned error of +- sqrt(I_2). And so on. I never take more than one measurement in each situation but no one would argue with my assignment of measurement error. [Maybe bender would, I don't know!]
Now translate this to the case of the hurricanes. Instrumental error is zero. I can count the number this year perfectly, it's N. If hurricanes are due to a Poisson process the count has an intrinsic variance of N. Hence the assigned hurricane count error is sqrt(N), exactly what I did in Figure 1. #47, Willis: The bin heights of the histograms are subject to Poisson statistics too. Hence the errors are +-sqrt(bin height). You have to show that the fluctuations in the distributions are significantly outside these errors to warrant the sine wave. To paraphrase the old joke about the earth being supported by a turtle: it's Poisson statistics all the way down. 65. #63. Maybe that wasn't the best link to work with, since it discusses Gaussian profiles and says that Poisson statistics are too difficult to work with! It's only the true process mean M when you have a large number of counts, so that a Gaussian is an appropriate statistic to use. It makes the point lower down that when you have a single count, the count becomes the estimate of the mean. I therefore go back to my point that the appropriate confidence limit on a single year's storm counts is root N, where N is the number of counts in that year. If I had thousands of years' worth of data, on your argument I would take the standard deviation of the total number of storms, which would number (say) 10,000, for which the standard deviation would be 100. Are you going to argue that the appropriate error on each individual year is +/-100? Or a fractional error of 100/10000 = 0.01 (times the mean of 6.1 = confidence limit of 0.061 on each individual data point)? The former is clearly nonsense and the latter is wrong, because each year does not 'know' that there are thousands of years' worth of data.
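IL's 10,000-storm arithmetic can be written out explicitly. A small sketch (the totals are the comment's own hypotheticals; 6.1 is the thread's long-run mean):

```python
import math

total = 10_000                 # hypothetical total storm count
years = total / 6.1            # implied record length at 6.1 storms/year
sd_total = math.sqrt(total)    # standard deviation of the *total*: 100
se_mean = sd_total / years     # standard error of the *per-year mean*
print(sd_total, round(se_mean, 3))
```

This reproduces the 0.061 in the comment: it is the uncertainty on the long-run mean, and, as IL argues, neither it nor +/-100 is a sensible error bar for one individual year's count.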
It's like tossing coins: I can toss a coin thousands of times and get a very precise mean value with a precisely determined standard deviation, but if I toss the coin again, that is not appropriate for working out the probability of what is going to happen next time, since the coin only 'knows' what the probability is of it coming up a particular result on the next throw! Ditto if I want to look at an individual year in that sequence: I have to look at the count I have got for that year. Please note, in that example of counting for thousands of years and getting 10,000 storms, I would be confident that I could determine the mean over all those years with a confidence limit of 1%, but the uncertainty in time on that mean value would then span that whole period of thousands of years. If I want to see if there are long term trends in that data I can combine 100 years at a time to reduce the fractional error in each of the mean values for each of those centuries, and compare the mean value for each of those centuries with its standard deviation to test whether there is significant change with time. But then you would plot a single mean value for each century, with the confidence limit on the time axis of one century. 66. #47. Willis, this is rather fun. Off the top of my head, your arc sine graphic reminded me of two situations. First, in Yule's paper on spurious correlation, he has a graphic that looks like your sinusoidal graphic. Second, arc sine distributions occur in an extremely important random walk theorem (Feller). The amount of time that a random walk spends on one side of 0 follows an arc sine distribution. When I googled "arc sine feller", I turned up a climateaudit discussion here that I'd forgotten about: http://www.climateaudit.org/?p=310 . So there might be some way of getting to an arc sine contribution without underlying cyclicity (which I am extremely reluctant to hypothesize in these matters.) 67.
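Steve's Feller reference is easy to demonstrate: the fraction of time a simple symmetric random walk spends above zero follows the arc sine law, piling up near 0 and 1 rather than 1/2. A quick simulation (walk length and replication count are arbitrary choices of mine):

```python
import random

random.seed(2)

def frac_positive(steps, rng=random):
    """Fraction of steps a simple +/-1 random walk spends above zero."""
    s, pos = 0, 0
    for _ in range(steps):
        s += 1 if rng.random() < 0.5 else -1
        if s > 0:
            pos += 1
    return pos / steps

fracs = [frac_positive(200) for _ in range(4000)]
tails = sum(1 for f in fracs if f < 0.1 or f > 0.9) / len(fracs)
middle = sum(1 for f in fracs if 0.45 <= f <= 0.55) / len(fracs)
print(round(tails, 3), round(middle, 3))  # the tails dominate the middle
```

So an arc-sine-looking histogram can arise from a trendless, cycle-free process – exactly the caution Steve raises about hypothesizing cyclicity.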
49,54 Correction: lambda/n is the variance of the estimator. We don't know lambda, but using x̂/n wouldn't be too dangerous, I guess(?). Compare to the normal distribution case (estimating the mean with known variance σ²): x̂ is the MVB estimator of the mean, with variance σ²/n. 68. I find Willis' point (#47) about eliminating the sinusoidal wave interesting from a statistical perspective, but it also speaks to the underlying climate issues. What is the cause and climatic significance of the sinusoidal pattern Willis eliminated? You have a record of 63 years, which in climate terms is virtually nothing. I have long argued that the use of a 30 year 'normal' as a statistical requirement is inappropriate for climate studies and weather forecasting. Current forecasting techniques assume the pattern within the 'official' record is representative of the entire record over hundreds of years and holds for any period, when this is not the case. It is not even the case when you extend the record out beyond 100 years. The input variables and their relative strengths vary over time, so those of influence in one thirty year period are unlikely to be those of another thirty year period. Climate patterns are made up of a vast array of cyclical inputs, from cosmic radiation to geothermal pulses from the magma underlying the crust. In between is the sun as the main source of energy input, with many other cycles from the Milankovitch of 100,000 years to the 11 year (9-13 year variability) Hale sunspot cycle and those within the electromagnetic radiation. We could also include the sun's orbit around the Milky Way and the 250 million year cycle associated with the transit through arms of galactic dust. My point is the 63 year record is a composite of so many cycles, both known and unknown, that to sort them out in even a cursory way is virtually impossible with current knowledge.
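UC's correction can be checked by simulation: for n iid Poisson(lambda) observations the sample mean has variance lambda/n. A sketch using the thread's numbers (lambda = 6.1, n = 63; the sampler and replication count are mine):

```python
import math
import random
import statistics

random.seed(3)

def poisson(lam, rng=random):
    """Draw one Poisson(lam) variate (Knuth's method)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

lam, n = 6.1, 63
means = [statistics.fmean(poisson(lam) for _ in range(n)) for _ in range(5000)]
print(round(statistics.pvariance(means), 4), round(lam / n, 4))  # both ~0.097
```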
Is the 63 year period part of a larger upward or downward cycle, which in turn is part of an even larger upward or downward cycle? Now throw in singular events such as phreatic volcanic eruptions, which can measurably affect global temperatures for up to 10 years, and you have a detection-of-overlapping-causes problem of monumental proportions. 69. Interesting discussion. Everybody (except #47) seems to be happy with the statement that the "annual hurricane counts from 1945 through 2006 are 100% compatible with a random Poisson process" without any goodness of fit test. Are they? Doesn't anyone want to test this assertion? Not 100% compatible, that would be suspicious. And I think if I estimate lambda from observations, and then observe that the 0.01 and 0.99 quantiles are exceeded only once with n=60, I have made a kind of goodness of fit test. I don't claim that it is an optimal test, but at least I did it ;) This is not a case where we have some large population where there is some defined population mean and standard deviation, and by repeatedly sampling that population we can determine the mean and standard deviation of the sample and from that determine the whole data set, which is what you seem to be arguing when you think that each data point is representative of that mean and should have that same variance. Having trouble understanding what you are saying (sorry). In your link it is said that "In practice we often have the opportunity to take only one count of a sample." IMO this is not the case here. TAC seems to agree with me. When you make a measurement there are two sources of error. One due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. Makes no sense to me. 70. Re: #56 Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)).
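For anyone who wants a goodness-of-fit test rather than UC's quantile check, one simple option (my choice, not something proposed in the thread) is a dispersion test with a parametric bootstrap: under a Poisson model the variance/mean ratio should sit near 1. Sketch, run here on simulated Poisson data standing in for the storm counts:

```python
import math
import random
import statistics

random.seed(4)

def poisson(lam, rng=random):
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def dispersion(xs):
    return statistics.pvariance(xs) / statistics.fmean(xs)

data = [poisson(6.1) for _ in range(63)]   # stand-in for the 63 annual counts
obs = dispersion(data)
lam_hat = statistics.fmean(data)
boot = [dispersion([poisson(lam_hat) for _ in range(63)])
        for _ in range(2000)]
p_value = sum(1 for b in boot if b >= obs) / len(boot)  # one-sided
print(round(obs, 3), round(p_value, 3))
```

A small p_value would flag over-dispersion (variance exceeding the mean), which is how a non-Poisson process usually announces itself in count data.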
I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability. I have to continue going back to this statement and others like it to keep what I view as the critical result coming out of this discussion firmly in mind. To a layman with my statistical background, I find the discussion about the Poisson distribution (and beyond) interesting and informative, but I also am inclined to view it as cutting the analysis of the data a bit too fine at this point. I would guess that a chi square goodness of fit test or a kurtosis/skewness test for normality would not eliminate a Poisson and/or a normal distribution as applying here (without the sinusoidal correction). Intuitively, if one considers the TC event as occurring more or less randomly and based on the chance confluence of physical factors, the Poisson probability makes sense to me. I agree with the Bender view on applicability of statistics and errors (but not necessarily extended to valuations of young NFL QBs) and his demands for error display bars. I have heard the stochastic mingling with physical processes arguments before, but I keep going back to: stochastic processes arise from the study of fluctuations in physical systems. Standard statistical distributions can be helpful in understanding and working with real life events, but I am also aware of those fat tails that apply to real life (and maybe the 2005 TC NATL storm season). 71. 68, Tim Ball: great post! 72. Steve Sadlov's 2007 prediction: 6.1 +/- 2.449489743 — LOL! 73. Sorry, I meant 6.1 +/- 2.469817807 ;) 74. It would be more interesting to find a 95% confidence interval for any hypothetical trend that can be included/hidden in the noisy data. Do climate models make predictions that are outside this confidence interval? 75. Paul, I see John Brignell gave your post a mention at his site. A little over half way down the page. 76.
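The quoted trend p-values (.35/.71/.61) came from tests not spelled out in this excerpt. One standard nonparametric choice for count-like series is the Mann-Kendall test; here is a bare-bones version (normal approximation, no tie correction — my own sketch, applied to synthetic data rather than the storm counts):

```python
import math
import random

random.seed(5)

def mann_kendall_p(xs):
    """Two-sided Mann-Kendall trend p-value (normal approx., no tie correction)."""
    n = len(xs)
    s = sum((xs[j] > xs[i]) - (xs[j] < xs[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    return math.erfc(abs(z) / math.sqrt(2))

flat = [random.gauss(0, 1) for _ in range(63)]           # trendless noise
sloped = [0.1 * t + random.gauss(0, 1) for t in range(63)]  # strong trend
p_flat, p_sloped = mann_kendall_p(flat), mann_kendall_p(sloped)
print(round(p_flat, 3), p_sloped)  # the sloped series yields a tiny p-value
```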
Steve M, you say: So there might be some way of getting to an arc sine contribution without underlying cyclicity (which I am extremely reluctant to hypothesize in these matters.) I agree wholeheartedly. I hate to do it because it assumes facts not in evidence. I'll take a look at your citation. Basically, what happens is that lambda varies with time. It probably is possible to remove the effects of that without assuming an underlying cycle. Exactly how to do that … unknown. 77. #65 IL: Thank you for your thoughtful comments. Believe me: I understand your argument. I am familiar with the statistics of radioactive decay, and I know something about how physicists graph count data. The error bars corresponding to that problem — you describe it well — are designed to serve a specific purpose: to communicate what we know about the parameter lambda. The "root N" error bars (though not optimal; see below) are often used in this situation, and they are likely OK so long as the product of the arrival rate and the time interval is reasonably large. I have no argument on these points. So what's the issue? Well, we're not dealing with radioactive decay, or with any of the other examples you cite. We're dealing with statistical time series, and, IMHO, the relevant conventions for plotting such data come from the field of time series analysis, not radioactive decay. Specifically, when you plot a time series with error bars, the error bars are interpreted to indicate uncertainty in the plotted values. That's what people expect. At least that's what my cultural background leads me to believe. [This discussion has a peculiar post-modern feel. Perhaps a sociologist of science can step in and explain what's going on here?] Anyway, here are some responses to other comments: I therefore go back to my point that the appropriate confidence limit on a single year's storm counts is root N where N is the number of counts in that year.
This is approximately correct if the only sample of the population that you have is the N observations and you are concerned with estimating the uncertainty in the arrival rate. If you want a confidence interval for the observed number of arrivals, however, the answer is [N,N]. (Incidentally, the root N formula is actually not a very good estimator of the standard error. For one thing, if you happen to get zero arrivals, you would conclude that the arrival rate was zero with no uncertainty.) If I had thousands of years' worth of data, on your argument I would take the standard deviation of the total number of storms, which would number (say) 10,000, for which the standard deviation would be 100. Are you going to argue that the appropriate error on each individual year is +/-100? Or a fractional error of 100/10000 = 0.01 (times the mean of 6.1 = confidence limit of 0.061 on each individual data point)? That's a good point. Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. Then I would plot the data — the 63 observations, no error bars — and, as bender suggests, perhaps overlay the figure with horizontal lines indicating the estimate of lambda (imagine a black line), an estimated confidence interval for lambda (blue dashes), and maybe some estimated population quantiles (red dots). However, I would not attach error bars to the fixed observation. You know: the observation is fixed, right? However, the overlay would describe the uncertainty in lambda as well as the estimated population quantiles. It's like tossing coins: I can toss a coin thousands of times and get a very precise mean value with precisely determined standard deviation, but if I toss the coin again, that is not appropriate for working out the probability of what is going to happen next time, since the coin only 'knows' what the probability is of it coming up a particular result on the next throw! OK.
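TAC's suggested overlay can be computed concretely. A sketch with hypothetical numbers (384 storms in 63 years gives the thread's ~6.1/year; the 95% interval uses a normal approximation for simplicity):

```python
import math

counts_total, years = 384, 63        # hypothetical totals, ~6.1 storms/year
lam_hat = counts_total / years       # pooled estimate: the black line
se = math.sqrt(lam_hat / years)      # standard error of lam_hat
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)   # the blue dashes

def poisson_cdf(k, lam):
    term = total = math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def poisson_quantile(q, lam):
    """Smallest k with CDF(k) >= q."""
    k = 0
    while poisson_cdf(k, lam) < q:
        k += 1
    return k

q05 = poisson_quantile(0.05, lam_hat)   # the red dots
q95 = poisson_quantile(0.95, lam_hat)
print(round(lam_hat, 2), tuple(round(c, 2) for c in ci), q05, q95)
```

The population quantiles (here 2 and 10 storms) describe the year-to-year spread the process itself produces; the much tighter confidence interval describes uncertainty about lambda. Keeping those two things visually distinct is the point of TAC's overlay.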
So how would you plot error bars for the time series of coin tosses? Note: coin tosses can be modelled as a Bernoulli rv, whose variance is given by N*p*(1-p); since N=1, p̂ is always equal to either zero or one, and your error bars have length zero… 78. If it's random then that means we have no clue at all what causes it. Neatly done Paul. 79. #77 TAC. Thanks for your comments; particularly the first paragraph seems to indicate that maybe we are not as far apart as I thought. Maybe this is a difference between different areas of science, and we are arguing about presentation rather than substance, but, to me, Paul's figure 1 is correct and meaningful. What you say – Then I would plot the data – the 63 observations, no error bars – and, as bender suggests, perhaps overlay the figure with horizontal lines indicating the estimate of lambda (imagine a black line), an estimated confidence interval for lambda (blue dashes), and maybe some estimated population quantiles (red dots). However, I would not attach error bars to the fixed observation. You know: the observation is fixed, right? However, the overlay would describe the uncertainty in lambda as well as the estimated population quantiles. – doesn't make sense to me, because if you have assumed a null hypothesis of no time variation and have summed all the 63 years' worth of data then we only have one data point, the mean of the whole ensemble, with smaller uncertainty on that ensemble mean but spanning the 63 years. What you suggest is having your cake and eating it by taking the data from the mean and applying that to individual data points. I can see what you are getting at, but in all the fields I have worked in (physics related) what you suggest would be thrown out as misleading.
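TAC's coin-toss reductio (#77, above) is worth making concrete: with a single Bernoulli observation the plug-in variance N*p̂*(1-p̂) is identically zero, so per-toss "error bars" built that way always have zero length:

```python
# With N = 1 Bernoulli observation, p_hat is 0 or 1, and the plug-in
# variance N * p_hat * (1 - p_hat) vanishes either way.
for toss in (0, 1):
    p_hat = toss / 1
    var_hat = 1 * p_hat * (1 - p_hat)
    print(toss, var_hat)  # prints "0 0.0" then "1 0.0"
```

The Poisson sqrt(N) bar at N = 0 fails in exactly the same way, which is UC's question 1) later in the thread.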
I would never see a data point with no error bar, because I always see predictors, and even if it's a perfect observation of a discrete number of storms, to present that as a perfect number to me lies about the underlying physics. Perhaps ultimately, as long as there is a good description of what is going on and we calculate confidence limits, significance of anomalous readings and trends correctly, then maybe it doesn't matter too much. I still think, though, that the basic problem here between us is when you say: So what's the issue? Well, we're not dealing with radioactive decay, or with any of the other examples you cite. We're dealing with statistical time series, and, IMHO, the relevant conventions for plotting such data come from the field of time series analysis, not radioactive decay. Specifically, when you plot a time series with error bars, the error bars are interpreted to indicate uncertainty in the plotted values. That's what people expect. At least that's what my cultural background leads me to believe. No, physically this is exactly like radioactive decay, or finding the pennies that I described, or admissions to a hospital, or any of those similar situations where counting statistics applies – the underlying physics is the same, where we have a very small probability of a storm arising in a particular time or place but over a year there are a few. It is not then a statistical time series where I am sampling from a larger population with some underlying mean and variance. I guess, as I say, as long as we correctly calculate the significance of time variations etc and it's well explained what is done or displayed, then this discussion has probably gone about as far as it can. My final 2p on all of this.
To me Paul's figure 1 conveys correctly the uncertainties inherent in the physics; what you and Bender suggest is, to me with my background, misleading; and what Judith Curry presented (way back in the other thread that started all of this, with the 11 year moving average) is wrong and dangerously misleading. 80. Paul: 1) What if the count for some year is zero (TAC's point in 77)? 2) How would you draw those bars if you assume a Gaussian distribution instead of Poisson? When you make a measurement there are two sources of error. One due to your instrument and the second due to natural fluctuations in the variable that you are measuring. The total error is the quadrature sum of these two. I think I understand now (pl. correct if I'm wrong!). You observe y(t), y(t)=x(t)+n(t), where n(t) is error due to the instrument. x(t) is a stochastic process. x(t) varies over time, and you are not very interested in x(t) per se; you want to get a more general estimate: what is x(t+T), x(t-T), etc. If the process is stationary, it has an expected value. Your second error is E(x)-x(t), am I right? If so, 1) I think that 'error' is a misleading term 2) 'natural fluctuations' without explanation opens the gate for 9-year averages and Ritson's coefficients. If you define it as a stochastic process, you'll have many tools that are not ad hoc (the Kalman filter, for example) to deal with the problem. Often ad hoc methods are as effective as carefully defined statistical procedures, but the difference is that the latter gives fewer degrees of freedom to the researcher. If you have 2^16 options to manipulate your data, you'll get any result you want from any data set. Popper wouldn't like that. 81. #79 IL: I agree that our difference has to do almost entirely with form, not substance, and even there I agree we're not far apart. When you say: No, physically [statistically?]
this is exactly like radioactive decay or finding the pennies that I described or admissions to a hospital or any of those similar situations where counting statistics applies – the underlying physics [statistics?] is the same where we have a very small probability of a storm arising in a particular time or place but over a year there are a few. My only quibble would be that the physics are different; it's the stats that are the same; and the "cultural context" — the graphical conventions employed by the target audience — differs. So, the remaining issue: how to communicate the message, which we agree on, as unambiguously as possible to the community we want to reach. As I understand it, you are comfortable with — prefer — error bars attached to original data; I worry that such error bars introduce ambiguity to the figure (I also question their statistical interpretation, but that's a secondary issue). I prefer an overlay or separate graphics. Of course, we do not have to resolve this. But, having now debated this thorny issue for half a week, perhaps this could be a real contribution to the literature. Consistency and rigor in graphics is important — perhaps as important as consistency and rigor in statistics, though less appreciated. Perhaps we could come up with a whole new graphical method for plotting Poisson time series — get Willis involved to ensure the aesthetics, and other CA regulars who wanted to get involved could contribute — and share it with the world ;-). I say we name it after SteveM! Time to get some coffee… 82. #79 IL: One final point: doesn't make sense to me because if you have assumed a null hypothesis of no time variation and have summed all the 63 years' worth of data then we only have one data point, the mean of the whole ensemble, with smaller uncertainty on that ensemble mean but spanning the 63 years. What you suggest is having your cake and eating it by taking the data from the mean and applying that to individual data points.
I don't know if I should admit this, but, in the sense you describe, statisticians do "have their cake and eat it too" — it is standard practice in time series analysis. For example, one often begins a data analysis by testing the distribution of errors assuming the sample is iid — before settling on a time series model. Then one looks at possible time-series models, rechecking the distribution of model errors based on the hypothesized model, etc., etc. It's called model building. Perhaps it is indiscreet of me to mention this… It does raise a question: how would you develop error bars for a non-trivial ARMA — let's start with an AR(1) — time series with Poisson errors? 83. Don't know about a coffee, TAC, perhaps we could have a beer or two…. I'm not really trying to get in the last word, but I think the physics IS the same. OK, in a literal sense radioactivity is due to quantum fluctuations and tunnelling and hurricanes are a macroscopic physical process, but what is fundamental to the problem, and why I think that what you and Bender suggest is inappropriate, is that there is a very small probability of hurricanes arising in any given area at any given time; it's only when we integrate over a large area – ocean basin – and long time (year) that we find up to several hurricanes. Each – on the treatment above – is a random, independent event caused by a low probability process, which is why the statistics and the underlying physics of that statistics are the same as these other areas of physics. Getting climate scientists like Judith Curry to discuss that inherent uncertainty in years' counts would be really interesting. Having said that, and this is where it could get really interesting, as Margo pointed out some time back, there is a possibility that hurricane formation is not independent: that the more hurricanes there are in a year, the more predisposed the system is to form more, through understandable physical mechanisms.
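On TAC's closing question above (an AR(1)-like series with Poisson errors), and on the non-independence idea: one textbook construction, offered here purely as a sketch rather than anything proposed in the thread, is the INAR(1) process, which keeps Poisson marginals while making counts autocorrelated via binomial thinning:

```python
import math
import random

random.seed(7)

def poisson(lam, rng=random):
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def inar1(lam, alpha, n, rng=random):
    """X_t = alpha∘X_{t-1} + Poisson(lam*(1-alpha)): Poisson(lam) marginals
    with lag-1 autocorrelation alpha (∘ denotes binomial thinning)."""
    x, out = poisson(lam, rng), []
    for _ in range(n):
        survivors = sum(1 for _ in range(x) if rng.random() < alpha)
        x = survivors + poisson(lam * (1 - alpha), rng)
        out.append(x)
    return out

series = inar1(lam=6.1, alpha=0.4, n=20_000)
print(round(sum(series) / len(series), 2))  # marginal mean stays near 6.1
```

If annual counts behaved like this, each year would still be marginally Poisson, but trend tests that assume independence would be too eager – one way the thread's two worries connect.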
That would take us to a new level of interest, but since I see conclusions on increasing hurricane intensity based on 11 year moving averages with no apparent discussion of the inherent uncertainties in a probability system like this, I think there is a long way to go before we can tackle such questions. 84. #80, UC (1) sqrt(0) = 0, no error bars, just a data point (2) once N is large enough, about 10 to 20, the difference between Poisson and Gaussian becomes small. The Gaussian has mean N and variance N. The error bars would still be +-sqrt(N) at one sigma. Ritson used to be an experimental particle physicist, my training and career for a while too, so I'd expect that he would understand the way I plotted the data and error bars. 85. The problem with climate time-series data like these hurricane data is that you have one instance, one realization, one sample, drawn from a large ensemble of possible realizations of a stochastic process. You want to make inferences about the ensemble (i.e. all those series that could be produced by the terawatt heat engine), but based on a single stochastic realization. Any climate scientist who does not understand this – and its statistical implications – should have their degree(s) revoked. In contrast, physical time-series data that are generated by a highly deterministic process do not face the same statistical challenge. Often the physical process is so deterministic that you never stopped to think about the existence of an ensemble. Why would you? 86. Hey people, off topic I know, but seeing what good work you amateurs are doing, I can't resist citing this little gem, which seems taken directly from RealClimate: It must be almost unique in scientific history for a group of students admittedly without special competence in a given field thus to reject the all but unanimous verdict of those who do have such competence. This was from G. G. Simpson, talking about proponents of continental drift in 1943….
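The #84 rule of thumb (Poisson ≈ Gaussian once N is 10 to 20) versus the later objection that you need more like 1000 can be quantified directly by comparing the Poisson pmf with the matching Normal(lambda, lambda) density (the three lambda values are my illustrative picks):

```python
import math

def poisson_pmf(k, lam):
    # exp/log form avoids overflow in lam**k and factorial(k)
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def normal_pdf(x, lam):
    return math.exp(-(x - lam) ** 2 / (2 * lam)) / math.sqrt(2 * math.pi * lam)

errs = {}
for lam in (6.1, 20, 1000):
    errs[lam] = max(abs(poisson_pmf(k, lam) - normal_pdf(k, lam))
                    for k in range(int(4 * lam) + 1))
    print(lam, f"{errs[lam]:.2e}")  # worst pointwise gap shrinks with lambda
```

Whether 20 is "close enough" depends on how far into the tails you care: the skewness of the Poisson dies off only like 1/sqrt(lambda).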
(quoted in Drifting Continents and Shifting Theories, by H.E. LeGrand, p. 102) 87. I got lost. Did you guys agree whether "error bars" should be put on the count data? 88. #87 No, didn't agree. But I'll try to find the book suggested in #11 and learn (found William Price, Nuclear Radiation Detection 1964, will that do?) 1) and 2) in #84 make no sense to me (*), but I'm here to learn. I agree with #85. (*) except 'difference between Poisson and Gaussian becomes small', but replace 10 to 20 with 1000 89. #87 No, I guess not. But there is no way I am wrong – or my name isn't Michael Mann 90. A count is not a sample; it is an observation. Observations are subject to measurement error, not sampling error. Sample means are calculated from sample observations (n > 1) and are subject to sampling error. We do this because we want to compare the known sample mean to the unknown population mean. In stochastic time-series the population being studied/sampled is special, in that it is virtual and it is infinite: it is an ensemble. In stochastic time-series you are trying to draw inferences about a system's ensemble behavior, but you have to do that with a single (long!) realization, and you have to invoke the principle of ergodicity: the sample statistics converge to the ensemble statistics. If your series is short, or if the ensemble is changing behavior as you study it, then you will not get the convergence required to satisfy the ergodicity assumption. Then you are in trouble. So … why on earth would you apply sampling error to a set of observations when the thing that produces them is a highly stochastic process that only ever gives you one (possibly nonstationary) sample? I agree with me. 91. I remember just enough about statistics to be dangerous (maybe I could be a consultant for the Team? :)), but I think Bender is right. A hurricane count is simply an observation, not a collection of observations, like a sample.
Thus, how can you justify applying a statistical parameter to it? 92. OK, it's a fair cop, my name is not Michael Mann, so given what Bender has just posted in #90, maybe this one is just going to run and run; I had better not try and move on. A count is not a sample; it is an observation. Observations are subject to measurement error, not sampling error. It's not a sampling error and it is not a measurement error!! It's part of the fundamental physics of the process. Observations produced by random processes with small probability are subject to considerable uncertainty! Yes, the observation that there were 15 Atlantic named storms last year (or whatever the number actually was) is an exact number; there were 15, no more, no less, if none were missed by all the satellites, ships and planes. But so what??! There is nothing magic about that number 15, even though it's an exact observation. The conditions in the ocean basin were not so constrained that it had to be 15 with a probability of 1! If conditions remained exactly the same it could easily have been 14, or 13 or 17 – and we can calculate the probability that that number of 15 has come up purely by chance, and we can also calculate the probability that any given number of named storms – ranging from 0 to as large as you like, and including 15 – could have occurred last year given that 15 were observed. That is what Paul plotted, and I think this is where the disconnect in our talking to each other is occurring. The fact that there were 15 does not mean that there was a probability of unity of the number 15 occurring. So we calculate the probability that 15 could have occurred even though 15 were observed!! I'm sure that to some that will still sound a bit gobbledegook, but think about throwing a die. I throw it and get a 2. It's an exact number and an exact observation, but the chance of me getting that 2 is not unity, it's 1/6, so I can calculate the probability that that 2 came up by chance.
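IL's 15-storm example can be put in numbers. With the thread's long-run rate of 6.1 per year, the Poisson probabilities of an exact 15 and of 15-or-more are (a stdlib-only sketch):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

lam = 6.1
p_exact = poisson_pmf(15, lam)                         # P(N = 15)
p_tail = 1 - sum(poisson_pmf(k, lam) for k in range(15))  # P(N >= 15)
print(f"P(N = 15)  = {p_exact:.5f}")   # roughly one in a thousand
print(f"P(N >= 15) = {p_tail:.5f}")    # the tail probability is what matters
```

So a 15-storm year would be genuinely surprising under a constant-rate Poisson model – which is precisely the kind of statement the error-bar debate is about being able to make.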
(I know this is not a good analogy for the storms, since with a die we have 6 numbers each with equal probability, but that is the principle). That treatment is fundamental to understanding the nature of the process and the inherent uncertainties when you have a process generated by such a fundamentally random process that gives you a few observations per year. OK, storm over and calming again – to try and answer jae's question. I hope I don't put words in TAC's mouth, or anyone else's for that matter, but I think we fairly well agree on fundamentals about uncertainties when we want to compare observations over time to test for trends etc; the difference (pace what I said to Bender above) seems mainly to be communication and how you present data. I think that we are agreed that moving 11 year averages with no consideration of these sorts of uncertainties is definitely not correct. Margo pointed out some time back, there is a possibility that hurricane formation is not independent, that the more hurricanes there are in a year, the more predisposed the system is to form more through understandable physical mechanisms. Someone did say this, but I'm afraid it wasn't me! I thought it was an interesting idea. (Since I've used that word with irony here before, I think I should say I mean interesting in a good way.) I'm afraid I don't know if one hurricane forming affects the probability of another one forming later on. So … why on earth would you apply sampling error to a set of observations when the thing that produces them is a highly stochastic process that only ever gives you one (possibly nonstationary) sample? I don't think you would. But, you might illustrate the estimated measurement uncertainties in some cases. So, if the "official" count recorded for a given year can hypothetically differ from the "real" number, then there might be cases where you want to show this.
As it happens, when I see graphics, I'm content if they capture the major factors contributing to uncertainty. In the case of hurricane counts, if the annual numbers are presented unfiltered, I don't usually feel the need for anyone to add the "measurement uncertainty" to the hurricane count for each individual year. But if someone averages or smooths the count, then you bet I want to see an uncertainty interval. (Better yet, come up with "error" bars that account for both the statistical uncertainty in the mean and the measurement uncertainty. There are techniques for this.) Basically, you want "honest graphics" that convey a reasonably decent estimate of the uncertainty. 94. #90, you say (n > 1); why would (n ≥ 1) not work? Never mind, I withdraw my agreements and disagreements, short time-out for me. 95. #93 Sorry Margo, my bad again. Sadly, senior moments have a strong correlation with time and have long since moved out of where I can describe them by Poisson statistics, but instead by Gaussian with high (and rising) mean. I've just searched and it was Sara Chan, comment 39 on the Judith Curry on Landsea 1993 thread – the thread that spawned all of this. Where is Judith Curry anyway? I would really like to know what she makes of these discussions. 96. #93. I don't see why hurricane formation would necessarily be independent. If you drive a motorboat through water, you get a train of vortices. I know the analogy isn't very close, but why couldn't one vortex prompt subsequent vortices? My guess as to a low 2006 season was based on this analogy. 97. Re #96 Spatial patterns of vortices lead to temporal patterns of anti-persistence and, therefore, statistical non-independence (at least at some space-time scales). Logical. 98. Re #94 If n = 1, what do you get for a standard deviation? 99.
Regarding the error bars (Fig 1): if a frequency is determined then the error must be a result of categorising the event as a hurricane or not. If it's based on wind speed then does that mean the error bar is the propagation of errors for the given hurricanes in question? How were the errors combined for data collected by (presumably) various measuring schemes over the decades? 100. Paul, IL, bender, UC, jae and all: #87 asks: "Did you guys agree whether "error bars" should be put on the count data?" Well, I think the answer is we have some work to do. I was semi-serious when I suggested in #81 that: Perhaps we could come up with a whole new graphical method for plotting Poisson time series. Why? Well, I think IL is correct that (#92) "we fairly well agree on fundamentals about uncertainties" and that our differences relate primarily to "how you present data." However, I also agree entirely with bender that the error bars are wrong because, among other things, they violate a convention of time-series graphics (#2, #9, #14, #36) and are likely to be misinterpreted. However, apparently IL and Paul are used to viewing count data this way and, for them, the error bars do not present a problem. Nonetheless, I imagine they can accept the idea that some of us find the error bars confusing if not offensive. This leads me to think we need a new graphical method that we can all agree to, something unambiguous, compact, beautiful and expressive. We have plenty of creative talent right here at CA to do this ourselves, and I am not aware of any prohibition on contributing constructively to the science. We are not just auditors ;-) That's my $0.02, anyway. 101. SteveM Re 96: I also don't see why a hurricane occurring right now might not affect the probability of a hurricane forming a short time later. I just don't happen to know. I could speculate but the physical arguments in my own speculation would sound like mumbo-jumbo — even to me.
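For what it's worth, one concrete form TAC's "new graphical method" could take — offered only as a sketch of the idea, not anyone's endorsed design — is to drop the per-year sqrt(n) bars entirely and shade a single band showing the central 95% range of Poisson(lambda-hat), with lambda-hat pooled over the whole record. The band then belongs to the model, not to the data points, which answers the time-series-convention objection:

```python
import math

def poisson_cdf(k, lam):
    # P(K <= k) for K ~ Poisson(lam), by direct summation
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_quantile(q, lam):
    # smallest k with P(K <= k) >= q
    k = 0
    while poisson_cdf(k, lam) < q:
        k += 1
    return k

lam_hat = 6.0   # pooled estimate, e.g. total storms / 63 years
lo = poisson_quantile(0.025, lam_hat)
hi = poisson_quantile(0.975, lam_hat)
print(lo, hi)   # the shaded band: counts in [lo, hi] are unsurprising
```

For lambda-hat = 6 the band runs from 2 to 11, so a year outside it — or a run of years hugging one edge — is what would actually hint at a changing rate.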
As long as you mentioned the Von Karman vortex street, voila: (The solid object is an island!) 102. Shoot! I hope this shows. 103. Re #100 This is child's play for a heavy like Wegman. That's why I don't bother. There are people who are already paid to solve these problems. Why are they not solving them? Why does it take volunteers? 104. #103 I agree with bender on this point, everything we have been debating for hundreds of comments over several threads must be well known to professional statisticians, thrashed out in papers and books. I don't know that, but since Poisson statistics has been around for nearly 200 years, all of these things must have been well chewed over. #100 TAC – I can perfectly well accept that there are different ways of viewing the world; it sounds like Paul and I are coming from a physicist's viewpoint and need to understand the world from underlying physical principles. As long as we all accept the fundamental uncertainties given by the physics and calculate probabilities and confidence limits correctly when we calculate if there is a significant change with time etc then, ok, I can live with people wanting to present the data in a different way. What you want would get thrown out of a physical science journal though because it's 'unphysical'. #96 Steve, yes, hurricane formation may indeed not be truly independent; I and others mentioned this a little. Unless the correlation becomes very high though, so that hurricanes are more or less forming as soon as one is leaving a formation area in the ocean, it will be extremely hard to tell. Behind all this debate are the small numbers of hurricanes per year and a small number of years' worth of data that make the uncertainties so large; it's very difficult to study anything at all. Nobody has really responded to my hissy fit in #92, does this make sense to you? Or did you understand this all along? 105.
Ok, too interesting, but other work to do, one more post;) http://en.wikipedia.org/wiki/Ergodic_hypothesis says The ergodic hypothesis is often assumed in statistical analysis. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. Right or not, the analyst assumes that it is as good to observe a process for a long time as sampling many independent realisations of the same process. The assumption seems inevitable when only one stochastic process can be observed, such as variations of a price on the market. That the hypothesis is often erroneous can be easily demonstrated [1]. I'm not very familiar with this, but I know that if we have a stationary process (strict sense), we can estimate the finite dimensional distributions from one (long) realization. So, in this context, I think it makes no difference if we speak about ergodicity or stationarity (if math gurus disagree, cases of singular distributions etc, pl. tell it now). Stationarity is an easier concept for me (for some unknown reason). #90,91 why can the sample size not be 1? With a sample size of one you cannot estimate the standard deviation of a Gaussian distribution, but the Poisson distribution is a different case. Paul, IL: Price, Nuclear Radiation Detection has a chapter 'Statistics of detection systems'. As an example, there are data from 30 separate measurements, each taken for a 1-min interval, Geiger-Muller counter. (I can post the data later, if needed). The average is 28.2 counts. In the usual case the true mean is not known. Rather, a single determination of n counts is made. This value is reported as n +- sqrt(n). The meaning of this precision index is that there are only about 33 chances out of 100 that the true average number of counts for this time interval differs from n by more than sqrt(n). It is assumed that n_i ≈ mean(n) ≈ lambda = sigma^2.
Using the example data it is found that 27% of the n_i +- sqrt(n_i) limits do not contain the mean(n). But the story continues: If one is dealing with a series of counts, each of which is for the same time interval, mean(n) is the best value for the time interval employed. And TAC said in #77 Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. I see no conflict here; with a sample size of one you have to use n_i ≈ mean(n) ≈ lambda = sigma^2, but with a larger sample size you average them all. And no conflict with my #80 either; we are not very interested in n_i per se, we want a more general estimate (capability to predict, or to reconstruct the past, for example). To me, the Figure 1 looks like a result of the model observation = true lambda + error, where lambda + error ~ Poisson(lambda), so E(error) is zero and Var(error) is lambda. Each year there is a new lambda, and past lambdas don't help in estimating it. Each year a new estimate of the error variance is obtained from the observation itself. And that's why I think it is a confusing figure. 106. UC and TAC, thanks as always for your thought provoking posts. I got to thinking about your statement that: If one is dealing with a series of counts, each of which is for the same time interval, mean(n) is the best value for the time interval employed. and TAC's statement that: Under the null hypothesis we have one population (of 63 iid Poisson variates). To estimate lambda, just add up all the events and divide by 63. It seemed to me that we could estimate the mean in a different way, which is that the mean is the value that minimizes the RMS error of the points (using the usual Poisson assumption that the variance in the dataset is equal to the mean). Using this logic, I took a look at the RMS error. Here is the result: The minimum RMS value is at 8.9.
I interpret the difference between the arithmetic mean and the mean that minimizes the RMS error as further support for my conclusion that lambda is not fixed, but varies over time … and it may say that we can reject TAC's null hypothesis. 107. A question: suppose that Atlantic storm count is affected by a random process (El Nino / La Nina) as well as a trend (SST). Would that be detectable by this analysis? (Pardon my likely poor posing of the question but I hope the gist of it is apparent.) There's evidence that year-to-year count is strongly affected by El Nino, which appears random. There's also the thought that SST affects count, which is believed to be a strong effect by some (Webster etc) while others (like me) think there's probably a weak effect. 108. #100, #103, #104: I admit developing a new graphic was fanciful. As bender and IL note, it is someone else's job. #105 When bender (#90) talks about n > 1, I think he's using "n" to denote the number of Poisson observations (each observation would be an integer ≥ 0). When n=1, we have the non-time-series case that IL and Paul (I think) are used to working with. However, we have also used the letter "n" to denote the number of arrivals, and there is an interesting issue here, too. I'll now use K, instead, to denote a Poisson rv, and k to denote an observed value of K. The most obvious problem with the sqrt(K) formula occurs when k is zero (#77). In that case, applying the "sqrt(K) reasoning" yields an arrival rate of zero with no uncertainty. Whatever our disagreements, I think we can agree that this is nonsense. Also, it should be troubling given that, for Poisson variates, K=0 is always a possibility.
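TAC's k = 0 objection is exactly why interval estimates for a Poisson rate are usually built by inverting the distribution rather than by k ± sqrt(k). A stdlib-only sketch of the exact (Garwood-style) interval, solved by bisection rather than chi-square quantiles (the helper names are ours): for k = 0 the lower limit is 0, but the upper limit is -ln(alpha/2) ≈ 3.69, so zero observed arrivals does not mean zero uncertainty.

```python
import math

def poisson_cdf(k, lam):
    # P(K <= k) for K ~ Poisson(lam), by direct summation
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def _solve_lam(k, target, hi):
    # find lam in (0, hi) with poisson_cdf(k, lam) == target;
    # the CDF is strictly decreasing in lam, so bisection works
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if poisson_cdf(k, mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exact_poisson_ci(k, alpha=0.05):
    """Exact two-sided CI for lambda from a single observed count k."""
    hi = 10.0 * (k + 5)
    upper = _solve_lam(k, alpha / 2, hi)        # P(K <= k) = alpha/2
    lower = 0.0 if k == 0 else _solve_lam(k - 1, 1 - alpha / 2, hi)  # P(K >= k) = alpha/2
    return lower, upper

print(exact_poisson_ci(0))   # (0.0, ~3.69): zero counts, nonzero uncertainty
print(exact_poisson_ci(6))   # (~2.20, ~13.06): visibly asymmetric
```

Compare exact_poisson_ci(6) with 6 +- sqrt(6) ≈ (3.55, 8.45): the exact interval is both wider and asymmetric, which is the small-n point Paul's colleagues raised.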
IL: I'm still having a hard time understanding what is meant in #79 by …physically this is exactly like radioactive decay or finding the pennies that I described or admissions to a hospital or any of those similar situations where counting statistics applies – the underlying physics is the same… For me, I cannot even see how the physics of a die and the physics of a random number generator are the same. However, if you want to argue that a rolling cube and a CCD detector in a dark room have the same physics (and then there's the hospital), I'm all ears. What is really going on here? I think IL may have defined statistics as a subset of physics — I guess that's his prerogative — in which case the result is trivial. However, statisticians might not see it that way; they tend to draw the lines somewhat differently. They talk about events (for the die, the event space looks something like {.,:,.:,::,:.:,:::}) which are governed by physics, and the corresponding random variables (which take on values like {1,2,3,4,5,6}) which have statistical properties that can be considered without reference to physics. As for the rest, I think most of the arguments have been made. I still agree with bender; I don't like the error bars. I see small problems with the error bars (e.g. as defined, when k=0 they don't work); I see medium-sized problems with the error bars (where the estimated error and the estimated statistic are correlated, unsophisticated (e.g. eyeball) statistical tests and confidence intervals will tend to be biased toward rejecting on the left (btw: this bias is connected with Willis's RMSE estimator in an interesting way)); and then there's the BIG problem: potential misinterpretation. They also add clutter to the graphic and require explanation. Overall, not a good thing. However, I don't hold out much hope that repeating these arguments, or bender's arguments (which I also happen to agree with), will change any minds. 109.
Re: #107 A question: suppose that Atlantic storm count is affected by a random process (El Nino / La Nina) as well as a trend (SST). Would that be detectable by this analysis? David S, I can give you my layman's view (and repeat myself as I am wont to do) and that is that TAC's comment in #56 and quoted below would indicate that a statistically significant trend is not found. I also believe that the point has been made that the use of lower frequency filtering needs to be justified before applying and that the filtering application, if justified, must make the necessary statistical adjustments (to Neff). Note that in this case all three tests, as well as an eyeball assessment, find no evidence of trend (the p-values for the 3 tests are .35, .71, and .61, respectively (#38)). I think we can safely conclude that whatever trend there is in the underlying process is small compared to the natural variability. The remainder of the discussion (which comprises most of it) comes by way of a disagreement on the display of error bars and the thinking behind it. From a layperson's view, I agree with TAC and Bender on the matter of the thinking behind the error bars and have appreciated their attempts to explain the interplay of stochastic and deterministic processes and the appropriate application of statistics. My agreement may be because this is the approach with which I am familiar. Perhaps it is my layperson's view, but I am having trouble understanding the other approaches presented here and their underlying explanations. I am not even sure how much of the differing views here results from looking at deterministic and stochastic processes differently. This discussion has been very friendly compared to some I have experienced on this subject. I do think that there is a correct comprehensive view of how statistics are applied to these processes and not separate deterministic and stochastic ones. 110.
To sew these threads up we should get back to the project we were working on prior to the publication of Mann & Emanuel (2006), which would require translating John Creighton's MATLAB code (for orthogonal filtering) into R and applying it to these data. Because when you account for the low-frequency AMO (similar to what Willis has basically done), and Neff, and the Poisson distribution of counts (as Paul Linsay has done) I am sure that what you will find is no trend whatsoever. The graphical display would be cleaner and more correct than Linsay's here, but would still prove his basic point: this is a highly stochastic process which is statistically unrelated to the CO2 trend (but might be related to a decadal oscillatory mode that is a primary pathway for A/GW heat). 111. Well … nothing is as simple as it seems. I had figured that if the standard deviation as used by Paul in Figure 1 was an estimator of the underlying lambda, I could use that to figure out where lambda was as I did in post #106 above. This showed that lambda estimated by that method was smaller than the arithmetic mean. However, the world is rarely that simple. Having done that, I decided to do the same using R with random Poisson data, and I got the same result, the lambda calculated by the same method is smaller than the actual lambda … so I was wrong, wrong, wrong in my conclusions in #106. However, this also means that the use of sqrt(observations) as error bars on the observations leads to incorrect answers … go figure. 112. #110, bender. For fun I showed Figure 1 to some of my former physics colleagues. Nobody even blinked. The only point anyone made was that the error bars for very small n should be asymmetric because of the asymmetric confidence intervals for the Poisson distribution. Which I knew but didn't want to bother with for an exercise as simple as this.
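Willis's "wrong, wrong, wrong" result in #111 is worth pinning down, because it is the trap hidden in per-point sqrt(n) error bars: fit a constant to Poisson data while trusting each point's own sqrt(n_i) bar — i.e., weighted least squares with weights 1/n_i — and the minimizer is the harmonic mean, which sits systematically below the true lambda. A sketch in Python rather than Willis's R (the 1/n_i weighting is one plausible reading of what his RMS minimization amounts to; zero counts are dropped to keep the weights finite):

```python
import math
import random

rng = random.Random(7)

def poisson(lam, rng):
    # Knuth's multiplication method for drawing a Poisson(lam) variate
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

true_lam = 6.0
counts = [poisson(true_lam, rng) for _ in range(5000)]
nonzero = [n for n in counts if n > 0]   # 1/n weights blow up at n = 0

arith = sum(nonzero) / len(nonzero)
# weighted least squares with weights 1/n_i reduces to the harmonic mean
wls = len(nonzero) / sum(1.0 / n for n in nonzero)
print(arith, wls)   # wls lands noticeably below both arith and the true 6
```

Weighting instead by a single pooled sqrt(lambda-hat) removes the bias — one more argument for letting the rate, not each individual observation, carry the uncertainty.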
In any case, the error bars as plotted (with the asymmetrical correction at small n if you want to be fussy) are the values needed to fit the data to any kind of function. They have to be carried through into any smoothing function like the running average used by Curry or Holland and Webster. For fun I've also looked at the data back to 1851 without bothering about possible undercounting. The mean drops to 5.25 hurricanes/year from 6.1 but the data still looks trendless. The distribution and overlaid Poisson curve match as well as in Figure 2. With 156 years of data it provides an interesting test of the Poisson hypothesis. The probability of seeing a year with no hurricanes is exp(-5.25) ≈ 0.0052, quite small. But in 156 years I'd expect 156*exp(-5.25) ≈ 0.82 years with no hurricanes. In fact, there are two years, 1907 and 1914, that have no hurricanes. 113. Surely a quick and easy way of showing a trend would be to plot the Poisson distrib. for several time periods, say every 40 years? Then you could see if the mean shifts. 114. Re #112 Send your physics colleagues here and maybe they'll learn something about robust statistical inference if they read my posts. Those error bars are meaningless in the context of the only problem that matters: hurricane forecasting. People who don't blink scare me. 115. be afraid, verrrry afraid. 116. Re #109 Ken, thanks. That's about what I gathered. One day I'd like to learn about the statistical characteristics of processes which are driven by both random and trended factors. 117. 115 Yes, well, I suppose you have nothing to lose being wrong, so why should you be afraid. Go ahead and mock me. Just be sure to send your physics friends here. 118. TAC #108 IL: I'm still having a hard time understanding what is meant in #79 by For me, I cannot even see how the physics of a die and the physics of a random number generator are the same.
However, if you want to argue that a rolling cube and a CCD detector in a dark room have the same physics (and then there's the hospital), I'm all ears. What is really going on here? I think IL may have defined statistics as a subset of physics — I guess that's his prerogative — in which case the result is trivial. I guess I'm not quite in the category of Lord Rutherford who said 'All science is physics or stamp collecting' but maybe my view of physics is more catholic than most (here anyway). What I meant was that although radioactive decay, the hurricanes and the hospital admissions have different physical processes (quantum fluctuations / heat engine of the ocean / infection by pathogens), nevertheless, when you strip each to its bare essentials, they are working in the same way. The probability of any given radioactive atom decaying in a certain time is completely minute, but there are a vast number of atoms in the lump of radioactive material so that the tiny probability times the number of atoms gives a few decays per second (say). The probability of a hurricane arising in a particular area of ocean at a particular time is tiny but when you add up all those potential hurricane forming areas over the whole of an ocean basin and over a long enough time you get a few hurricanes per year. The probability of any one of us as an individual getting a pathological disease is really tiny, but there are a lot of people so a hospital sees a small but steady stream of people each day. (I say steady; what I mean is a few each day, but the numbers who are admitted each day fluctuate according to Poisson statistics). (Of course, if the events are no longer random with small probability, such as if an infectious disease starts going through a neighbourhood, then the Poisson distribution breaks down. The same would happen if hurricanes formed at a higher rate so that the formation of one affected the probability of another forming).
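IL's "tiny probability times a vast number of chances" argument is exactly the law of rare events: Binomial(n, p) converges to Poisson(np) as n grows with np held fixed, regardless of whether the trials are atoms, patches of ocean, or patients. A quick numerical check (n = 100,000 and np = 6 are arbitrary illustration values, not estimates of anything physical):

```python
import math

n = 100_000            # many independent trials ...
p = 6.0 / n            # ... each with tiny probability, so np = 6
lam = n * p

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

max_gap = max(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(21))
print(max_gap)   # tiny: the two models are numerically indistinguishable
```

The largest pointwise gap between the two distributions is a few parts in 10^5, which is why three very different physical systems all land on the same counting statistics.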
Whenever you have a small probability of something happening to an individual (person/area/thing etc) but an awful lot of persons/areas/things then you get Poisson statistics. I call that stripping a problem to the essentials 'physics' (maybe I do follow Rutherford after all), but that's not important; what is important is understanding the inherent and large uncertainties in the system. Re #112 Send your physics colleagues here and maybe they'll learn something about robust statistical inference if they read my posts. Those error bars are meaningless in the context of the only problem that matters: hurricane forecasting. People who don't blink scare me. Strictly speaking, of course, you are right that the error bars on the graph are not necessary for 'robust hurricane forecasting'; you can correctly study the statistics of the sequence of numbers without plotting the error bars on figure one BUT THEY ARE THERE! and if you want to understand the underlying physics (there I go again) of a problem, i.e. its most fundamental essentials, then for me, Paul, and clearly Paul's colleagues of a 'physics' persuasion, these are the sorts of things you need to think about. From many of your posts and from others' posts here, I still don't think many people here fully understand the underlying principles and fundamental large uncertainties when you have a random process at work. 119. #112 In any case, the error bars as plotted (with the asymmetrical correction at small n if you want to be fussy) are the values needed to fit the data to any kind of function. They have to be carried through into any smoothing function like the running average used by Curry or Holland and Webster. Yes, yes. Absolutely. Why can't this be seen? If bender and others are calculating 'robust time series' correctly (I am not doubting that bender does; I don't think Judith Curry does), then this is all implicit in their work even if they don't realise it – Paul and I are just making it a bit more explicit.
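The "carried through into any smoothing function" requirement can be written down directly. For an 11-year running mean of independent counts, Var(mean) = (1/11^2) * sum(Var(n_i)); with the Poisson estimate Var(n_i) ≈ n_i, the error bar on each smoothed point is sqrt(sum of the window's counts)/11 — roughly sqrt(lambda/11), far tighter than the sqrt(n) bar on any single year. A sketch with made-up counts (the numbers below are illustrative, not the hurricane record):

```python
import math

counts = [4, 7, 6, 9, 3, 6, 8, 5, 7, 10, 6, 4, 8]   # illustrative annual counts
w = 11                                               # smoothing window

smoothed, errbar = [], []
for i in range(len(counts) - w + 1):
    window = counts[i:i + w]
    smoothed.append(sum(window) / w)
    # propagate Var(n_i) = n_i through the average: sqrt(sum n_i) / w
    errbar.append(math.sqrt(sum(window)) / w)

print(list(zip(smoothed, errbar)))
```

For the first window the smoothed value is 71/11 ≈ 6.45 with a propagated error of sqrt(71)/11 ≈ 0.77, against sqrt(6.45) ≈ 2.5 for a raw year — which is why a smoothed curve shown with, or judged against, single-year error bars misleads in exactly the way discussed above.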
120. #119 IL: I think I appreciate what you mean by I'm not quite in the category of Lord Rutherford who said "All science is physics or stamp collecting" but maybe my view of physics is more catholic than most. In its dedication to a search for grand theories, for unifying explanations of seemingly unrelated phenomena, physics is magnificent. When I wrote (#108) "I think IL may have defined statistics as a subset of physics," that is what I had in mind. It is a great and noble thing. However, I hope you appreciate that statisticians sometimes see things differently. For example, statisticians will bristle when they read what you endorsed (#119): "fit the data to any kind of function." You see, statisticians, in their parochial ways, believe that one fits functions to data — never the other way around. However, based on a sample of N=2, can I conclude that physicists do not subscribe to this principle? ;-) [OK: That last part was undeniably snarky]. Anyway, I think we are mostly in agreement. When are we having that beer? 121. #107 If we deal with a random variable X whose distribution depends on a parameter which is a random variable with a specified distribution, then the random variable X is said to have a compound distribution. One such example is the negative binomial distribution (the lambda of the Poisson distribution is a specially distributed rv): Hey, they mention 'tornado outbreaks'.. An AGW-oriented model would of course be lambda = f(anthropogenic CO2) (which is a possible alternative hypothesis to our H0: lambda = constant, which we have been testing here many days). And by figure 1 Paul says that a lot of very different lambda-curves could be well fit to this data. sqrt(0) = 0, no error bars, just a data point. Let's put this value into Price's text: This value is reported as 0 +- 0. There are only about 33 chances out of 100 that the true average number of counts for this time interval differs from 0 by more than 0.
1) Not very true; it underestimates the percentage. 2) No error bars: people get the idea that this is something exact, something completely different from the other cases (1, 2, 3, ..). +-sqrt(n) is a confusing rule for me, that's all. But I think I understood your message now. 122. Re: UC request in comment #3 added error bars equal to +- sqrt(count) as is appropriate for counting statistics. Not very familiar with this, any reference for layman? In keeping with my obsession to retain the major points of threads such as this one in a reasonable summary, I would like to see any reference presented to answer UC's question from very early in the thread — as my perusal failed to come up with one. I would like to add a request to see a reference that handles the 0 count with this approach. The reference should be for an application other than radioactive decay. Also I assume we are talking here, not about counting error as in radioactive decay for many independent measurements where if you have only one measurement the counting error is N^(1/2), but many measurements from the same system where the mean becomes N bar and the standard deviation becomes (N bar)^(1/2), which are the mean and standard deviation for a Poisson distribution as derived from all of the data points. 123. Once a contained weather system begins moving over the surface of the earth it is subjected to the factors created by movement over a rotating surface and also the movement of an object through a uniform medium. The speed within the hurricane has been discussed, but we also need to consider the speed with which the system moves over the surface. The deflection of the trajectory of a system as it moves away from the equator is affected by increasing coriolis effect and an important part of this is changing angular momentum (am). The latter influence (am) varies with the speed of the system. The photograph Margo provides (#102) appears to indicate the second factor and that is sinuosity.
There is clear sinuosity in the circumpolar vortex and in the flow of the Gulf Stream and North Atlantic drift. It is logical to assume that a weather system moving through the uniform medium of the atmosphere will be subjected to sinuosity. As I understand nobody has effectively explained the development of sinuosity. The best explanation I have heard is that it is the most efficient way of moving from A to B with the least amount of energy used – a natural conservation of energy process. 124. I see the problem now. Two issues have been conflated here. I have been arguing about what kind of graphical representation and error structure is required to make robust inferences about changes in the ensemble mean number of hurricanes expected in a year. The physics people are concerned about propagation of error, arguing that if you are going to use some observation in a calculation you need to know the error associated with the observation and carry that through the calculation. I won't disagree at all with the latter, but I would add that you had better understand the former if you want to understand what it is the hurricane climatologists are asking. My point is that it doesn't make sense to treat an observation as though it were a representative sample of the ensemble. You physics people need to think about what it means to infer a trend based on a sample realization drawn from a stochastic ensemble. Until you understand that you will continue to bristle at my comments. 125. Sampling error and measurement error are not the same thing. 126. Tim B., thanks for the post. You say: As I understand nobody has effectively explained the development of sinuosity. The best explanation I have heard is that it is the most efficient way of moving from A to B with the least amount of energy used – a natural conservation of energy process. Sinuosity is quite well explained by the Constructal Law.
This Law actually explains a whole host of phenomena, from the ratio of weight to metabolic rate in mammals to the general layout of the global climate system. There is a good overview at the usual fount of misinformation, I think William Connelly hasn’t realized that Bejan’s work covers climate. The Constructal Theory was developed primarily by Adrian Bejan. His description of the theory is here. A two page Word document, The design of every thing that flows and moves, is a good introduction to the theory. He is one of the 100 most highly cited engineers in the world. His paper: Thermodynamic optimization of global circulation and climate Adrian Bejan and A. Heitor Reis INTERNATIONAL JOURNAL OF ENERGY RESEARCH Int. J. Energy Res. 2005; 29:303–316 Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/er.1058 is a clear exposition of the major features of the global climate from first principles. Sorry to harp on this, but Bejan’s work has been wildly under-appreciated in the climate science community. 127. #120 TAC Steve has my email. Probably about time to go back to lurk mode. 128. IL, Paul, bender, et al. I enjoy debating with smart people, and this time has been a particular pleasure. I think the discussion helped clarify the issues, and made me aware of things I had not thought about in a long time, if ever (bender summarized it elegantly with “Sampling error and measurement error are not the same thing” — but they are both real and they both matter). I’m disappointed that no one jumped on the opportunity to point out, gratuitously, that “fitting data to functions (models)” seems to be SOP among some climate scientists. I thought that qualified as “low-hanging fruit”; you guys are so polite! 129. #47 Not sure about sinusoidal distribution, but how about Poisson distribution where lambda is a function of some other random process (see 107,121, and I think the paper Jean S linked in #19 is relevant as well). 
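The compound-distribution idea raised in #121 — a Poisson whose lambda is itself random — is easy to see numerically: draw a fresh lambda from a Gamma distribution each "year" and the resulting counts are negative binomial, with variance = E[lambda] + Var[lambda] rather than variance = mean. A sketch (the Gamma shape/scale of 3 and 2 are arbitrary, chosen only so E[lambda] = 6 matches the rate discussed in this thread):

```python
import math
import random

rng = random.Random(3)

def poisson(lam, rng):
    # Knuth's multiplication method for drawing a Poisson(lam) variate
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

N = 20000
shape, scale = 3.0, 2.0   # E[lambda] = 6, Var[lambda] = 12 (illustrative)
mixed = [poisson(rng.gammavariate(shape, scale), rng) for _ in range(N)]
plain = [poisson(6.0, rng) for _ in range(N)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

print(mean_var(mixed))   # variance well above the mean: negative binomial
print(mean_var(plain))   # variance roughly equal to the mean: pure Poisson
```

So variance clearly in excess of the mean in observed counts is itself a signature of a wandering lambda, whatever process drives it.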
I noted that there are extremes that don't fit well to your histograms (3rd Figure) – and these extremes are not necessarily recent ones (1887, 1933). One way to model overdispersed count data (variance greater than the mean) is using the negative binomial distribution (lambdas follow a Gamma distribution), but in this case some stochastic process as a 'lambda-driver' would probably do. We are now so close to Bayesian data analysis that I have to ask: Can anyone give a predictive distribution for future hurricanes given the SST? i.e. yearly p(n|SST). How different would it be from John A's Poisson(6)? We don't know the future SST, but we can plug those values in later. And the same for global temperature: give me p(T|CO2,Volcanic,Solar). I will check the accuracy of our knowledge ten years later with realized CO2, Volcanic and Solar. TAC, I wrote an example on fitting data to functions #63 in http://www.climateaudit.org/?p=1013 My model: 'Named Storms' is an i.i.d. Gaussian process. 2005 is over 4 sample stds, astronomically improbable. My models are never wrong, so 2005 is a faulty observation. Outlier. Removed. But let's not blame climate scientists for everything; this kind of fitting was invented earlier than climate science, I think :) 130. #126 Thanks Willis: It appears this is the information I had heard about, namely that sinuosity is an attempt to maximize energy efficiency by overcoming restrictions such as friction throughout an entire system. I would still like some response to my other points about angular momentum and sinuosity as applied to the movement of hurricanes. The deflection of all the tracks to the right as they move away from the equator in the Northern Hemisphere is mostly a function of adjustment to changing rotational forces. The degree of adjustment is a function of the speed of the entire weather system. Depending on which way this macro guidance sends the system then determines the geophysical and other factors that will come into play.
I realize this is not statistics, but the number of occurrences, such as US landfall of hurricanes, intensities achieved, and many of the factors being discussed here, are directly determined by 131. Re: #129 Interesting point. Bill Gray uses past TS data to construct a predictive model for forecasting TSs weeks ahead of the season and then massages the data again to "adjust" his predictions. The way I look at what he has accomplished is that the predictive power of the advanced model is not statistically significant but that prediction closer to the event is. I would think that someone must have published models for hurricane events using past data without the attempt to be predictive, i.e. after the fact. Gray uses numerous variables in his predictive models and SST, as I remember, plays a part, albeit a small one. As I recall Gray rationalizes his use of variables by attempting to explain the physics involved. Modern computer models are used to predict TSs but have very few out-of-sample results to judge them by. What we really need, at least as a starting point, is a computer model that uses the actual conditions at the time of the TS event as a predictor of TSs. But are we not getting a bit ahead of ourselves when data seem to indicate a trendless line of TSs versus time? Re: #130 I am sure you know better than I, but is it not a fact that one thing computer models have been able to accomplish with some success is to predict the tracks of TSs? What inputs do they use? 132. #131 Ken, thanks for the response. The ability to predict track and speed has not been very successful, especially when you consider the limited range of directions. That is, they all move in a general pattern in one relatively small quadrant of the compass. In addition, the predictions of different computers of a single hurricane vary considerably. 133. Oh, good grief. Hurricanes are one of the ways that Earth dissipates heat.
When the SST is hotter and atmospheric conditions (wind speeds, shear, etc.) are at the right level, the hurricanes increase in number and strength (as well as “simple” thunderstorms). It has to be related, in the long run, to SST. I’ll bet there were a lot more severe hurricanes during the MWP. It’s too bad we don’t have reliable proxies for past hurricanes. 134. RE #132 The computer models use standard meteorological inputs and generate a path and intensity prediction. One thing that’s been found is that an “ensemble” (average prediction, or range of predictions, from many models) is often better than the prediction from just one model. It’s also been found that flying jets into the surrounding atmosphere to gather data results in much-improved forecasts. It seems that the computers suffer from GIGO, which is not a surprise. I am very interested in seeing the European (ECMWF) computer storm season predictions this year. As I understand it, they let the computer run months of weather map predictions and then count the storms the computer generates. Good luck with that. 135. What I have been attempting to point out about models used to predict a TS event or its path is that the predictive capabilities appear to improve as data from actual current or near-current conditions are used to continually readjust the predictions. Longer-term predictions have to first make educated guesses as to what these conditions will be at a future time and then use those “guesses” of conditions to determine the probability of a TS event or the probability of the direction of its path. What I would like to see is how well these models perform if they have all the data of the existing conditions for an incremental step and then, as conditions change, how well they perform for the next step and so on. In other words, how well can they simply process current data by excluding the prediction of conditions? 136. A plot of NHC storm forecast errors is shown here.
The forecasts are one-day to five-day. For example, the chart shows that the typical error for the 2-day (48 hour) forecast is about 100 nautical miles. These can be thought of as computer + human forecasts. As shown, the farther into the future the forecast goes, the greater the cumulative error. I will say from watching many storms that, beyond five days, the forecasts are almost useless. This is why I’m fascinated to see what the Europeans will forecast from their computer-generated storm seasons. The computer-only performance is shown here, for the 2-day forecast. The computers do a little worse than computer + human, but they are improving. Interestingly, the Florida State Super Ensemble (FSSE) does as well as, or slightly better than, the human + computer performance. My understanding is that the FSSE looks at all computer models and considers their historical error tendencies in making its forecast. 137. #134 By implication, then, the problem is not enough models. More models and therefore better approximations. I also note the comments about better accuracy as the actual event approaches. This is the practice I see in Canadian forecasts. I call it progressive approximation. With regular weather forecasts I understand that if you say tomorrow’s weather will be the same as today you have a 63% chance of being correct. This is based on the rate of movement of weather systems, which generally take 36 hours to move through. Hence the probability of the weather being the same in 12 hours is 63%. Surely lead time is essential in forecasting for extreme events to provide time for evacuation or other reactions. How many times will people pack up and leave when there was no need? 138.
Tim Ball, you say in #130: Thanks Willis: It appears this is the information I had heard about, namely that sinuosity is an attempt to maximize energy efficiencies by overcoming restrictions such as friction throughout an entire Actually, it sounds like you are talking about something different, the minimization of entropy. The Constructal Law is something different and much more encompassing. It was stated by Bejan in 1996 as follows: For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed currents that flow through it. The basis of the theory is that every flow system is destined to remain imperfect, and that flow systems evolve to distribute the imperfections equally. One of the effects predicted by the Constructal Law is the one that you have alluded to above, the maximization of energy efficiencies. The Constructal Law predicts not only the maximization, but the nature and shape of the resulting flow patterns. Because of this power, it has found use in an incredibly wide variety of disciplines. See here for a range of papers utilizing constructal theory from a variety of fields, including climate science. All the best, 139. RE #137 I think the key to using multiple models in an ensemble is to know their weaknesses and then make an adjustment for those biases. The GFS model, for instance, may be slow at moving shallow Arctic air masses, so ignore it on those and look at the other models. The NAM model continuously generates a tropical storm near Panama during the hurricane season, so ignore it in that regard. And so forth. I think that’s what the ensemble method does. Seems, though, that the better approach is to fix the models. I have a question which you or someone else might be able to help me answer. The question is, why doesn’t the temperature in the upper Yukon (or other snow-covered polar land) on a calm night in the dead of winter fall to some absurdly low temperature, like -100C?
It seems to me that there is little heat arriving from the earth, due to snow cover, and little or no sunlight, and (often) clear skies allowing strong radiational cooling. What brakes the cooling? Thanks. I have a question which you or someone else might be able to help me answer. The question is, why doesn’t the temperature in the upper Yukon (or other snow-covered polar land) on a calm night in the dead of winter fall to some absurdly low temperature, like -100C? It seems to me that there is little heat arriving from the earth, due to snow cover, and little or no sunlight, and (often) clear skies allowing strong radiational cooling. What brakes the cooling? Thanks. David, where is the trick? A night in the dead of winter over there is the same as a day: without sun. 141. I would like to resurrect this thread if that is possible because I believe that everyone has missed the point. We were discussing statistical inference. Statistical inference involves hypothesis testing. Hypothesis testing involves setting up a Null Hypothesis. Discussions such as the one about whether we should or should not show confidence limits on graphs can often be resolved by asking the question “What is the underlying null hypothesis?”. Indeed, is there a null hypothesis underlying Paul Linsay’s claim that the sample data are a “good” fit to a Poisson distribution? I will now set up a null hypothesis for dealing with Paul’s proposition about the hurricane data. My null hypothesis is the following statement: “The annual hurricane counts from 1945 to 2004 are sampled from a population with a Poisson distribution and the hurricane count of 15 for the year 2005 is a sample from that same population.” The mean count for the 60 years 1945 to 2004 inclusive is 5.97. We will use this as an estimate of the parameter of the distribution. I have calculated that the probability of obtaining a count of 15 or greater from a Poisson-distributed population with a parameter of 5.97 is .0005, i.e. 1 in 2000.
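This tail calculation is easy to reproduce. A stdlib-only sketch (recomputing afresh, the inclusive tail P(X ≥ 15) comes out nearer 1.3e-3; the quoted 1-in-2000 figure matches the strict tail P(X > 15)):

```python
import math

def poisson_pmf(k, lam):
    # Poisson probability mass function, computed in log space for stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def poisson_tail(k, lam, terms=200):
    # P(X >= k), by direct summation (fine for small lambda)
    return sum(poisson_pmf(i, lam) for i in range(k, k + terms))

lam = 5.97                       # 1945-2004 mean quoted in #141
p_ge_15 = poisson_tail(15, lam)  # inclusive tail, ~1.3e-3
p_gt_15 = poisson_tail(16, lam)  # strict tail, ~5e-4 (the "1 in 2000" figure)

# The later multiple-comparison adjustment for a 60-year record,
# using the 0.0005 figure as quoted in the thread:
p_in_60 = 1 - (1 - 0.0005) ** 60   # ~0.03, the "3 percent" level
```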
We can therefore reject the null hypothesis at the 0.1 percent level. It follows that either 2005 is an exceptional year which is significantly different from the 60 preceding years or that the process which generates annual hurricane counts is not a Poisson distribution. Personally I prefer the latter interpretation. Hurricane generation is likely to depend on large-scale ocean parameters such as mixed layer depth and temperature which persist over time. Because of this it is unlikely that successive hurricanes are independent events. If they are not independent then they are not the outcome of a Poisson process. Poisson works only if there is no clustering of events. A back-of-the-envelope calculation indicates that 15 is rather a large sample value. With a mean of about 6 and a standard deviation of about 2.5, 15 is more than 3 standard deviations away from the mean. It will certainly have a low probability. Ironically, Paul Linsay’s data examined in this way leads to a conclusion which is diametrically opposed to his original intention in presenting the data. All the same it was a great idea and one certainly worth discussing on Climate Audit. Thanks Paul. 142. It seems to me, accepting your figures, that your 1 chance in 2000 is the statistic that gives the chance of having 15 hurricanes in any one year. However, there are 60 years in 1945-2004, and so your 1 in 2000 becomes 60 in 2000 for any set of sixty years. From your calculation, there’s a 3 percent chance that in any 60-year period, one year will have 15 hurricanes. From Paul’s figure, we see one 15-hurricane year. So, your null hypothesis is rejectable at the 3 percent level. Not very significant. 143. Re #141, 142: And for us lay folk, the conclusion is?? 144. #141, 142: Pat’s analysis is the correct one. In my original calculation I got a mean of 6.1 hurricanes per year. This gives a probability of 1.0e-3 of observing 15 hurricanes in any one year.
In 63 years the probability is 6.5e-2 of observing at least one 15 hurricane year. Next point: When should a rare event generated by a stochastic process occur? Only when you’re not looking? Only if you’ve taken a very long time series? The correct answer: They happen at random. It’s hard to build up an intuition for these kinds of probabilities. In my youth, I spent many sleepy nights on midnight shift at Fermilab watching counter distributions build up. They always looked strange when there were only a few tens to hundreds of events. It takes many thousands of events to make the distributions look like the classic Poisson distribution. We just don’t have that kind of data for hurricane counts. There is another very strong piece of evidence for the Poisson nature of hurricane counts that is not shown on this thread but is shown in the continuation thread. If you scroll down to Figure 5 you will see a plot of the distribution of times between hurricanes. Hurricanes occur at random but the time between them follows an exponential distribution, which is the classic signature of a Poisson process. The same distribution occurs if the data is restricted to 1944-2006, and within errors, with the same time constant. 145. Pat Frank, you say However, there are 60 years in 1945-2004, and so your 1 in 2000 becomes 60 in 2000 for any set of sixty years. but my null hypothesis was not about ANY year of the 60 years it was specifically about THE year 2005. The hypothesis that the high count in 2005 arose purely by chance can be rejected at the 0.1 percent level as I stated. It might be more appropriate to criticize my choice of a specific year in my null hypothesis. I did so because it is a recent year. We are looking for a change in the pattern. The subtext of all this is that the Warmers are saying that global warming is causing more cyclones and Paul is saying “No it’s just chance”. 
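The exponential inter-arrival signature described in #144 is easy to demonstrate by simulation. A stdlib-only sketch (rate and horizon are illustrative choices): draw exponential gaps, then check that counts per unit interval behave like a Poisson variable, with variance roughly equal to the mean.

```python
import random

random.seed(0)
rate = 6.0        # mean events per "season" (illustrative)
seasons = 10000

# A Poisson process has exponential inter-arrival times; simulate one and
# check that counts per season come out Poisson-like (mean ~ variance ~ rate)
counts = [0] * seasons
t = random.expovariate(rate)
while t < seasons:
    counts[int(t)] += 1
    t += random.expovariate(rate)

mean = sum(counts) / seasons
var = sum((c - mean) ** 2 for c in counts) / (seasons - 1)
```

The variance-to-mean ratio near 1 is the same diagnostic, run in reverse, as reading the exponential distribution off the gap histogram in Figure 5 of the continuation thread.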
When, after 60 years of about 6 cyclones a year, we suddenly get a year with 15 cyclones, is that due to chance? I have shown that it is not. It is too improbable. It is highly significant and we need to look for another explanation. With regard to Figure 5 on the other thread – the real issue is whether the displayed sample is significantly different from the exponential law expected for a Poisson distribution, not whether the graph looks good. To test this you would need to use chi-square or Kolmogorov-Smirnov as suggested by RichardT on the other thread. Quantitative statistical methods provide the best way of extracting the maximum amount of information from a limited amount of data. Qualitative methods like eyeballing a graph really don’t tell us very much; it is too easy to fool yourself. Even though 2005 had a significantly greater number of cyclones I do not believe that this supports AGW. All it does is imply that a mechanism exists for generating an abnormal number of cyclones in some years. The modest count of 5 in 2006 suggests that 2005 was a one-off rather than a trend. A one-in-two-thousand-year probability and you don’t think that is significant? 146. John Reid, I don’t understand the logic. If you picked a year at random, it might be true, but if you pick just the highest year out of the bunch, you have to look at the odds of that turning up in a sample of N=60, not N=1. 147. Right, if the chance of a given year having x hurricanes is 1 in 2000, then the chance of at least one year out of 60 having x hurricanes is 60 in 2000. So, while it’s highly unlikely that 2005 be that year, it’s not quite as unlikely that that year should be between 1941 and 2000 (for example). 148. I essentially agree with Paul Linsay’s comments, i.e. that the counts are best fit with Poisson distributions and that the year 2005 was a very unusual year.
I also think that, since a better fit is derived for a Poisson distribution by dividing the time period and we have some a priori doubts about the early counts, the results from splitting the data would agree with an undercount (a random one, that is). The error bars have nothing whatsoever to do with these arguments, but error bars representing the mean (square root of it) for the entire period would be more appropriate. Thought I would sneak that in. My background on using the square root of the mean for an individual count to indicate statistical error is appropriate if I were counting a radioactive decay that I knew would yield a Poisson distribution and I made only 1 count. If I made multiple counts I would average them and use the resulting mean to calculate the statistical error. Paul, I reside close to where you spent your midnight shifts. 149. #145 — John, thinking statistically is an exercise in counter-intuitive realizations. If you choose any event and ask after the probability that it would happen just when it did, that number will be extremely small. Does that mean no events at all will ever happen? With regard to your admirable calculation, you can choose 2005 to be your year of study, but the statistics of your system include 60 years and not just one year, because you applied it to Paul Linsay’s entire data set. That means a 15-hurricane year has a 3 percent chance of appearing somewhere in that 60 years. The fact that it appeared in 2005 is an unpredictable event and has the tiny chance you calculated. But that same tiny chance would apply to every single year in the entire 2000 years of record. And in that 2000-year span, we know from your calculation that the probability of one 15-hurricane year is 1 (100%). Even though the chance of it appearing in any given year is 0.0005. So, how does one reconcile the tiny 0.0005 chance in each and every year with a probability of 1 that the event will occur within the same time-frame?
One applies statistics, as Paul did, and one shows that the overall behavior of the system is consistent with a random process. And so the appearance of unlikely events can be anticipated, even though they cannot be predicted. You wrote: “When, after 60 years of about 6 cyclones a year, we suddenly get a year with 15 cyclones is that due to chance?” Look at Paul’s original figure at the top of the thread: there was also a 12-hurricane year and two elevens. Earth wasn’t puttering along at a steady 6, and then suddenly jumped to 15, as you seem to imply. It was jumping all over the place. There was also a 2-hurricane year (1981). Isn’t that just as unusual? Does it impress you just as much as the 15-year? 150. John, What you are doing is a post hoc test – finding an interesting event and showing that it is unexpected under the null hypothesis. This type of test is very problematic, and typically has a huge Type-1 error. But you are correct that Paul Linsay’s analysis is incomplete. A goodness-of-fit test is required, AND a power test is required to check the Type-2 error rate. 151. Willis says: If you picked a year at random, it might be true, but if you pick just the highest year out of the bunch, you have to look at the odds of that turning up in a sample of N=60, not N=1. I didn’t pick it because it was the maximum, I picked it because it was recent. I am testing for change. It is a pity that this conversation didn’t happen a year ago when it would have been THE most recent year. I agree that the water is muddied slightly by it not being the most recent year. Ken Fritsch says: the year 2005 was a very unusual year Thank you Ken. So unusual in fact that a year like this should only occur once in 2000 years if the Poisson assumption is correct. I agree with what you say about error bars. Pat Frank says: And in that 2000 year span, we know from your calculation that the probability of one 15-hurricane year is 1 (100%). Where did you learn statistics?
By your argument the probability of a 15-hurricane year in 4000 years would be 2. In fact the probability of a 15-hurricane year in 2000 years is 1 − (1 − 0.0005)^2000 = 0.632, not 1. there was also a 12-hurricane year and two elevens. Earth wasn’t puttering along at a steady 6, and then suddenly jumped to 15, as you seem to imply. It was jumping all over the place….. Isn’t that just as unusual? Does it impress you just as much as the 15-year? The mean of the whole 62-year period is 6.1. I have calculated the Poisson probability of 11 or more hurricanes in a single year to be .0224. The probability of one or more 11-or-more-hurricane years in 62 years is therefore 0.75. The probability of 4 such events is .19. No, it doesn’t impress me. The method I used, whereby I partitioned the data into two samples, used the large sample to estimate the population parameter and then tested the other sample to see if it is a member of that same population, is a standard method in statistics. 152. Re: #150 But you are correct that Paul Linsay’s analysis is incomplete. A goodness-of-fit test is required, AND a power test is required to check the Type-2 error rate. I did a chi-square test for a Poisson fit and a normal fit and the fit was significantly better for a Poisson distribution than for a normal one. The fit for the period 1945 to 2006 was excellent for a Poisson fit and good for the period 1851 to 1944 — using of course two different means and standard deviations. Re: #151 Ken Fritsch says: the year 2005 was a very unusual year Thank you Ken. So unusual in fact that a year like this should only occur once in 2000 years if the Poisson assumption is correct. If essentially all of the data fits a Poisson distribution and one year shows a very statistically significant deviation I would not necessarily be inclined to throw out the conclusion that the data fits a Poisson distribution reasonably well. A 1 in 2000 year occurrence has to happen in some 60 year span.
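The corrected 2000-year figure in #151 checks out; a quick stdlib-only sketch:

```python
import math

p_year = 0.0005   # per-year probability quoted in the thread
years = 2000

# Probability of at least one 15-hurricane year in 2000 years
p_at_least_once = 1 - (1 - p_year) ** years   # ~0.632, not 1.0

# For small p this approaches 1 - exp(-p*years): the EXPECTED number of such
# years is p*years = 1, but the chance of seeing at least one is only ~63%
approx = 1 - math.exp(-p_year * years)
```

This is the distinction at issue: expected counts add linearly (which is why naively doubling the span "gives 2"), while probabilities of at-least-one-occurrence compound and can never exceed 1.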
The evidence says that the occurrence of hurricanes can be best approximated by a Poisson distribution, and with all the implications of that, but that does not mean that nature allows for a perfect fit as that is seldom the case. Also, what if some of the hurricanes in 2005 were counted simply because of marginal wind velocity measurements? After all, we are counting using man-made criteria and measurements. It is not exactly like we are measuring hard quantities in the realm of physics. 153. #47 Willis, I’ve heard of sinusoidal distributions but it never crossed my mind to combine one with a Poisson distribution. Great idea :). Have you looked at the power spectrum of the hurricanes? Perhaps you can identify a few spectral peaks. 154. Ken Fritsch says: A 1 in 2000 year occurrence has to happen in some 60 year span. Reminds me of my granny who used to say “well someone’s got to win it” whenever she bought a lottery ticket. The evidence says that the occurrence of hurricanes can be best approximated by a Poisson distribution Is this the evidence which is under discussion here or do you know some other evidence that you haven’t told us about? If not, aren’t you assuming what you are trying to prove? Go back to my original post and look at the null hypothesis which I set up. Because the computed probability was extremely low we must reject the null hypothesis. Okay? Therefore EITHER 2005 was a special year OR the underlying distribution is not Poisson distributed. It’s your choice. It appears you have chosen the former. I am happy with that. 2005 was a significantly different year from the preceding years. The next step is to find out why. Let’s use statistics as a research tool rather than a rhetorical trick. I am not arguing in favour of AGW. I am arguing against eyeballing graphs and in favour of using quantitative statistics. Paul Linsay picked a lousy data set with which to demonstrate his thesis.
It’s a pity he didn’t do it 2 years ago; it might have worked without the 2005 datum. 155. I could really use a latex preview. Please delete the above post: I find this an intriguing discussion. I think John Reid has a good point. A Poisson or sinusoidal Poisson may be a good distribution to describe most of the process but may not describe tail events (clustering) well. John Reid also puts forth the other hypothesis that in recent years the number of hurricanes is increasing. So then why not use some kind of Poisson regression: Say: $\lambda(t) = t \lambda_{1} + \lambda_{0}$ Then: $\Pr(Y_{i} = y_{i}) = \exp(-t \lambda_{1} - \lambda_{0}) \, (t \lambda_{1} + \lambda_{0})^{y_{i}} / (y_{i}!)$ For the case of N independent events: $\Pr(Y_{1}=y_{1}, \dots, Y_{M}=y_{M}) = \Pr(Y_{1}=y_{1}) \cdots \Pr(Y_{M}=y_{M})$ I think if you take the log of both sides and then find the maximum value by varying lambda_1 and lambda_0 you will get the optimal value. Recall the maximum occurs where the derivative is equal to zero. It looks like you could reduce the problem to finding roots of a polynomial. Perhaps there are more numerically robust ways to handle the problem. That said, using the roots of a polynomial could give an initialization to a gradient-type optimization algorithm. 156. Another thought: once you have found the optimal value for the slope of the mean in the Poisson distribution, one could do as Willis has done in post 47. But this time, instead of plotting a sinusoidal Poisson distribution, we plot a distribution which is a composition of a linear trend with a Poisson distribution. Given that in reality the mean has a sinusoidal component for sure and maybe a linear component, it should be pretty clear the class of distributions we should be looking at. We must remember all models are an approximation. The point is not to disprove a model but find the model that is the best balance between the fewest number of parameters and the greatest 157.
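A numerical sketch of the Poisson-regression idea in #155 — maximizing the log-likelihood of lambda(t) = lambda_0 + lambda_1·t by brute force. The counts and the grid search are illustrative assumptions (a real fit would use glm() or Newton's method, as the root-finding/gradient remark suggests):

```python
import math

# Hypothetical yearly counts with a mild upward trend (illustrative, not HURDAT)
counts = [5, 4, 7, 6, 5, 8, 6, 9, 7, 10, 8, 11]
years = list(range(len(counts)))

def neg_log_lik(l0, l1):
    # Negative log-likelihood for independent Poisson counts, lambda(t) = l0 + l1*t
    total = 0.0
    for t, y in zip(years, counts):
        lam = l0 + l1 * t
        if lam <= 0:
            return float("inf")   # intensity must stay positive
        total -= -lam + y * math.log(lam) - math.lgamma(y + 1)
    return total

# Crude grid-search MLE over (lambda_0, lambda_1)
best = min(
    ((a / 10.0, b / 100.0) for a in range(1, 120) for b in range(0, 100)),
    key=lambda pair: neg_log_lik(*pair),
)
l0_hat, l1_hat = best
```

On trended data the fitted slope comes out positive and the trended model beats the best constant-mean fit in likelihood, which is exactly the comparison #155's hypothesis calls for.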
Isn’t the real question not whether the data approximate a Poisson distribution, but whether the mean of the distribution is constant or varies with time? The Cusum chart that I posted as comment #19 on the continuation of this thread clearly demonstrates the mean isn’t constant and that the mean has increased to about 8 since 1995. Hurricanes occur in small numbers each year. You can’t have less than zero hurricanes. Of course the distribution will appear to be approximately Poisson. How can it not be? 158. I found an informative primer on tropical cyclones. Global Guide to Tropical Cyclone Forecasting Here are a couple of excerpts from chapter one related to the Poisson distribution. Care is needed in the interpretation of these data. A frequency of 100 cyclones over 100 years indicates an average of 1 per year. This should not be interpreted as a 100% probability of a cyclone occurring on that date. Rather, use of the Poisson distribution (Xue and Neumann, 1984) indicates a 37% chance of no tropical cyclone occurring. This distribution provides an excellent estimate of occurrence probability for small numbers of cyclones in limited regions. If a long period of accurate record is available, Neumann, et al. (1987) found that the use of relative frequencies provides a better estimate of event probability. A useful estimate of the number of years having discrete tropical cyclone occurrence in a particular area (the number of years to expect no cyclones, 1 cyclone, etc) may be obtained by use of the Poisson distribution. Discussion on this application is given by Xue and Neumann (1984). 159. DeWitt Payne, The sinusoidal Poisson distribution is exactly that. It is a Poisson distribution where the mean changes with time. A sinusoidal Poisson distribution should be more tail-heavy than the linearly increasing mean which was suggested by John Reid. However, perhaps a modulated linearly increasing mean would be even more tail-heavy.
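The guide’s 37% figure is simply the Poisson probability of zero events when the mean is one per year; a one-line check:

```python
import math

# An average of 1 cyclone per year does NOT mean a cyclone every year:
# under Poisson(1), the chance of a year with no cyclone at all is exp(-1)
lam = 1.0
p_none = math.exp(-lam)   # ~0.37, the guide's "37% chance of no tropical cyclone"
```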
Isn’t the real question not whether the data approximate a Poisson distribution, but whether the mean of the distribution is constant or varies with time? The Cusum chart that I posted as comment #19 on the continuation of this thread clearly demonstrates the mean isn’t constant and that the mean has increased to about 8 since 1995. Hurricanes occur in small numbers each year. I agree that the count data and Poisson distribution fitting is better approximated by a Poisson distribution that has a change in mean with time. I suspect that your Cusum chart is probably overly sensitive in picking up a statistically significant change in mean. To reiterate what I found for the time periods below for a mean, Xm, and the probability, p, of a fit to a Poisson distribution:
1851 to 2006: Xm = 5.25 and p = 0.087
1945 to 2006: Xm = 6.10 and p = 0.974
1851 to 1944: Xm = 4.69 and p = 0.416
Now the probability, p, for the period 1945 to 2006 shows an excellent fit to a Poisson distribution, while 1851 to 1944 shows a good fit and 1851 to 2006 shows a poorer fit but not in the reject range of less than 0.05. As RichardT noted, we need to look at Type II errors, and those errors of course increase from very small for 1945 to 2006 and intermediate for 1851 to 1944 to large for 1851 to 2006, as evidenced by the values of p. My other exercise in this thread was to determine the sensitivity of the goodness-of-fit test to a changing mean and found that while the test does not reject a fit for excursions as large as 1 count per year from the mean, the value of p decreases significantly. If one had a small and/or slowly changing sinusoidal variation in mean shorter than the time periods measured (for a fit to a Poisson distribution) it is doubtful that the chi-square test would detect it. I think one can make a very good case for the Poisson fit from 1945 to 2006 and a complementary reasonable case for a Poisson fit from 1851 to 1944 with a smaller mean.
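The goodness-of-fit probabilities quoted above can be computed along these lines. A self-contained sketch that uses simulated counts (not the HURDAT record) and a Monte Carlo null distribution in place of chi-square tables:

```python
import math
import random

random.seed(1)

def poisson_pmf(k, lam):
    # Poisson pmf in log space for numerical stability
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def sample_poisson(lam):
    # Knuth's multiplication method (fine for small lambda)
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def chi2_stat(sample, lam, kmax=20):
    # Pearson chi-square against Poisson(lam), pooling all counts >= kmax
    n = len(sample)
    obs = [0] * (kmax + 1)
    for c in sample:
        obs[min(c, kmax)] += 1
    tail = 1.0 - sum(poisson_pmf(j, lam) for j in range(kmax))
    stat = 0.0
    for k in range(kmax + 1):
        expected = n * (poisson_pmf(k, lam) if k < kmax else tail)
        stat += (obs[k] - expected) ** 2 / expected
    return stat

# Simulated 62-year record (illustrative only, not the HURDAT counts)
counts = [sample_poisson(6.1) for _ in range(62)]
lam_hat = sum(counts) / len(counts)
observed = chi2_stat(counts, lam_hat)

# Monte Carlo p-value under the Poisson null
reps = 500
exceed = 0
for _ in range(reps):
    sim = [sample_poisson(lam_hat) for _ in range(62)]
    if chi2_stat(sim, sum(sim) / len(sim)) >= observed:
        exceed += 1
p_value = exceed / reps
```

The Monte Carlo reference distribution sidesteps the small-expected-count caveats of the asymptotic chi-square approximation, which matter here because a 62-year record spreads thinly over the possible counts.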
With the evidence for earlier undercounts of TCs, the smaller early mean with a Poisson distribution could agree with that evidence — if one assumed the earlier TCs were missed randomly. Or one could, if a priori evidence was there, make a case for large-period sinusoidal variations in TC occurrences. I am not sure how a case would be made for a slowly changing mean from a Poisson distribution as a function of increasing SSTs for the period 1945 to 2006, but I am sure that someone has or will make the effort. Small changes in the 1945 to 2006 mean due to under- (or over-) counting and/or trends due to small temperature changes and/or a small cyclical variation probably would not be detectable in the chi-square goodness-of-fit test. Having said all that, the fit for that time period is excellent. 161. Minor note: the period 1945-2006 saw a drift upwards in storm count due to increased counting of weak, short-lived storms and of those hybrids called subtropical storms. If anyone desires to remove those, so as to give a more apples-to-apples comparison, then remove those that lasted 24 hours or less (at 35 knot or higher winds) and the storms which were labeled in the database as 162. Re: #160 The cusum chart is designed to detect small changes in a process, in the range of 0.5 to 2.0 standard deviations, more rapidly than a standard individual control chart. So I don’t think it is overly sensitive considering that the changes observed are likely to be small. In fact, I’m rather surprised that hurricane researchers haven’t already used it. But then it is a technique used mostly in industry. If the annual count of hurricanes were truly random then annual hurricane predictions would have no skill compared to a prediction based on the median (or maybe the mode) of the distribution. This should be testable and probably already has been. Anybody have a quick link to the data before I go looking? 163.
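A tabular one-sided CUSUM of the kind described in #157/#162 takes only a few lines. A sketch on hypothetical counts, where the target mean mu0, the allowance k, and the decision interval h are illustrative tuning choices:

```python
# Tabular (one-sided) CUSUM for detecting a small upward shift in the mean count
counts = [5, 6, 4, 7, 5, 6, 5, 8, 9, 8, 10, 9, 11, 8]   # hypothetical data
mu0 = 5.5   # in-control mean (target)
k = 0.5     # allowance, roughly half the shift we want to detect
h = 4.0     # decision interval: signal when the cumulative sum exceeds this

hi = 0.0
signals = []
for c in counts:
    # Accumulate only excess above mu0 + k; reset to zero otherwise
    hi = max(0.0, hi + (c - mu0 - k))
    signals.append(hi > h)
first_signal = signals.index(True) if any(signals) else None
```

On this series the chart stays quiet through the early fluctuation and signals at index 8, where the run of elevated counts begins, which is the "detect a 0.5 to 2 sigma shift quickly" behavior claimed in #162.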
RE #162 Bill Gray’s individual hurricane forecasts, including his review of their forecast skill, can be found in the individual reports located here. I don’t know of any comprehensive study, though I seem to recall that Greg Holland did some kind of review (which CA’s willis later found to be of little merit). I also seem to recall that willis did a review on CA and found skill in Gray’s forecasts. 164. Solow and Moore 2000, which I mentioned in #1 above, includes a test for whether there is a secular trend in the Poisson parameter (concluding for Atlantic hurricanes 1930-1998 that there isn’t.) If anyone can locate an implementation of this test in R (which seems to have every test under the sun), I’d be interested. It seems to me that the strategy for testing the presence of a secular trend (see the null hypothesis H0 described there) would be equally applicable, mutatis mutandis, for testing the presence of a sin(b*t) term rather than just a t term. Andrew R. Solow and Laura Moore, Testing for a Trend in a Partially Incomplete Hurricane Record, Journal of Climate, Volume 13, Issue 20 (October 2000), pp. 3696–3699. url 165. Re: #161 Minor note: the period 1945-2006 saw a drift upwards in storm count due to increased counting of weak, short-lived storms and of those hybrids called subtropical storms. I would agree that ideally as much of the observational differences as can be presumed should be removed from the data before looking at fits to a Poisson distribution. Re: #162 I am not aware of a Cusum analysis being used to evaluate statistically significant changes in means, but have seen it used exclusively as an industrial control tool. Maybe a good reference to a statistical book or paper would convince me. Re: #163 I don’t know of any comprehensive study, though I seem to recall that Greg Holland did some kind of review (which CA’s willis later found to be of little merit). I also seem to recall that willis did a review on CA and found skill in Gray’s forecasts.
I found that without the late adjustments (closer to the event) there was no skill in Gray’s forecasts. I also have the idea that he used some adjustments that were not necessarily part of any objective criteria but were more subjective. I did find skill when late adjustments were used. Re: #164 Solow and Moore 2000 which I mentioned in #1 above includes a test for whether there is a secular trend in the Poisson parameter (concluding for Atlantic hurricanes 1930-1998 that there I need to read this link more closely, but if they have looked at a fit of landfalling hurricanes to a Poisson distribution, I must say: why did I not think of that? 166. John Reid (#141) writes: “The mean count for the 60 years 1945 to 2004 inclusive is 5.97. We will use this as an estimate of the parameter of the distribution. I have calculated that the probability of obtaining a count of 15 or greater from a Poisson distributed population with a parameter of 5.97 is .0005, ie 1 in 2000. We can therefore reject the null hypothesis at the 0.1 percent level. It follows that either 2005 is an exceptional year which is significantly different from the 60 preceding years or that the process which generates annual hurricane counts is not a Poisson distribution. Personally I prefer that latter interpretation. Hurricane generation is likely to depend on large scale ocean parameters such as mixed layer depth and temperature which persist over time. Because of this it is unlikely that successive hurricanes are independent events. If they are not independent then they are not the outcome of a Poisson process. Poisson works only if there is no clustering of events.” I was thinking about your comments and I’ve decided that if you are interested in a good fit of the tail statistics then you should not estimate the mean via a simple average. You should use maximum likelihood to estimate the mean.
This will mean that the fit you obtain for the distribution will have fewer of these highly unlikely events but will have a worse chi-squared score. 167. #151 — I never claimed to be an expert in statistics. Whatever I may be expert in, or not, doesn’t change that no matter when a 15-hurricane year showed up across however many years you like, your method of isolating out that particular year requires it to be highly improbable and demanding of a physical explanation. Your null experiment, in other words, telegraphs your conclusion. There is a physical explanation, of course, but in a multiply-coupled chaotic system a resonance spike like a 15-hurricane year will be a fortuitous additive beat from the combination of who-knows-how-many underlying energetic cycles. It’s likely no one will ever know what the specific underlying physical cause is for the appearance of any particular number of hurricanes. There is another aspect of this which is overlooked. That is, in a short data set like the above, there won’t have been time for the appearance of very many of the more extreme events. That means the calculated mean of what amounts to a truncated data series is really a lower limit of the true mean. A Poisson distribution calculated around that lower limit will leave isolated whatever extremes have occurred, because the high-value tail will attenuate too quickly. For example, the Poisson probability of 15 hurricanes in a given year increases by factors of 1.7 and 3.2 over mean=6.1 if the true mean is 6.5 or 7 hurricanes per year, resp. That 6.1 per year is a lower limit of the true mean then makes 15 hurricanes less unusual, and so perhaps less demanding of an explanation that, in any case, would probably be unknowable even if we had a physically perfect climate model.* *E.g. M. Collins (2002), Climate predictability on interannual to decadal time scales: the initial value problem, Climate Dynamics 19, 671–692. 168. Re #164 Good question. 169. #141.
I agree with John Reid. Poisson is only a hypothesis. Some sort of autocorrelation certainly seems possible to me, especially once the year gets started. My guess as to a low 2006 season was based on the idea that it had a slow start and whatever conditions favored the slow start would apply through the season. Also, for all we know, the true distribution may be a somewhat fat-tailed variation of the Poisson distribution. I doubt that it would be possible to tell from the present data. 170. #141. John Reid, wouldn’t it make more sense to test the hurricane distribution for 1945-2006 as Poisson rather than calculating a parameter for 1945-2004 and then testing 2005? A test for Poisson is that the Poisson deviance is approximately chi-squared with degrees of freedom equal to the length of the record. Here’s a practical reference. Here’s an implementation of this test in R:

index <- (1945:2006) - 1850; N <- length(index); N  # 62
x <- hurricane.count[index]  # hurricane.count is a series commencing 1851
x_hat <- exp(coef(glm0)); x_hat  # 6.145161
test <- 2*sum(x * log(x/x_hat)); test  # Poisson deviance, asymptotically chi2 with df = N
# 61.63861

The value of x_hat here is no surprise as it is very close to the mean. Including the 2005 and 2006 records, the Poisson deviance is almost exactly equal to the degrees of freedom. 171. Re: #170 I was concerned that the numbers in your post did not match the chi square test I used to obtain a p = 0.97 for a fit of the 1945-2006 hurricane counts to a Poisson distribution. I then read the paper linked and believe that the approach used there is much different than the one I used. (I excerpted it below and ask whether I am correct that this is what you used — without the a priori information incorporated.) The p = 0.51 for a Poisson fit that I believe I can deduce from your printouts, while indicating a good fit, is significantly below the one I calculated using the approach with which I am familiar.
I am wondering whether the binning involved in my approach, which requires at least 5 counts per bin, is what makes the difference here. The df in my approach are of course related to the counts minus 2, which in this case could not exceed 13 but is made smaller by the binning of more than one number to meet the 5-minimum requirement. Binning the 15 count for 2005 with other, lesser counts to meet the 5-minimum binning requirement in effect takes a very low-probability appearance of 1 occurrence at 15 counts and combines it into a bin of 5 counts over 11 or 12, which will have a higher probability. An alternative method of interpreting the model deviance is to estimate what the deviance value should be for a sparse data set if the model fitted the data well, and this is possible using simulations of the data. The fitted values which are derived from the original Poisson model may be regarded as the means of a set of Poisson random variables and, assuming that these fitted values are correct, random numbers for each observation can be generated and compared with the Poisson cumulative distribution function to provide simulated data. A new set of fitted values may then be estimated by fitting a Poisson model to these simulated data and the deviance of this new model may be calculated by comparing the simulated data, which are now treated as the observed values, with the new set of fitted values. Because the data have been produced according to a known model, the deviance is approximately what we would expect if a correct model were fitted to the original, sparse data set. If the observed model deviance lies within the middle 95% of the simulated distribution, it is reasonable to accept the model at the 0.05 significance level. The application of this approach to the sparse data set analysed in this study is described below.
A priori information may also be included in generalized linear models… Such a priori information is incorporated into the model by treating it as a covariate with a known parameter value of 172. Does anyone have the land falling hurricane data and/or a link to it for the NATL over the extended period of 1851 to 2006 — or a shorter period such as 1930 to 2006? I would like to see how well that data fits a Poisson distribution over the whole period and perhaps for split periods. 173. I collated the landfall data into a readable form. The HURDAT data is formatted in a very annoying way.

landfall.count <- ts(tapply(!is.na(landfall$year), factor(landfall$year, levels=1851:2006), sum), start=1851)

174. Here is some code that can be used to give an appreciation of the type-2 errors in Paul Lindsay’s analysis. This simple analysis tests if Ho (the process is Poisson) is rejected for mixtures of Poisson processes using a chi-squared test. The mixture has two populations of ~equal size, with means 6.1+offset and 6.1-offset, where the offset is varied between 0 and 3. This is repeated many times, and the proportion of times Ho is rejected is summed. When the offset is zero, Ho is rejected 5% of the time at p=0.05; this is expected. The test has almost no power with small offsets: the proportion of rejects is only slightly higher than the type-1 error rate. With an offset of 2, i.e. population means of 4.1 and 8.1, Ho is rejected about half the time. Still larger offsets are required to reliably reject Ho. This case is extreme, using two discrete populations rather than a continuum, but still Ho is not reliably rejected unless the difference between populations is large - Paul Lindsay's test has little power. An alternative test, GLM, is much more powerful, and can be used to show that the hurricane counts are not Poisson distributed. 175. Code again

N = 100  # increase to 1000 for more precision
nyr = 63  # number of years in record

176.
last try

N = 100  # increase to 1000 for more precision
nyr = 63  # number of years in record
plot(off.set, rejectHo, xlab="Offset", ylab="Probability of rejecting Ho")

#          [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13]
# off.set  0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 2.25  2.50  2.75  3
# rejectHo 0.05 0.07 0.09 0.09 0.17 0.20 0.44 0.49 0.63 0.83  0.97  0.98  1

177. Re: #174 I found that for the chi square goodness of fit to a Poisson distribution from 1945 to 2006 went from 0.974 with a mean of 6.1 to a p 178. Re: #174 I found that for the chi square goodness of fit to a Poisson distribution for 1945 to 2006 hurricane counts went from 0.974 with a mean of 6.1 to a p 179. Re: #174 I found that the chi square goodness of fit to a Poisson distribution for the 1945 to 2006 hurricane counts went from p = 0.974 with a mean of 6.1 to a p less than 0.05 when I inserted means of 5.0 and 7.2. The value of p decreases slowly for the initial incremental changes from 6.1 and then decreases at an ever increasing rate as the changes get further from 6.1. Do you have more details on the alternative GLM test? I now remember that the greater or lesser than signs will stop the post. 180. Steve McIntyre says (#170): “#141. John Reid, wouldn’t it make more sense to test the hurricane distribution for 1945-2006 as Poisson rather than calculating a parameter for 1945-2004 and then testing 2005?” Yes it would. I only did it the way I did it to allow me to make the either/or argument more clearly. As it happens it may well be that the first 60 years is not significantly different from a Poisson distribution. I’ll have a look at it. 181. I tried the following test for a trend in the Poisson parameter (calculating glm0 as above). The trend coefficient was not significant.

# Deviance Residuals:
#      Min       1Q   Median       3Q      Max
# -1.98996 -0.88819 -0.03531  0.52119  2.73641
#
#             Estimate Std. Error z value Pr(>|z|)
# (Intercept) 1.416262   0.366138   3.868 0.000110 ***
# index       0.003170   0.002866   1.106 0.268673
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for poisson family taken to be 1)
# Null deviance:     61.639 on 61 degrees of freedom
# Residual deviance: 60.414 on 60 degrees of freedom
# AIC: 287.76
# Number of Fisher Scoring iterations: 4
#
# Model 1: x ~ index
# Model 2: x ~ 1
#   Resid. Df Resid. Dev Df Deviance
# 1        60     60.414
# 2        61     61.639 -1   -1.225

182. #181 try a second order term in the GLM model, else use a GAM. 183. Doing my standard chi square test for goodness of fit to a Poisson distribution for land falling hurricanes for the time periods 1851 to 2005, 1945 to 2005 and 1851 to 1944, I found the following means, Xm, and chi square probabilities, p: 1851 to 2005: Xm = 1.81 and p = 0.41; 1945 to 2005: Xm = 1.70 and p = 0.03; 1851 to 1944: Xm = 1.88 and p = 0.58. The trend line for land falling hurricane counts over the 1851 to 2005 time period has y = -0.0016x + 4.93 and R^2 = 0.0025. 184. Re: #174 “This case is extreme, using two discrete populations rather than a continuum, but still Ho is not reliably rejected unless the difference between populations is large – Paul Lindsay’s test has little power. An alternative test, GLM, is much more powerful, and can be used to show that the hurricane counts are not Poisson distributed.” RichardT, I am not sure how to interpret your findings, but I would say that the p values that you and I derived for the 1945 to 2006 hurricane count fit to a Poisson distribution are close to the same: 0.95 and 0.97. It is that number that informs of the fit for that time period and gives the measure of Type II errors. Those numbers indicate a small Type II error.
Your sensitivity test is something that shows considerably less robustness for detecting changes in means than my less-than-formal back-of-an-envelope test did, but that exercise is beside the point as it does not change the p value for the actual fit found. I know that chi square goodness of fit tests can be less than robust, and more sensitive tests should be applied where applicable. For a goodness of fit to a normal distribution, I was shown years ago that skewness and kurtosis tests could be superior to the chi square test, particularly when the data is sparse and binning of data becomes problematic. I do not know how to interpret the difference in the goodness of fit tests between that for all hurricanes and land falling hurricanes for the 1945 to 2005(6) time period except to note that the sparse data for a small Poisson mean reduces the degrees of freedom to very small numbers for a chi square test. The discrepancies between a predicted Poisson and the actual distribution for land fall hurricanes were in the middle of the range and not at the tails. The telling analyses to me are the lack of trends in the land falling hurricanes and in the partitioned data that Steve M and David Smith have presented and analyzed – all of which point to some early undercounts and (lacking better explanations for these findings than I have seen) an immeasurable trend in total hurricanes. I am hoping to see more details from you or Steve M on the Generalized Linear Models alternative test as I do not have much experience with fitting these models with a Poisson distribution (and 185. Speaking of counting things, would the same type of analysis apply to counting days of record-setting high and low temperatures as described here? 186. #185 Dan, I think a similar type of analysis would work, but record high temperatures aren’t really a Poisson process.
If you find the average distance from the mean temperature at which recorded high temperatures occur, and instead count the number of days on which the temperature deviates from the mean by this amount or more, you would have a Poisson process. This could match the counts of record high temperatures closely but will not equal them exactly. 187. NAMED STORMS AND SOLAR FLARING Some meteorologists have observed that the key circumstances that factor into hurricane formation are primarily sea surface temperatures, wind shear, and global wind events. They also believe that stationary high pressure centers over North America and El Nino cycles in the Pacific cause the Atlantic hurricanes to turn north into the Mid-Atlantic. Also, there appear to have been fewer hurricanes during El Nino years more recently. While all this may be true in terms of outer symptoms, it does not explain the inner causes of hurricanes, nor does it help to predict the future number of named storms, a process which more recently has not been accurate. Standard meteorology does not yet embrace the electrical nature of our weather, plasma physics, nor the plasma electrical discharge from near-earth comets. Solar cycles and major solar storm events like X flares are mistakenly ignored. Yet solar flares disrupt the electrical fields of our ionosphere and atmosphere and cause electrical energy to flow between our ionosphere and upper cloud tops in developing storms. Here is what Space Weather recently said and recorded when showing an electrical connection from the ionosphere to the top of storm clouds on August 23, 2007. GIGANTIC JETS: Think of them as sprites on steroids: Gigantic Jets are lightning-like discharges that spring from the top of thunderstorms, reaching all the way from the thunderhead to the ionosphere 50+ miles overhead. They’re enormous and powerful. You’ve never seen one?
“Gigantic Jets are very rare,” explains atmospheric scientist and Jet expert Oscar van der Velde of the Université Paul Sabatier’s Laboratoire d’Aérologie in Toulouse, France. “The first one was discovered in 2001 by Dr. Victor Pasko in Puerto Rico. Since then fewer than 30 jets have been recorded–mostly over open ocean and on only two occasions over land.” The resulting increased electrical currents affect the jet streams [which are also electrical], which energize and drive our developing storms and hurricanes. Here is what NASA said about the recent large X-20 solar flare on April 3, 2001 [release 01-66]: “This explosion was estimated as an X-20 flare, and was as strong as the record X-20 flare on August 16, 1989,” said Dr. Paal Brekke, the European Space Agency Deputy Project Scientist for the Solar and Heliospheric Observatory (SOHO), one of a fleet of spacecraft monitoring solar activity and its effects on the Earth. “It was more powerful than the famous March 6, 1989 flare which was related to the disruption of the power grids in Canada.” Canada had record high temperatures that summer [this writer’s comment, not NASA’s]. Monday’s flare and the August 1989 flare are the most powerful recorded since regular X-ray data became available in 1976. Solar flares, among the solar system’s mightiest eruptions, are tremendous explosions in the atmosphere of the Sun capable of releasing as much energy as a billion megatons of TNT. Caused by the sudden release of magnetic energy, in just a few seconds flares can accelerate solar particles to very high velocities, almost to the speed of light, and heat solar material to tens of millions of degrees. The flare erupted at 4:51 p.m. EDT Monday, and produced an R4 radio blackout on the sunlit side of the Earth. An R4 blackout, rated by the NOAA SEC, is second to the most severe R5 classification. The classification measures the disruption in radio communications.
X-ray and ultraviolet light from the flare changed the structure of the Earth’s electrically charged upper atmosphere (ionosphere). This affected radio communication frequencies that either pass through the ionosphere to satellites or are reflected by it to traverse the globe. [Note: red highlighting is by this author, not NASA.] Here is what flares affect: industries on the ground can be adversely affected, including electrical power generation facilities, ionospheric radio communications, satellite communications, cellular phone networks, sensitive fabrication industries, plus the electrical system of our entire planet including equatorial jet streams, storm clouds, hurricanes, the ionosphere, the northern and southern jet streams, earth’s atmosphere, and the vertical electrical fields between earth’s surface and the ionosphere, just to mention a few. The reason for all the extra named storms recently (2000-2005) is not global warming but the increased number of significant solar flares, comet fly-bys and the unique planetary alignment during the latter part of solar cycle #23. These events can occur any time during a solar cycle but are more prominent around the years of the solar maximum and especially during the 6-7 years of the ramp down from maximum to minimum. Refer to the web page of CELESTIAL DELIGHTS by Francis Reddy http://celestialdelights.info/sol/XCHART.GIF for an excellent article and illustration of solar flares and solar cycles during the last three solar cycles. The use of simple regression analysis of past named storms to predict future storms will continue to be of limited value unless these randomly occurring solar events are taken into account as well. One cannot accurately predict the score of future ball games by simply looking at past ball games.
You have to look at each new year based on the unique circumstances of that new season. The attached table clearly illustrates why there were so few storms [only 10] in 2006 and why the previous years 1998-2005 were so much more active in terms of named storms, namely [16-28 storms/year]. The table for example shows that during 2003 there were 16 named storms and twenty [20] X class solar flares during the main hurricane season of June 1-November 30. Three of the solar flares were very large ones like X28, X17 and X10. On the other hand, during 2006 there were only 10 named storms and only 4 X size solar flares, of which none were during the hurricane season. During 2005 and 2003 there were 100 and 162 respectively of M size solar flares, while in 2006 there were only 10. The 2000-2005 increase of named storms was not due to global warming, or the years 2006-2007 would have continued to be high in terms of storms. During the period 2000-2005, much more electrical energy was pumped into our atmosphere by the solar flares, especially the larger X size flares. There may have also been a planetary electrical field increase brought on by the close passing of several major comets and special planetary alignments, like during September 6, 1999 and August 26-29, 2003. The year 2007 will likely be similar to 2006 with fewer storms, as there has been no major solar flaring to date or major passing comets. It is possible but unlikely that major solar flaring will take place during a solar minimum year, which the year 2007 is. Unless there will be significantly more solar flaring during the latter part of this year, the number of named storms will again be closer to the average of 9-10 and not the 15-17 as originally predicted, nor the current predictions of some 13-15 storms.

YEAR  # X SOLAR FLARES  LARGE FLARES   # X DURING HURRIC. SEASON*  EL NINO YEAR  NAMED STORMS (adj./not adj.)  SOLAR PHASE  COMETS NEAR
1996         1              -                     1                    -                 13 / 12              solar min    HALE-BOPP
1997         3            X9.4                    3                   YES                 8 / 7
1998        14              -                    10                    NA                15 / 14
1999         4              -                     4                    -                 13 / 12                           PL, LEE
2000        17            X5.7                   13                    -                 16 / 15              solar max    ENCKE
2001        18          X20, X14.4                8                    -                 16 / 15                           PL [six], C-LINEAR 2001A2B
2002        11              -                     9                   YES                13 / 12
2003        20        X28, X17, X10              15                    -                 16 / 16                           PL, NEAT V1
2004        12              -                    11                   YES                15 / 15
2005        18            X17                    12                    NA                28 / 28
2006         4             X9                     0                   YES                10 / 10
2007         0              -                     0 (to date)          NA             5 / 5 (to date)         solar min

* assumed season June 1 to November 30; C & M flares were not included. Some flares last longer and deposit more energy. This was not noted.
NA – EL NINO present but not during hurricane season, or very minor EL NINO months at the beginning of the year.
PL – Special planetary alignment during hurricane season.

Since major solar flares are difficult to predict, one can recognize in what phase of the solar cycle one is predicting into and use that as an indicator of possible below average, average or above average solar storm level, which in turn translates to below average, average or above average named storms. See the paper by T. Bai called PERIODICITIES IN FLARE OCCURRENCE, ANALYSIS OF CYCLES 19-23 (bai@quake.stanford.edu). Above average flares occur during the 6-7 year solar ramp down period and, to a lesser extent, the 3-4 years around the solar maximum. Average and below average flares occur at solar minimum and the 2-3 years of the solar build up leading to solar maximum. Specific planetary alignments and the swing of major comets around our sun will also tend to increase the named storm activity. There are exceptions to every rule and sometimes things are different from the normal or the past. For more information about the new science of weather and the electrical nature of our planet and our planet’s atmosphere refer to the writings of James McCanney and his latest book PRINCIPIA METEOROLOGIA – THE PHYSICS OF THE SUN. 188. Months late, I know, but I just wanted to suggest that a time-variable observation bias is quite likely in hurricane detection.
A hurricane is defined as a storm in which sustained windspeeds of more than a certain speed are found at some point in its career. In the past, windspeeds could mostly only be measured accurately on land. Now, windspeeds can be measured remotely by radar. It is therefore more likely now that a storm which qualifies as a hurricane will be identified as such than it was, say, 60 years ago. Or have I, too, missed something? One Trackback 1. [...] to that number 15, there seems to have been an outbreak of Data Deficiency Disorder over at Climate Audit. When you are stuck with a limited number of data, it is tempting to try all sorts of a posteriori [...]
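For readers who want to replay the binned chi-square recipe debated throughout the thread (expected frequencies of at least 5 per bin, degrees of freedom reduced by one for the estimated mean), here is a sketch in Python. This is an illustrative reconstruction, not code from any commenter: the function names and the synthetic counts below are mine, and a real analysis would use the actual HURDAT annual series and look the statistic up in a chi-square table.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def binned_chi_square(counts, lam, min_expected=5.0):
    """Chi-square goodness-of-fit statistic for annual event counts
    against a Poisson(lam) model, merging adjacent cells until each
    bin's expected frequency reaches min_expected.

    Returns (statistic, degrees_of_freedom)."""
    n = len(counts)
    kmax = max(counts)
    observed = [sum(1 for c in counts if c == k) for k in range(kmax + 1)]
    expected = [n * poisson_pmf(k, lam) for k in range(kmax + 1)]
    # lump the unobserved upper tail (k > kmax) into the last cell
    expected[-1] += n * (1.0 - sum(poisson_pmf(k, lam) for k in range(kmax + 1)))
    obs_b, exp_b, o_acc, e_acc = [], [], 0.0, 0.0
    for o, e in zip(observed, expected):
        o_acc += o
        e_acc += e
        if e_acc >= min_expected:
            obs_b.append(o_acc)
            exp_b.append(e_acc)
            o_acc = e_acc = 0.0
    if e_acc > 0:  # fold any leftover tail into the last bin
        obs_b[-1] += o_acc
        exp_b[-1] += e_acc
    stat = sum((o - e) ** 2 / e for o, e in zip(obs_b, exp_b))
    df = len(obs_b) - 2  # bins - 1 - one estimated parameter (the mean)
    return stat, df
```

With around 62 years of counts and a fitted mean near 6, this recipe typically leaves only a handful of bins, which is exactly the loss of resolution in the tail that comment 171 worries about when the single 15-count year is merged into a catch-all upper bin.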
IAS/Park City Mathematics Series, Volume 16. 2009; 360 pp; hardcover. ISBN-10: 0-8218-4671-X. ISBN-13: 978-0-8218-4671-1. List Price: US$75. Member Price: US$60. Order Code: PCMS/16. In recent years, statistical mechanics has been increasingly recognized as a central domain of mathematics. Major developments include the Schramm-Loewner evolution, which describes two-dimensional phase transitions, random matrix theory, renormalization group theory and the fluctuations of random surfaces described by dimers. The lectures contained in this volume present an introduction to recent mathematical progress in these fields. They are designed for graduate students in mathematics with a strong background in analysis and probability. This book will be of particular interest to graduate students and researchers interested in modern aspects of probability, conformal field theory, percolation, random matrices and stochastic differential equations. Titles in this series are co-published with the Institute for Advanced Study/Park City Mathematics Institute. Members of the Mathematical Association of America (MAA) and the National Council of Teachers of Mathematics (NCTM) receive a 20% discount from list price. Readership: graduate students and research mathematicians interested in probability and its applications in mathematics and physics.
Does Gradient of Fugacity Create Entropy? I approach it in a less general but maybe more understandable way. Suppose you have a thermally isolated cylinder with a boundary of some sort separating it into two systems. The boundary holds negligible extensive parameters (energy, entropy, volume, particles). On the left you have system 1, with intensive parameters [itex]T_1, P_1, \mu_1[/itex] (temperature, pressure and chemical potential), and on the right you have system 2, with [itex]T_2, P_2, \mu_2[/itex]. The fundamental law states that [tex]dU_1=T_1 dS_1-P_1 dV_1+\mu_1 dN_1[/tex][tex]dU_2=T_2 dS_2-P_2 dV_2+\mu_2 dN_2[/tex] along with conservation laws: [tex]dU_1+dU_2=0\,\mathrm{Conservation\,of\,energy}[/tex][tex]dV_1+dV_2=0\,\mathrm{Conservation\,of\,volume}[/tex][tex]dN_1+dN_2=0\,\mathrm{Conservation\,of\,particle\,number}[/tex] [tex]dS_1+dS_2=dS_c\,\mathrm{Non-conservation\,of\,entropy},\,dS_c>0[/tex] Now consider "flows" going from left to right. Define: [tex]dU=-dU_1=dU_2[/tex][tex]dV=-dV_1=dV_2[/tex][tex]dN=-dN_1=dN_2[/tex][tex]dS=-dS_1=dS_2-dS_c[/tex] So now the fundamental laws become: [tex]-dU=-T_1 dS+P_1 dV-\mu_1 dN[/tex][tex]dU=T_2 (dS+dS_c)-P_2 dV+\mu_2 dN[/tex] Adding, and defining [itex]\Delta X=X_2-X_1[/itex] for each intensive variable X, gives [tex]0=T_2 dS_c+\Delta T dS-\Delta P dV+\Delta \mu dN[/tex] So you can see how the differences in the intensive variables relate to the created entropy [itex]dS_c[/itex]. For example, for the case of a thermally open ([itex]dS\ne 0[/itex]) but mechanically closed ([itex]dV= 0[/itex]) and materially closed ([itex]dN= 0[/itex]) boundary, the created entropy is [itex]dS_c=-\Delta T dS/T_2[/itex]. We can characterize the effect of the boundary with Fourier's law somewhat restated: [itex]dS=-K\Delta T \Delta t[/itex], where [itex]\Delta t[/itex] is the time interval considered for the process, and K is an effective thermal conductivity, which we can adjust experimentally, and so we can adjust [itex]dS[/itex].
You can see that the ratio of created to passed entropy ([itex]dS_c/dS[/itex]) is proportional to [itex]\Delta T[/itex], so it can be made arbitrarily small by making [itex]\Delta T[/itex] arbitrarily small. You can still have a non-zero [itex]dS[/itex], however, by making the product [itex]\Delta T \Delta t[/itex] non-zero - i.e. making [itex]\Delta t[/itex] approach infinity. In this case you will have a quasistatic, reversible process, with no creation of entropy. For the case of a mechanically open ([itex]dV\ne 0[/itex]) but thermally closed ([itex]dS= 0[/itex]) and materially closed ([itex]dN= 0[/itex]) boundary, the created entropy is [itex]dS_c=\Delta P dV/T_2[/itex]. If we keep the analogy with the thermal case, this entropy is created by friction. We can characterize the effect of the boundary with Fourier's law somewhat restated: [itex]dV=-\gamma\Delta P \Delta t[/itex], where [itex]\gamma[/itex] is an effective coefficient of friction, which we can adjust experimentally, and so we can adjust [itex]dV[/itex]. If we don't include friction, then we have to deal with the fact that the work done on the left system (1) is [itex]-P_1 dV[/itex] while the work done by the right system is [itex]P_2 dV[/itex]. We could make a mechanical connection to the boundary to counteract the force, and do work on the environment, but that would violate the assumption of an isolated system. We could have a "spring loaded" boundary that absorbed the difference as potential energy, but that would violate the assumption that the boundary contains no energy. For the case of a materially open ([itex]dN\ne 0[/itex]) but thermally closed ([itex]dS= 0[/itex]) and mechanically closed ([itex]dV= 0[/itex]) system, the created entropy is [itex]dS_c=\Delta \mu dN /T_2[/itex]. We can characterize the effect of the boundary with Fick's law somewhat restated: [itex]dN=-D\Delta \mu \Delta t[/itex], where D is an effective coefficient of diffusion, which we can adjust experimentally, and so we can adjust [itex]dN[/itex].
The problem with this is that I don't know how a materially open but thermally closed boundary can be made. A semi-permeable membrane allows particles to pass through, but if you had a case where the chemical potential on each side was the same, the same number of particles would be transferred forward and backward, yet the hot particles would bring more thermal energy and the cold ones less. Anyway, the result is that differences in any intensive variable (temperature, pressure, chemical potential) will produce entropy under the above assumptions. All of the above processes are irreversible as long as the [itex]\Delta X[/itex]'s are neither zero nor infinitesimally small, which produces an entropy creation [itex]dS_c[/itex] which is neither zero nor infinitesimally small. In the limit of zero [itex]\Delta X[/itex]'s, but non-zero [itex]\Delta X \Delta t[/itex]'s, the processes will become quasistatic and reversible, with no creation of entropy.
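The thermal case above can be illustrated numerically. For a finite parcel of heat q passed irreversibly from a reservoir at T_hot to one at T_cold, the created entropy is q/T_cold - q/T_hot, which is positive and shrinks toward zero as the temperature difference shrinks, exactly the quasistatic limit described in the post. A small Python sketch (my own illustration, with made-up numbers):

```python
def entropy_created(q, t_hot, t_cold):
    """Entropy generated (J/K) when heat q (J) flows irreversibly from a
    reservoir at t_hot to a reservoir at t_cold (temperatures in kelvin)."""
    if not (t_hot >= t_cold > 0):
        raise ValueError("need t_hot >= t_cold > 0 for spontaneous heat flow")
    return q / t_cold - q / t_hot

# A large temperature difference creates far more entropy than a small one
# for the same amount of heat transferred:
ds_large = entropy_created(100.0, 400.0, 300.0)  # about 0.083 J/K
ds_small = entropy_created(100.0, 301.0, 300.0)  # about 0.0011 J/K
```

As t_hot approaches t_cold the creation term vanishes while the transferred entropy q/T stays finite, matching the reversible limit with [itex]\Delta T \to 0[/itex] above.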
[plt-scheme] HtDP Exercise 12.4.2 - To the authors of HtDP From: artificial (4rt.f15h at gmail.com) Date: Tue Dec 1 10:52:27 EST 2009 Since I got interested in functional programming, I did a little research on the web, downloaded DrScheme and started to work through HtDP a few days ago. I know imperative languages (mostly C, Java, Ruby) fairly well, so I skimmed through the book, yet did all the exercises that involved writing Scheme code. Then I encountered exercise 12.4.2. I read it, started to code and failed terribly. I couldn't get it right and got frustrated. So I fired up Chrome and googled for a solution, but I couldn't find one - just a few threads on this mailing list which didn't provide a solution but emphasized the usage of "design recipes". I googled some more for permutations, learned a little about the math background - which was interesting but didn't help me to solve 12.4.2 - and found video lectures by Jerry Cain of Stanford which contained a spectacular permutation function but in the end didn't help me either. In the end, I started over with HtDP and learned every little bit about "design recipes". Then I took pen and paper and designed the arrangements function using what I had learned. After that I fired up DrScheme, entered the 4 functions (including the given function arrangements) and check-expects with the examples for each function I came up with, and guess what? It worked right away :-D (this was a few minutes ago). Thank you very much. I think I learned my lesson. PS: I'm not an English speaker, but I hope my writing makes sense enough for you to understand. Posted on the users mailing list.
Any excuse to get some new stationery When I was studying for my maths degree, lots of people asked if I was going to be a maths teacher. Ugh, what a horrific thought, choosing to spend all week with teenagers. No, I wasn’t going to be a maths teacher. That said, I’ve always had a lingering thought that what I really wanted to do was help people to be as enthusiastic about maths (and science in general) as I was. I guess my time in the voluntary sector has been along a similar theme, helping people to understand what technology is and how it can help them. I would like to do more though. I’d like to learn how to be a better trainer and also start to explore whether or not ‘teaching’ in a formal environment would suit me. So, brand new for 2013 I’ve signed up to do a PTLLS course, the introductory qualification you need to teach in the lifelong learning sector if, like me, you don’t have any formal teacher training. It’s all a bit daunting really. I haven’t done any academic study for over 10 years and probably haven’t written an essay for a good 15. I don’t start for a few weeks but as soon as I do I’ll probably use this blog to capture some of what I’m thinking and feeling. I wonder if this is a good excuse to get a new notepad? [Please don't leave any comments here telling me how much schools need maths teachers and how rewarding it is to teach children, I'm really not going to change my mind, believe me.] 6 thoughts on “Any excuse to get some new stationery” 1. While avoiding the message “how much schools need maths teachers and how rewarding it is to teach children” you might have a go at being a STEMnet volunteer. I find I can help kids and young adults to be as enthusiastic as I am about engineering (and maths and science in general). For example, next month I’ll be telling sixth formers about Tsiolkovsky’s rocket equation and younger kids about the enormous ratios between transmitted and received signals when they use their mobiles. 
Let me know if you would like to see the material (that applies to all your readers too).

Thanks John, I'd be interested to find out more. Although I'm not sure I can remember any of that level of maths at the moment!

Not fancy maths, just a couple of quotes from my STEMnet stuff. Tsiolkovsky's rocket equation: Delta-V = exhaust velocity * ln(initial mass / final mass). And the ratios between transmitted and received signals when they use their mobiles: the phone is getting (say) -85 dBmW (decibel milliwatts), a few billionths of a milliwatt, while the base station is transmitting (say) 5 or 6 watts, or 5 or 6 thousand milliwatts. So it takes some clever engineering to pick out such a tiny fraction of the transmitted signal. A thousand times a billion is a trillion, 10**12 (1 with 12 zeros after it).

2. Pingback: Learn, Earn, Yearn | weeklyblogclub
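The two back-of-envelope figures in John's comment can be checked with a few lines of Python. The rocket masses and exhaust velocity below are illustrative numbers of my own, not from the post; the -85 dBmW and 6 W figures are his.

```python
import math

def delta_v(exhaust_velocity, initial_mass, final_mass):
    """Tsiolkovsky's rocket equation, as quoted in the comment."""
    return exhaust_velocity * math.log(initial_mass / final_mass)

def dbm_to_mw(dbm):
    """Convert decibel-milliwatts to milliwatts."""
    return 10 ** (dbm / 10)

# Assumed example: fuelled mass 3x the dry mass, exhaust at 3000 m/s.
print(delta_v(3000, 3.0, 1.0))       # about 3296 m/s

# The mobile-signal ratio: -85 dBmW received vs 6 W (6000 mW) transmitted.
ratio = 6000 / dbm_to_mw(-85)        # on the order of 10**12, i.e. trillions
```

The ratio works out to roughly two trillion, which matches the "thousand times a billion" ballpark in the comment.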
Anyone heard of the "Aquaplane Formula"??? - PistonHeads

I went to an IAM training session today, and they mentioned the aquaplaning formula, a rough guide as to when a tyre will start to aquaplane. It is the square root of the tyre pressure (in psi) multiplied by nine. This is the maximum speed obtainable before aquaplaning. Hence my tyre pressure of 36 psi will be 6x9 = 56 mph. I'm not convinced. What about you???

Surely tread depth and pattern have got to play a part as well? Forget the formula, just go really bloody fast.

Sorry, but too many variables are not taken into account, like corner or total weight and width for starters.

This is bollox but I will check it out with the IAM.

Surely this is also affected by the size of the tyre and the weight of the car, as well as a host of other factors. The pressure of the tyre and the speed travelled are relevant factors, but there are a lot of others that affect it as well to quite a significant degree. A rule like that, I would have thought, would be about as useful as the stopping distances in the Highway Code, i.e. grossly inaccurate.
I'm not convinced either; my tyres run 40 psi, and I've aquaplaned at 45 mph (scary stuff in a 2.5 tonne vehicle with joystick steering).

I wouldn't think so either. The coefficient of friction (road surface, tyre surface) and the normal force of the car (weight), along with the velocity, would determine the degree of wheel turning in combination with sliding. Tyre pressure and the mass of the car would determine the surface area of contact between the car and the road. If there's more contact surface, then presumably a higher velocity would be needed to skid. If there's hydroplaning, you have an effect sort of like bearing stress on a fluid between two surfaces, where the ability of the liquid to shear is greater than what the tyres and weight (or lack of weight) of the car can exert to break the surface tension and drive through the water to displace it (which is why wheel tread has channels), rather than sliding over top of it. But what the hell do I know. I'm only guessing.

I hydroplaned and spun my MINIVAN 180 degrees while stopping at a light just after it had rained. Must have hit an oil slick. Anyway, I was only going about 25 mph, a lower speed than the tyre pressure, with plenty of room to stop, so I hadn't slammed on the brakes. My van isn't exactly lightweight either.

Tread pattern can make a huge difference to this. I remember a test a few years back that showed nearly 15 mph difference in aquaplaning speeds.

A bit worrying that... given the take-off and landing speeds of aircraft. According to the formula, they'd all aquaplane (no pun intended!) in the wet. Mind you, runways tend to be fairly well drained. But perhaps it explains why the safety briefing includes ditching.

Would have expected depth of water to make a difference as well. Fairly easy to aquaplane when finding a foot and a half of standing water across a Scottish single track road in the dark.
seafarer said: I hydroplaned and spun my MINIVAN 180 degrees while stopping at a light just after it had rained.

That just sounds like a very slippery surface to me, which definitely isn't the same thing. Aquaplaning is when there is a sufficient quantity of water that the pressure it exerts on your tyres, as it tries to squeeze out of the way, is enough to lift the tyres off the road, instantly reducing your coefficient of friction almost to zero. As has already been said, the formula suggested ignores far too many factors to be even vaguely useful.

It's total rubbish. The Bosch handbook has a table of mu values against speed for new and worn tyres. The fact is that if there is deep enough water you will aquaplane. That can be from driving through a small puddle or on a flat road with no run-off. If the water is draining from the surface then you have a considerably greater margin of safety.

With all due respect to the IAM, their aquaplaning formula does not sound reliable to me. It ignores far too many important factors. At the risk of provoking serious criticism of my driving, I report the following experience: The road layout - a high class dual carriageway, slightly uphill, a very large radius RH curve over a gentle summit and into a long straight, slightly downhill. The road surface - wet, moderately rough, decent drainage, no standing water.
The weather conditions - daylight, an overcast afternoon but good visibility, not raining at the time. Traffic conditions - passed one HGV while accelerating, no other vehicles in sight. The car - a Jaguar Series 3 Sovereign V12 with very good tyres on. The driver - me. Guilty as charged, again! The speed - 115 mph. The result - no problem, given suitable care in the execution of the experiment. For anyone minded to point out the great stopping distance required in such conditions, please rest assured that a very large stopping distance was available, but not required. The key to success in any such extreme 'antics' is to keep it all smooth and gentle. Hope this helps! Best wishes all.

Just out of interest, but 6x9 = 54, not 56... just thought I'd mention that so you can recalculate.

I once worked out the speed you'd need to do to get a racing pushbike slick tyre to aquaplane. 100 psi or so and a very narrow contact patch. Something like 300 mph required... Can't remember the formula. It was probably wrong.

Perhaps they mean the square root of the pressure on the surface of the tyre, i.e. the vehicle weight divided by the total contact patch area in square inches (giving psi).

Mr E said: I once worked out the speed you'd need to do to get a racing pushbike slick tyre to aquaplane.

Don't worry, it was probably rather more credible than the IAM formula.

Just to clarify the aquaplaning formula: it is very accurate. Firstly, the formula applies to aviation and the answer is expressed in knots (nautical miles per hour), so take the answer and multiply by 1.15 to give the answer in statute miles per hour (vehicle mph). There are many criteria to be addressed in using the formula. For a start, the tread of the tire is ineffectual once the depth of standing water equals the depth of tread on the tire.
The tire behaves the same way as if it were bald once this happens. The tire pressure is the underpinning criterion due to its direct effect on the tire footprint loading; that is, the smaller the footprint, the higher the loading on the surface (weight supported divided by tire contact surface area). So, using that formula, the higher the tire pressure, the higher the aquaplaning initiation speed will be. Also, the interpretation of the formula maintains that the value expressed is the speed at which aquaplaning MAY OCCUR, NOT WILL OCCUR. The second part of the formula states that once aquaplaning is established, the speed at which it will cease is 7.5 times the square root of the tire pressure. These formulae weren't just made up; they were derived by testing and noting the results. In aviation the word "demonstrated" takes on a new and powerful meaning. Virtually everything in aviation has to be "demonstrated" before it is written into the aviation training syllabus and accepted as fact. It is interesting to note that when hydroplaning or aquaplaning occurs, a steam pool is created beneath the tire, between the tire and the road. This causes the tire tread to boil and melt (rubber reversion) and ruins good tires. Also, the steam pocket is located forward of the center of gravity in the direction of travel, directly under the center of the axle and perpendicular to the road. Because of this the wheel will rotate BACKWARDS! Thought this might help. Regards to all.
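Putting the thread's numbers together, the aviation rules of thumb quoted above (onset at 9 times the square root of the tyre pressure, in knots; cessation at 7.5 times; 1 knot is about 1.15 statute mph) can be sketched as:

```python
import math

KNOT_TO_MPH = 1.15  # approximate conversion quoted in the thread

def aquaplane_onset_mph(tyre_psi):
    """Speed at which dynamic aquaplaning MAY begin (rule of thumb)."""
    return 9 * math.sqrt(tyre_psi) * KNOT_TO_MPH

def aquaplane_cease_mph(tyre_psi):
    """Speed at which established aquaplaning is said to cease."""
    return 7.5 * math.sqrt(tyre_psi) * KNOT_TO_MPH

# The original poster's 36 psi: 9 * 6 = 54 knots, about 62 mph.
# (The first post's "56 mph" both miscalculated 6x9 and skipped the
# knots-to-mph conversion, as later replies point out.)
print(round(aquaplane_onset_mph(36)))  # 62
```

This is only the rule of thumb as the last poster states it; as the rest of the thread argues, water depth, tread, and load matter too.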
Functions, Graphs, and Limits Sometimes when a function has a horizontal asymptote, we can see what it should be. Sample Problem Let f(x) = 4^-x. Then as x approaches ∞ the function f approaches 0, so there is a horizontal asymptote at y = 0. The function approaches this asymptote as x approaches ∞. As x approaches -∞ the function f grows without bound, and therefore does not approach the asymptote y = 0. Sometimes it's a little challenging to see what the asymptote should be, but we're up for it. These asymptotes often appear when drawing rational functions. First we'll go through the guaranteed method that will tell us what type of asymptote we have and what it is, and then we'll show a shortcut for finding horizontal asymptotes. The Polynomial Long Division Method The guaranteed method is polynomial long division. This method will probably take some time, but it will get the answer. When finding horizontal / slant / curvilinear asymptotes of a rational function, we do long division to rewrite the function. We throw away the remainder, and what is left is our asymptote. If we're left with a number, that's a horizontal asymptote (and remember, 0 is a perfectly good number!). If we're left with a line of the form y = mx + b (in other words, a degree-1 polynomial), that line is our slant asymptote. If we're left with anything else, it's a curvilinear asymptote. The reason we throw away the remainder is that it will be a rational function whose numerator has a smaller degree than the denominator, and we know the limit of such a function as x approaches ∞ is 0. A rational function will approach its horizontal / slant / curvilinear asymptote when x is approaching ∞ and when x is approaching -∞. The Shortcut Congratulations; we've survived the long division. Our reward is a shortcut for finding horizontal asymptotes of rational functions. A horizontal asymptote will occur whenever the numerator and denominator of a rational function have the same degree.
Find the horizontal asymptote of the function (given in the original as an image). If we do long division, we find a quotient of 3 plus a remainder term, so the horizontal asymptote is y = 3. Sample Problem Find the horizontal asymptote of the function (given in the original as an image). If we do long division, we find a quotient of 2 plus a remainder term. Therefore, the horizontal asymptote is y = 2. Notice a pattern? We divide the leading term of the numerator by the leading term of the denominator, and that gives us the horizontal asymptote. That's it. To summarize: If a rational function has... • a smaller degree polynomial in the numerator than in the denominator, that function will have a horizontal asymptote at 0. All done! • the same degree polynomial in the numerator as in the denominator, that rational function has a horizontal asymptote which we can find by dividing leading terms only. • a numerator one degree larger than the denominator, that rational function has a slant asymptote, which we can find by long division. • none of the above, the function has a curvilinear asymptote, which we can find by long division.
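The shortcut in the summary above (compare degrees, then divide leading terms) is mechanical enough to write down directly. A minimal sketch, with coefficient lists highest-degree first:

```python
def horizontal_asymptote(num, den):
    """Horizontal asymptote of num(x)/den(x), given coefficient lists
    with the highest-degree coefficient first.
    Returns the asymptote's y-value, or None when the numerator's
    degree exceeds the denominator's (slant or curvilinear case)."""
    deg_num, deg_den = len(num) - 1, len(den) - 1
    if deg_num < deg_den:
        return 0.0                 # smaller degree on top -> y = 0
    if deg_num == deg_den:
        return num[0] / den[0]     # divide leading terms only
    return None                    # no horizontal asymptote

# (3x^2 + x + 1) / (x^2 + 5): same degree, leading terms give y = 3
print(horizontal_asymptote([3, 1, 1], [1, 0, 5]))  # 3.0
```

Note this only finds horizontal asymptotes; the slant and curvilinear cases still need the long division described earlier.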
Local convexity preserving rational cubic spline curves

Sarfraz, M., Hussain, M. and Habib, Z. (1997) Local convexity preserving rational cubic spline curves. Information Visualization, 1997. Proceedings., 1997 IEEE Conference, 1.

A scheme for generating plane curves which interpolate given data is described. A curve is obtained by patching together rational cubics and straight-line segments which, in general, is C1 continuous. It is a local scheme which controls the shape of the curve and preserves the shape of the data by being local convexity-preserving. A particular scheme is suggested which selects the tangent vectors required at each interpolation point for generating a curve. An algorithm is presented which constructs a curve by interpolating the given data points. This scheme provides a visually pleasant display of the curve's presentation. An extra feature of this curve scheme is that it allows subsequent interactive alteration of the shape of the default curve by changing the shape control parameters and the shape-preserving parameters associated with each curve segment. Thus, this feature is useful for further enhancing user satisfaction, if desired.

Item Type: Article
Date: August 1997
Subjects: Computer
Divisions: College Of Sciences > Earth Sciences Dept
Creators: Sarfraz, M., Hussain, M. and Habib, Z.
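The abstract does not reproduce the segment formula, so as a generic illustration only (an assumption, not the paper's exact scheme), here is how a rational cubic segment with weight parameters, the kind of building block such shape-control schemes rest on, can be evaluated; the control points and weights below are hypothetical stand-ins for the paper's shape parameters.

```python
def rational_cubic(t, points, weights):
    """Evaluate a rational cubic (Bezier-form) segment at t in [0, 1].
    points: four 2-D control points; weights: four positive shape
    parameters that pull the curve toward their control points."""
    b = [(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3]
    wsum = sum(w * bi for w, bi in zip(weights, b))
    x = sum(w * bi * p[0] for w, bi, p in zip(weights, b, points)) / wsum
    y = sum(w * bi * p[1] for w, bi, p in zip(weights, b, points)) / wsum
    return (x, y)

# The segment interpolates its end control points at t = 0 and t = 1,
# which is what lets segments be patched together C1-continuously.
p0 = rational_cubic(0.0, [(0, 0), (0, 1), (1, 1), (1, 0)], [1, 2, 2, 1])
```

Raising an interior weight flattens the curve toward the corresponding control point, which is the kind of interactive shape alteration the abstract describes.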
Math Forum Discussions

Topic: A logically motivated theory    Replies: 15    Last Post: May 21, 2013 8:22 AM

Re: A logically motivated theory
Posted: May 20, 2013 3:37 PM

On May 20, 2:59 am, fom <fomJ...@nyms.net> wrote:
> On 5/18/2013 11:19 PM, Zuhair wrote:
>> On May 19, 1:47 am, fom <fomJ...@nyms.net> wrote:
>>> On 5/18/2013 2:52 PM, Zuhair wrote:
>>>> On May 18, 10:38 pm, fom <fomJ...@nyms.net> wrote:
>>>>> On 5/18/2013 2:21 PM, Zuhair wrote:
>>>>>> On May 18, 8:58 pm, fom <fomJ...@nyms.net> wrote:
>>>>>>> On 5/18/2013 10:40 AM, Zuhair wrote:
>>>>>>>> In this theory Sets are nothing but object extensions of some
>>>>>>>> predicate. This theory proposes that for every first order
>>>>>>>> predicate there is an object extending it, defined after some
>>>>>>>> extensional relation. This goes in the following manner:
>>>>>>>> Define: E is extensional iff
>>>>>>>>   for all x,y: (for all z. z E x iff z E y) -> x=y
>>>>>>>> where E is a primitive binary relation symbol.
>>>>>>> So, <X,E> is a model of the axiom of extensionality.
>>>>>>>> Now sets are defined as
>>>>>>>>   x is a set iff Exist E,P: E is extensional & for all y. y E x <-> P(y)
>>>>>>> So, xEX <-> ... where ... is a statement quantifying over
>>>>>>> relations and predicates.
>>>>>> No, ... is a statement quantifying over objects.
>>>>> How so? The formula seems to have an existential quantifier
>>>>> applying to a relation and a subformula with the quantified 'E'
>>>>> as a free variable: 'E is extensional'.
>>>>> Using 'R' for "Relation", I read
>>>>>   Ax(Set(x) <-> EREP(extensional(R) /\ Ay(yRx <-> P(y))))
>>>> I meant that P must be first order.
>>>> There is no general, so to say, membership relation E; there are
>>>> separate different membership relations, all of which are 'primitive'
>>>> relations. Of course one might contemplate something like the following:
>>>>   Define(E): x E X iff Exist R Exist P( extensional(R) /\ Ay(yRX <-> P(y)) /\ P(x) )
>>>> This E relation would be something like a 'general' membership
>>>> relation, but this is not acceptable here, because it is a 'defined'
>>>> membership relation and not 'primitive'. When I'm speaking about
>>>> membership relations in the axioms I'm speaking about ones represented
>>>> by 'primitive' symbols and not definable ones.
>>>> Zuhair
>>> I will refrain from making a long posting based on my earlier mistaken
>>> impressions. However, your remarks here suggest that you should take a
>>> look at Quine's "New Foundations" and the interpretation of stratified
>>> formulas. If you have Quine's book "Set Theory and Its Logic" available
>>> to you, a couple of hours reading the appropriate chapters and flipping
>>> forward to the definitions in earlier chapters should give you some
>>> sense of the matter as he saw it. I believe it is Thomas Forster who is
>>> making his book available online concerning NF, if you should become
>>> more interested in Quine's theory.
>> Yes, I'm familiar with NF; actually I managed to further simplify it. I
>> coined the Acyclicity criterion, after which we can forsake
>> stratification altogether. See the joint article of Bowler, Randall
>> Holmes and myself on that subject; you can find it on Randall Holmes's
>> home page and also on my website. Actually see:
>> http://math.boisestate.edu/~holmes/acyclic_abstract_final_revision.pdf
>> Anyhow here I'm trying to achieve something else, that of seeing that
>> PA can be interpreted in a LOGICAL theory. I view all the extensional
>> primitive relations in this theory as logical, since all they do is
>> extend predicates.
>> If we regard the second order quantifier as logical, then that's it:
>> the major bulk of traditional mathematics belongs to logic. I'm not
>> sure if we can get by without the second order quantifier. Anyhow I'm
>> not sure of the remarks I've presented here, I might well be mistaken,
>> but matters look to go along that side. I'm just conjecturing here.
>> DIVIDE and CONCUR
>> Zuhair
>
> What exactly do you take to be "logical"?
>
> For example, on the Fregean view, one is interpreting the syllogistic
> hierarchy as extensional. This is a typical mathematical view. However,
> Leibniz interprets the syllogistic hierarchy as intensional, and
> Lesniewski's criticisms of Frege and Russell also lead to an
> intensional interpretation.
>
> What you are referring to as "second-order" is, for me, a directional
> issue (extensional = bottom-up, intensional = top-down) with respect
> to priority in the syllogistic hierarchy. One often takes such
> questions for granted because our textbooks provide such little
> background information. We focus our attentions according to what we
> are taught. John MacFarlane has written on this demarcation issue:
> http://plato.stanford.edu/entries/logical-constants/
>
> Historically, Aristotle is "intensional". This follows from his claim
> that genera are prior to species. But the problems arise with the
> issue of "substance". The notion of "substance" is associated with
> individuals and grounds the syllogistic hierarchy predicatively. So,
> there is an implicit tension in foundational studies because of
> "first-order"/"second-order" or "extensional"/"intensional"
> dichotomies. That is why I am asking for some clarification as to
> what you take to be "logical".
> Thanks.

I don't have a principled approach as regards the demarcation of logic yet. For now I'm content with saying that all fragments of first order logic with identity (including all substitution instances by concrete objects) are logical.
I also maintain that having object extensions of first order predicates is by itself logical, since it just copies the predicative content into the object world. A simple trial to do that is to add a monadic symbol like "e" to the above language and stipulate that if G is a predicate symbol then eG is a term. eG is read as "the extension of G". Stipulate the axiom:

  eF = eG iff (for all x. F(x) <-> G(x))

To me this approach is perfectly logical. We can use second order quantification to define a membership:

  x E y iff Exist G: G(x) & y = eG

where G ranges over first order predicate symbols. In general we can define an i-th membership relation as:

  x Ei y iff Exist Gi: Gi(x) & y = eGi

where Gi ranges over the i-th order predicate symbols. So the membership relations so defined (in second order) only reflect fulfillment of predicates after their order. Of course, to justify such an approach one must show that fulfillment of predicates differs after their order, which indeed is hard to show, since it seems that "is" in "Socrates is a man" is not really different from "is" in "Triangle is a shape". Of course "is" here is just another word for "fulfills being", so the sentences, completely interpreted, are: "Socrates fulfills being a man", "Triangle fulfills being a shape". Even more completely displayed, those sentences are:

  The object the name "Socrates" refers to -is- a man.
  The predicate the name "Triangle" refers to -is- a shape.

It appears that the article "is" in both of the above sentences has the same meaning, that of "fulfills being". And it seems that there is no difference in this fulfillment per se between the two sentences. However, it can still be argued that fulfillment of predicates by predicates is a different kind of concept from fulfillment of predicates by objects, and that this difference is the same for higher predicate fulfillment.
And this can be a strong point, since using extensions in the same manner (that of concatenating the symbol e with the predicate symbol) doesn't elucidate the difference between an object and a predicate, or between a predicate and a higher predicate, which agreeably must be mirrored by different "sorts" of extensions; so in the absence of that difference we must show it by the membership relation. Anyhow, the above stipulation of ordered membership does in a sense MIRROR the order of predicates, so in principle it is inert and doesn't add something that is substantially extra-logical, so it can be considered as logical. However, saying it is logical really depends on whether the second order quantifier is inert or not.
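Zuhair's extension axiom can be played with in a toy finite-domain model (an illustration of mine, not part of the thread): predicates become Python functions over a fixed domain, and eG is modelled as the set of objects satisfying G, so eF = eG holds exactly when F and G agree on every object of the domain. The domain and sample predicates below are made up.

```python
DOMAIN = range(10)  # a toy finite domain of objects

def e(pred):
    """Extension of a first-order predicate over DOMAIN.
    By construction, e(F) == e(G) iff (for all x. F(x) <-> G(x))."""
    return frozenset(x for x in DOMAIN if pred(x))

def member(x, ext):
    """x E y iff Exist G: G(x) & y = eG; over extensions this
    collapses to plain set membership."""
    return x in ext

F = lambda n: n % 2 == 0
G = lambda n: n % 2 == 0 and n >= 0   # extensionally equal to F here

assert e(F) == e(G)                   # same extension, different predicates
assert member(4, e(F)) and not member(3, e(F))
```

The model also makes the worry at the end of the post concrete: nothing in e distinguishes the sort of what is being extended, so the order information has to be carried by the indexed membership relations Ei rather than by the extensions themselves.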
A Law of Comparative Judgment

Louis L. Thurstone
University of Chicago

The object of this paper is to describe a new psycho-physical law which may be called the law of comparative judgment and to show some of its special applications in the measurement of psychological values. The law of comparative judgment is implied in Weber's law and in Fechner's law. The law of comparative judgment is applicable not only to the comparison of physical stimulus intensities but also to qualitative comparative judgments such as those of the excellence of specimens in an educational scale, and it has been applied in the measurement of such psychological values as a series of opinions on disputed public issues. The latter application of the law will be illustrated in a forthcoming study. It should be possible also to verify it on comparative judgments which involve simultaneous and successive contrast. The law has been derived in a previous article and the present study is mainly a description of some of its applications. Since several new concepts are involved in the formulation of the law it has been necessary to invent several terms to describe them, and these will be repeated here. Let us suppose that we are confronted with a series of stimuli or specimens such as a series of gray values, cylindrical weights, handwriting specimens, children's drawings, or any other series of stimuli that are subject to comparison. The first requirement is of course a specification as to what it is that we are to judge or compare. It may be gray values, or weights, or excellence, or any other quantitative or qualitative attribute about which we can think `more' or `less' for each specimen. This attribute which may be assigned, as it were, in differing amounts to each specimen defines what we shall call the psychological continuum for that particular project in measurement.
As we inspect two or more specimens for the task of comparison there must be some kind of process in us by which we react differently to the several specimens, by which we identify the several degrees of excellence or weight or gray value in the specimens. You may suit your own predilections in calling this process psychical, neural, chemical, or electrical but it will be called here in a non-committal way the discriminal process because its ultimate nature does not concern the formulation of the law of comparative judgment. If then, one handwriting specimen seems to be more excellent than a second specimen, then the two discriminal processes of the observer are different, at least on this occasion. The so-called `just noticeable difference' is contingent on the fact that an observer is not consistent in his comparative judgments from one occasion to the next. He gives different comparative judgments on successive occasions about the same pair of stimuli. Hence we conclude that the discriminal process corresponding to a given stimulus is not fixed. It fluctuates. For any handwriting specimen, for example, there is one discriminal process that is experienced more often with that specimen than other processes which correspond to higher or lower degrees of excellence. This most common process is called here the modal discriminal process for the given stimulus. The psychological continuum or scale is so constructed or defined that the frequencies of the respective discriminal processes for any given stimulus form a normal distribution on the psychological scale. This involves no assumption of a normal distribution or of anything else. The psychological scale is at best an artificial construct. If it has any physical reality we certainly have not the remotest idea what it may be like. We do not assume, therefore, that the distribution of discriminal processes is normal on the scale because that would imply that the scale is there already.
We define the scale in terms of the frequencies of the discriminal processes for any stimulus. This artificial construct, the psychological scale, is so spaced off that the frequencies of the discriminal processes for any given stimulus form a normal distribution on the scale. The separation on the scale between the discriminal process for a given stimulus on any particular occasion and the modal discriminal process for that stimulus we shall call the discriminal deviation on that occasion. If, on a particular occasion, the observer perceives more than the usual degree of excellence or weight in the specimen in question, the discriminal deviation is at that instant positive. In a similar manner the discriminal deviation at another moment will be negative. The standard deviation of the distribution of discriminal processes on the scale for a particular specimen will be called its discriminal dispersion. This is the central concept in the present analysis. An ambiguous stimulus which is observed at widely different degrees of excellence or weight or gray value on different occasions will have of course a large discriminal dispersion. Some other stimulus or specimen which is provocative of relatively slight fluctuations in discriminal processes will have, similarly, a small discriminal dispersion. The scale difference between the discriminal processes of two specimens which are involved in the same judgment will be called the discriminal difference on that occasion. If the two stimuli be denoted A and B and if the discriminal processes corresponding to them be denoted a and b on any one occasion, then the discriminal difference will be the scale distance (a - b) which varies of course on different occasions. If, in one of the comparative judgments, A seems to be better than B, then, on that occasion, the discriminal difference (a - b) is positive.
If, on another occasion, the stimulus B seems to be the better, then on that occasion the discriminal difference (a − b) is negative.

Finally, the scale distance between the modal discriminal processes for any two specimens is the separation which is assigned to the two specimens on the psychological scale. The two specimens are so allocated on the scale that their separation is equal to the separation between their respective modal discriminal processes. We can now state the law of comparative judgment as follows:

S[1] − S[2] = x[12]·√(σ[1]² + σ[2]² − 2r·σ[1]σ[2])    (1)

in which S[1] and S[2] are the psychological scale values of the two compared stimuli; x[12] is the sigma value corresponding to the proportion of judgments p[1>2] (when p[1>2] is greater than .50 the numerical value of x[12] is positive; when p[1>2] is less than .50 the numerical value of x[12] is negative); σ[1] is the discriminal dispersion of stimulus R[1]; σ[2] is the discriminal dispersion of stimulus R[2]; and r is the correlation between the discriminal deviations of R[1] and R[2] in the same judgment.

This law of comparative judgment is basic for all experimental work on Weber's law, Fechner's law, and for all educational and psychological scales in which comparative judgments are involved. Its derivation will not be repeated here because it has been described in a previous article.[2] It applies fundamentally to the judgments of a single observer who compares a series of stimuli by the method of paired comparison when no 'equal' judgments are allowed. It is a rational equation for the method of constant stimuli. It is assumed that the single observer compares each pair of stimuli a sufficient number of times so that a proportion, p[1>2], may be determined for each pair of stimuli.

For the practical application of the law of comparative judgment we shall consider five cases which differ in assumptions, approximations, and degree of simplification. The more assumptions we care to make, the simpler will be the observation equations.
These five cases are as follows:

Case I.—The equation can be used in its complete form for paired comparison data obtained from a single subject when only two judgments are allowed for each observation, such as 'heavier' or 'lighter,' 'better' or 'worse,' etc. There will be one observation equation for every observed proportion of judgments. It would be written, in its complete form, thus:

S[1] − S[2] = x[12]·√(σ[1]² + σ[2]² − 2r·σ[1]σ[2])    (1)

According to this equation every pair of stimuli presents the possibility of a different correlation between the discriminal deviations. If this degree of freedom is allowed, the problem of psychological scaling would be insoluble, because every observation equation would introduce a new unknown and the number of unknowns would then always be greater than the number of observation equations. In order to make the problem soluble, it is necessary to make at least one assumption, namely that the correlation between discriminal deviations is practically constant throughout the stimulus series and for the single observer.

Then, if we have n stimuli or specimens in the scale, we shall have ½n(n − 1) observation equations when each specimen is compared with every other specimen. Each specimen has a scale value, S, and a discriminal dispersion, σ, to be determined. There are therefore 2n unknowns. The scale value of one of the specimens is chosen as an origin and its discriminal dispersion as a unit of measurement, while r is an unknown which is assumed to be constant for the whole series. Hence, for a scale of n specimens there will be (2n − 1) unknowns. The smallest number of specimens for which the problem is soluble is five. For such a scale there will be nine unknowns: four scale values, four discriminal dispersions, and r. For a scale of five specimens there will be ten observation equations.

The statement of the law of comparative judgment in the form of equation (1) involves one theoretical assumption which is probably of minor importance.
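The counting argument for Case I can be verified explicitly. The following check is my own bookkeeping, not part of the original text:

```latex
% With each of the n specimens compared against every other one,
% the problem is soluble only when the equations outnumber the unknowns:
\[
\underbrace{\tfrac{1}{2}\,n(n-1)}_{\text{observation equations}}
\;\ge\;
\underbrace{2n-1}_{(n-1)\ \text{scale values},\ \ (n-1)\ \text{dispersions},\ \ r}
\]
% n = 4: 6 equations  < 7 unknowns  (insoluble)
% n = 5: 10 equations > 9 unknowns  (soluble)
```

This is why five is the smallest number of specimens for which the Case I problem can be solved.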
It assumes that all positive discriminal differences (a − b) are judged A > B, and that all negative discriminal differences (a − b) are judged A < B. This is probably not absolutely correct when the discriminal differences of either sign are very small. The assumption would not affect the experimentally observed proportion p[A>B] if the small positive discriminal differences occurred as often as the small negative ones. As a matter of fact, when p[A>B] is greater than .50 the small positive discriminal differences (a − b) are slightly more frequent than the negative perceived differences (a − b). It is probable that rather refined experimental procedures are necessary to isolate this effect. The effect is ignored in our present analysis.

Case II.—The law of comparative judgment as described under Case I refers fundamentally to a series of judgments of a single observer. It does not constitute an assumption to say that the discriminal processes for a single observer give a normal frequency distribution on the psychological continuum. That is a part of the definition of the psychological scale. But it does constitute an assumption to take for granted that the various degrees of an attribute of a specimen perceived in it by a group of subjects form a normal distribution. For example, if a weight-cylinder is lifted by an observer several hundred times in comparison with other cylinders, it is possible to define or construct the psychological scale so that the distribution of the apparent weights of the cylinder for the single observer is normal. It is probably safe to assume that the distribution of apparent weights for a group of subjects, each subject perceiving the weight only once, is also normal on the same scale. To transfer the reasoning in the same way from a single observer to a group of observers for specimens such as handwriting or English composition is not so certain. For practical purposes it may be assumed
that when a group of observers perceives a specimen of handwriting, the distribution of excellence that they read into the specimen is normal on the psychological continuum of perceived excellence. At least this is a safe assumption if the group is not split in some curious way with prejudices for or against particular elements of the specimen. With the assumption just described, the law of comparative judgment, derived for the method of constant stimuli with two responses, can be extended to data collected from a group of judges in which each judge compares each stimulus with every other stimulus only once. The other assumptions of Case I apply also to Case II.

Case III.—Equation (1) is awkward to handle as an observation equation for a scale with a large number of specimens. In fact the arithmetical labor of constructing an educational or psychological scale with it is almost prohibitive. The equation can be simplified if the correlation r can be assumed to be either zero or unity. It is a safe assumption that when the stimulus series is very homogeneous with no distracting attributes, the correlation between discriminal deviations is low and possibly even zero, unless we encounter the effect of simultaneous or successive contrast. If we accept the correlation as zero, we are really assuming that the degree of excellence which an observer perceives in one of the specimens has no influence on the degree of excellence that he perceives in the comparison specimen.

There are two effects that may be operative here and which are antagonistic to each other. (1) If you look at two handwriting specimens in a mood slightly more generous and tolerant than ordinarily, you may perceive a degree of excellence in specimen A a little higher than its mean excellence. But at the same moment specimen B is also judged a little higher than its average or mean excellence for the same reason.
To the extent that such a factor is at work the discriminal deviations will tend to vary together and the correlation r will be high and positive. (2) The opposite effect is seen in simultaneous contrast. When the correlation between the discriminal deviations is negative the law of comparative judgment gives an exaggerated psychological difference (S[1] − S[2]) which we know as simultaneous or successive contrast. In this type of comparative judgment the discriminal deviations are negatively associated. It is probable that this effect tends to be a minimum when the specimens have other perceivable attributes, and that it is a maximum when other distracting stimulus differences are removed. If this statement should be experimentally verified, it would constitute an interesting generalization in perception.

If our last generalization is correct, it should be a safe assumption to write r = 0 for those scales in which the specimens are rather complex, such as handwriting specimens and children's drawings. If we look at two handwriting specimens and perceive one of them as unusually fine, it probably tends to depress somewhat the degree of excellence we would ordinarily perceive in the comparison specimen, but this effect is slight compared with the simultaneous contrast perceived in lifted weights and in gray values. Furthermore, the simultaneous contrast is slight with small stimulus differences, and it must be recalled that psychological scales are based on comparisons in the subliminal or barely supraliminal range. The correlation between discriminal deviations is probably high when the two stimuli give simultaneous contrast and are quite far apart on the scale. When the range for the correlation is reduced to a scale distance comparable with the difference limen, the correlation probably is reduced nearly to zero. At any rate, in order to simplify equation (1) we shall assume that it is zero.
This represents the comparative judgment in which the evaluation of one of the specimens has no influence on the evaluation of the other specimen in the paired judgment. The law then takes the following form:

S[1] − S[2] = x[12]·√(σ[1]² + σ[2]²)    (2)

Case IV.—If we can make the additional assumption that the discriminal dispersions are not subject to gross variation, we can considerably simplify the equation so that it becomes linear and therefore much easier to handle. In equation (2) we let σ[2] = σ[1] + d, in which d is assumed to be at least smaller than σ[1] and preferably a fraction of σ[1] such as .1 to .5. Then equation (2) becomes

S[1] − S[2] = .707·x[12]·(σ[1] + σ[2])    (3)

Equation (3) is linear and very easily handled. If σ[2] − σ[1] is small compared with σ[1], equation (3) gives a close approximation to the true values of S and σ for each specimen. If there are n stimuli in the scale there will be (2n − 2) unknowns, namely a scale value S and a discriminal dispersion σ for each specimen. The scale value for one of the specimens may be chosen as the origin or zero, since the origin of the psychological scale is arbitrary. The discriminal dispersion of the same specimen may be chosen as a unit of measurement for the scale. With n specimens in the series there will be ½n(n − 1) observation equations. The minimum number of specimens for which the scaling problem can be solved is then four, at which number we have six observation equations and six unknowns.

Case V.—The simplest case involves the assumption that all the discriminal dispersions are equal. This may be legitimate for rough measurement such as Thorndike's handwriting scale or the Hillegas scale of English Composition. Equation (2) then becomes

S[1] − S[2] = 1.4142·x[12]·σ

But since the assumed constant discriminal dispersion σ is the unit of measurement we have

S[1] − S[2] = 1.4142·x[12]    (4)

This is a simple observation equation which may be used for rather coarse scaling. It measures the scale distance between two specimens as directly proportional to the sigma value of the observed proportion of judgments p[1>2].
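The text compresses the algebra that leads from equation (2) to equations (3) and (4). The intermediate steps, reconstructed from the surrounding definitions, run as follows:

```latex
% With \sigma_2 = \sigma_1 + d and d small relative to \sigma_1:
\begin{align*}
\sqrt{\sigma_1^2 + \sigma_2^2}
  &= \sqrt{2\sigma_1^2 + 2\sigma_1 d + d^2}
   \approx \sqrt{2}\,\sigma_1\sqrt{1 + d/\sigma_1}
     && \text{(dropping } d^2\text{)} \\
  &\approx \sqrt{2}\,\sigma_1\Bigl(1 + \frac{d}{2\sigma_1}\Bigr)
   = \frac{\sqrt{2}}{2}\,(2\sigma_1 + d)
   = .707\,(\sigma_1 + \sigma_2),
     && \text{(first-order expansion)}
\end{align*}
% which turns equation (2) into the linear equation (3).
% When all dispersions are equal, \sigma_1 = \sigma_2 = \sigma, the radical is
% exactly \sqrt{2}\,\sigma = 1.4142\,\sigma, which gives equation (4).
```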
This is the equation that is basic for Thorndike's procedure in scaling handwriting and children's drawings, although he has not shown the theory underlying his scaling procedure. His unit of measurement was the standard deviation of the discriminal differences, which is .707σ when the discriminal dispersions are constant. In future scaling problems equation (3) will probably be found to be the most useful.

The observation equations obtained under any of the five cases are not of the same reliability and hence they should not all be equally weighted. Two observed proportions of judgments such as p[1>2] = .99 and p[1>3] = .55 are not equally reliable. The proportion of judgments p[1>2] is one of the observations that determine the scale separation between S[1] and S[2]. It measures the scale distance (S[1] − S[2]) in terms of the standard deviation, σ[1–2], of the distribution of discriminal differences for the two stimuli R[1] and R[2]. This distribution is necessarily normal by the definition of the psychological scale. The standard error of a proportion of a normal frequency distribution is

σ[p] = (σ/Z)·√(pq/N)    (5)

in which σ is the standard deviation of the distribution, Z is the ordinate corresponding to p, and q = 1 − p, while N is the number of cases on which the proportion is ascertained. The term σ in the present case is the standard deviation σ[1–2] of the distribution of discriminal differences. Hence the standard error of p[1>2] is

σ[p(1>2)] = (σ[1–2]/Z)·√(pq/N)    (6)

But since, by equation (2), σ[1–2] = √(σ[1]² + σ[2]²), and since this may be written approximately, by equation (3), as

σ[1–2] = .707·(σ[1] + σ[2])    (7)

we have

σ[p(1>2)] = (.707·(σ[1] + σ[2])/Z)·√(pq/N)    (8)

The weight, w[1–2], that should be assigned to observation equation (2) is the reciprocal of the square of its standard error.
Hence

w[1–2] = Z²N / (.5·pq·(σ[1] + σ[2])²)    (9)

It will not repay the trouble to attempt to carry the factor (σ[1] + σ[2])² in the formula, because this factor contains two of the unknowns, and because it destroys the linearity of the observation equation (3), while the only advantage gained would be a refinement in the weighting of the observation equations. Since only the weighting is here at stake, it may be approximated by eliminating this factor. The factor .5 is a constant. It has no effect, and the weighting then becomes

w = Z²N / pq    (10)

By arranging the experiments in such a way that all the observed proportions are based on the same number of judgments, the factor N becomes a constant and therefore has no effect on the weighting. Hence

w = Z² / pq    (11)

This weighting factor is entirely determined by the proportion, p[1>2], of judgments '1 is better than 2' and it can therefore be readily ascertained by the Kelley-Wood tables. The weighted form of observation equation (3) therefore becomes

w·S[1] − w·S[2] − .707·w·x[12]·σ[2] − .707·w·x[12]·σ[1] = 0    (12)

This equation is linear and can therefore be easily handled. The coefficient is entirely determined by the observed value of p for each equation, and therefore a facilitating table can be prepared to reduce the labor of setting up the normal equations. The same weighting would be used for any of the observation equations in the five cases, since the weight is solely a function of p when the factor (σ[1] + σ[2])² is ignored for the weighting formula.

A law of comparative judgment has been formulated which is expressed in its complete form as equation (1). This law defines the psychological scale or continuum. It allocates the compared stimuli on the continuum. It expresses the experimentally observed proportion, p[1>2], of judgments '1 is stronger (better, lighter, more excellent) than 2' as a function of the scale values of the stimuli, their respective discriminal dispersions, and the correlation between the paired discriminal deviations.
The formulation of the law of comparative judgment involves the use of a new psychophysical concept, namely, the discriminal dispersion. Closely related to this concept are those of the discriminal process, the modal discriminal process, the discriminal deviation, and the discriminal difference. All of these psychophysical concepts concern the ambiguity or qualitative variation with which one stimulus is perceived by the same observer on different occasions.

The psychological scale has been defined as the particular linear spacing of the confused stimuli which yields a normal distribution of the discriminal processes for any one of the stimuli. The validity of this definition of the psychological continuum can be experimentally and objectively tested. If the stimuli are so spaced out on the scale that the distribution of discriminal processes for one of the stimuli is normal, then these scale allocations should remain the same when they are defined by the distribution of discriminal processes of any other stimulus within the confusing range. It is physically impossible for this condition to obtain for several psychological scales defined by different types of distribution of the discriminal processes. Consistency can be found only for one form of distribution of discriminal processes as a basis for defining the scale. If, for example, the scale is defined on the basis of a rectangular distribution of the discriminal processes, it is easily shown by experimental data that there will be gross discrepancies between experimental and theoretical proportions, p[1>2]. The residuals should be investigated to ascertain whether they are a minimum when the normal or Gaussian distribution of discriminal processes is used as a basis for defining the psychological scale. Triangular and other forms of distribution might be tried.
Such an experimental demonstration would constitute perhaps the most fundamental discovery that has been made in the field of psychological measurement. Lacking such proof, and since the Gaussian distribution of discriminal processes yields scale values that agree very closely with the experimental data, I have defined the psychological continuum that is implied in Weber's Law, in Fechner's Law, and in educational quality scales as that particular linear spacing of the stimuli which gives a Gaussian distribution of discriminal processes.

The law of comparative judgment has been considered in this paper under five cases which involve different assumptions and degrees of simplification for practical use. These may be summarized as follows:

Case I.—The law is stated in complete form by equation (1). It is a rational equation for the method of paired comparison. It is applicable to all problems involving the method of constant stimuli for the measurement of both quantitative and qualitative stimulus differences. It concerns the repeated judgments of a single observer.

Case II.—The same equation (1) is here used for a group of observers, each observer making only one judgment for each pair of stimuli, or one serial ranking of all the stimuli. It assumes that the distribution of the perceived relative values of each stimulus is normal for the group of observers.

Case III.—The assumptions of Cases I and II are involved here also, and in addition it is assumed that the discriminal deviations of the same judgment are uncorrelated. This leads to the simpler form of the law in equation (2).

Case IV.—Besides the preceding assumptions, the still simpler form of the law in equation (3) assumes that the discriminal dispersions are not grossly different, so that in general one may write σ[2] − σ[1] < σ[1], and preferably σ[2] − σ[1] = d in which d is a small fraction of σ[1].
Case V.—This is the simplest formulation of the law and it involves, in addition to previous assumptions, the assumption that all the discriminal dispersions are equal. This assumption should not be made without experimental test. Case V is identical with Thorndike's method of constructing quality scales for handwriting and for children's drawings. His unit of measurement is the standard deviation of the distribution of discriminal differences when the discriminal dispersions are assumed to be equal.

Since the standard error of the observed proportion of judgments, p[1>2], is not uniform, it is advisable to weight each of the observation equations by the factor shown in equation (11), which is applicable to the observation equations in any of the five cases considered. Its application to equation (3) leads to the weighted observation equation (12).
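To make Case V concrete, here is a small Python sketch of the scaling computation that equation (4) licenses. This is my own illustration, not part of Thurstone's text: the proportion matrix is invented, and taking scale values as column means of the unit normal deviates is the standard least-squares treatment of Case V.

```python
# Minimal Case V scaling sketch (equal discriminal dispersions, unit sigma).
# p[i][j] = hypothetical observed proportion of judgments "j beats i".
# Under Case V, Phi^{-1}(p[i][j]) estimates S_j - S_i (equation 4);
# averaging over i gives scale values with the origin at the mean.
from statistics import NormalDist

p = [
    [0.50, 0.70, 0.90],
    [0.30, 0.50, 0.80],
    [0.10, 0.20, 0.50],
]

inv = NormalDist().inv_cdf      # the "sigma value" x corresponding to p
n = len(p)

# z[i][j] = Phi^{-1}(p[i][j]) = estimated S_j - S_i
z = [[inv(p[i][j]) for j in range(n)] for i in range(n)]

# Scale value of stimulus j: mean over i of (S_j - S_i), i.e. column means.
scale = [sum(z[i][j] for i in range(n)) / n for j in range(n)]

print([round(s, 3) for s in scale])
```

With real data one would also apply the weights of equation (11), since proportions near 0 or 1 yield very unreliable deviates.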
Football Finances

In this activity, students analyze pictures of football stands to make estimates related to the attendance at the Super Bowl. The students will realize that estimates must, at times, be made with little background information and that a range of answers might be correct. Students also make estimates about the television audience.

To begin the lesson, lead a discussion about the Super Bowl. Depending upon their cultural backgrounds, some students may not be familiar with the Super Bowl. Thus, it is important for students who have watched the Super Bowl to share their experiences with other students in the class. Ask students to identify some of the expenses related to the Super Bowl. They may suggest the following expenses:

• price of a ticket
• ads/commercials on television
• coaches' and players' salaries
• cost of food at the game
• security, maintenance, and other expenses

Ask students which of these costs are easier to estimate than others. Students may say that they need to do some research using Internet resources to obtain relatively close estimates. You may wish to give students time to explore the Super Bowl website. At this website, they can learn more about some of the finances and statistics of the Super Bowl.

Distribute a copy of the Football Finances activity sheet to each student. Have the students read the introductory information and examine the pictures. Discuss the pictures with the students and have them identify techniques that they can use to make the best estimates possible. Such techniques might include the following:

• Students can answer the question, "How many people can sit on benches 10 yards long?"
• Survey people about the number of hot dogs they eat at a football game.

Students should base their estimates on the information that is available. Have students identify factors that will make it difficult for their estimates to be as accurate as they would like them to be.
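The bench-counting technique above is simple multiplication; a teacher could model it with a short Python sketch. Every number here is a made-up assumption chosen only to show the method, not data from the activity:

```python
# Rough Super Bowl attendance estimate from bench arithmetic.
# All inputs are hypothetical guesses; the point is that the estimate
# is a product of those guesses, so its quality depends on the inputs.

people_per_bench = 18      # guess: people seated on one 10-yard bench
benches_per_section = 40   # guess: rows of benches in one section
sections = 60              # guess: seating sections around the stadium

attendance_estimate = people_per_bench * benches_per_section * sections
print(f"Estimated attendance: {attendance_estimate:,}")  # 43,200
```

Changing any one guess propagates straight through the product, which is exactly the lesson: a range of answers can be reasonable.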
You may wish to collect the students' Football Finances activity sheets and review their responses to the questions on the activity sheet.

1. Students can identify variables (e.g., temperature at the stadium) that would affect the actual results.
2. Have students gather information after the football game is over to see how close their estimates are.
3. Have students make estimates of the number of souvenirs (e.g., pennants, sweatshirts, and hats) that will be purchased.

Learning Objectives

Students will:

• Estimate large numbers
• Make written predictions from pictures and other limited information
Brooks, GA Math Tutor

Find a Brooks, GA Math Tutor

I am a Georgia certified educator with 12+ years in teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including discrete math, linear algebra, algebra 1, algebra 2

...I can also recall tutoring an older student in science, and she has done extremely well as a result of my tutoring. I have found that having a precise teaching routine, exercising extreme patience with the student, as well as being consistent and persistent, have proven to be successful for stud...
57 Subjects: including geometry, algebra 1, algebra 2, SAT math

...I have a PhD in Polymer Science as an elastomer physical chemist. I am a published chemist and a lecturer at the graduate school level. I am certified to teach science and mathematics in Ohio, was a "preferred substitute teacher" in two school districts in NE Ohio, and have tutored and taught s...
12 Subjects: including algebra 1, algebra 2, chemistry, prealgebra

...I currently work for a company installing HVAC controls and building automation. I also attend Southern Crescent Tech pursuing my degree in Computer Information Systems with a direct focus on Computer Networking. I am a down to earth person and even tempered.
18 Subjects: including algebra 2, algebra 1, Microsoft Excel, trigonometry

...I love helping students understand Prealgebra! Tutored on Precalculus topics during high school and college. Completed coursework through Multivariable Calculus.
28 Subjects: including trigonometry, government & politics, biostatistics, linear algebra
La Mirada Statistics Tutor

Find a La Mirada Statistics Tutor

Hello! My name is Jasmyn and I am a current college student preparing to transition into my senior year. My major is in Sociology and I am minoring in Biblical Studies.
32 Subjects: including statistics, reading, English, writing

...I have more than 8 years of experience using IBM's SPSS statistical program (from version 12 to current version 21), including extensions in SPSS AMOS and SPSS Modeler. I have cleaned raw data from my clientele and have properly named, assigned codes for missing values and coded each variable for g...
3 Subjects: including statistics, SPSS, ecology

...I have been tutoring students in Biostatistics for many years. I specialize in statistical modeling (linear regression, logistic regression, longitudinal modeling, mixed modeling, general linear models, etc.). I am very familiar with SAS, R and STATA. I have taken the statistics class using STATA at graduate level.
10 Subjects: including statistics, calculus, Chinese, SAT math

...Additionally, I like teaching, so I'll always show up with a smile on my face for sessions with each of my students. Whether you just need a little help mastering a subject for a class, or are looking for someone to help you boost your LSAT score up to law school competitiveness, just drop me a line a...
16 Subjects: including statistics, economics, SAT math, LSAT

...Second, I make sure that my students are learning, not just memorizing techniques. Third, I give my students proper techniques to solve word problems in mathematics. Lastly, I help my students improve their grades in math. I am qualified to teach Algebra 1 because I taught in universities and colleges for more than 4 years.
3 Subjects: including statistics, algebra 1, algebra 2
Narberth Precalculus Tutor

...I hear bad grammar in classes and in the media daily. Let's understand and focus on the differences. The reviewer/grader has 2 minutes to evaluate your essay.
35 Subjects: including precalculus, chemistry, English, geometry

...I have a minor in biochemistry from the University of Delaware. This incorporates detailed knowledge of protein structure and cell biology. I also took further courses in my Materials Science degree, as my concentration was in Biomaterials.
14 Subjects: including precalculus, chemistry, algebra 1, algebra 2

I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including precalculus, calculus, physics, ACT Math

...I want to help everyone see what I see in the subject: greatness. I love learning and teaching. I have a lot of patience and want to see everyone succeed.
8 Subjects: including precalculus, calculus, SAT math, trigonometry

...I also currently work as an academic and SAT tutor for StudyPoint. When I was in college, I was the editor of a biweekly magazine for two years; because I went to an engineering school, many of my writers didn't have some of the basics down. So, before every issue, I would work one-on-one with ...
25 Subjects: including precalculus, chemistry, writing, physics
Operational semantics for monads

While randomly browsing around on Planet Haskell I found a post on Heinrich Apfelmus' blog about something called "operational semantics" for monads. I found it very illuminating. Basically it's a way to implement monads that focuses not on defining the bind and return operators, but on what the monad is really supposed to do. It's a view where a monad defines a Domain Specific Language that must be interpreted in order to cause its effects. It seems to me it's exactly what is implemented in the monadprompt (Control.Monad.Prompt) package, although I'm not sure.

> {-# LANGUAGE GADTs #-}
> import Control.Monad
> import Data.Map (Map, fromList, unionWith)

The definition of a monad in this approach starts with a common interface given by the following data type and a singleton function:

> data Program m a where
>     Then   :: m a -> (a -> Program m b) -> Program m b
>     Return :: a -> Program m a

> singleton :: m a -> Program m a
> singleton i = i `Then` Return

Note that the types of the data constructors Then and Return are very similar (but not equal…) to the types of the monadic operations (>>=) and return. This identification of class functions with data constructors recurs throughout this post. This data type is instantiated as a traditional monad as follows:

> instance Monad (Program m) where
>     return              = Return
>     (Return a)    >>= f = f a
>     (i `Then` is) >>= f = i `Then` (\x -> is x >>= f)

This is all we need! As an example, let's describe the implementation of the State monad within this approach. This is exactly the first example given by Apfelmus in his post, disguised as a stack machine.

The operational approach to monads begins with recognizing what operations you want your monad to perform. A State monad has a state, a return value and two functions: one that allows us to retrieve the state as the returned value, and one that allows us to insert a new state.
Let's represent this in the following GADT:

> data StateOp st retVal where
>     Get :: StateOp st st        -- retrieve current state as a returned value
>     Put :: st -> StateOp st ()  -- insert a new state

These are the operations needed on the State Monad, but the monad itself is a sequence of compositions of such operations:

> type State st retVal = Program (StateOp st) retVal

Note that the type synonym State st is a monad already and satisfies all the monad laws by construction. We don't need to worry about implementing return and (>>=) correctly: they are already defined.

So far, so good but… how do we use this monad in practice? These types define a kind of Domain Specific Language: we have operations represented by Get and Put and we can compose them into little programs by using Then and Return. Now we need to write an interpreter for this language. I find this is greatly simplified if you notice that the construct

    do x <- singleton foo
       bar x

can be translated as foo `Then` bar in this context. Thus, to define how you'll interpret the latter, just think what's the effect you want to have when you write the former.

Our interpreter will take a State st retVal and a state st as input and return a pair: the next state and the returned value (st, retVal):

> interpret :: State st retVal -> st -> (st, retVal)

First of all, how should we interpret the program Return val? This program just takes any state input and returns it unaltered, with val as its returned value:

> interpret (Return val) st = (st, val)

The next step is to interpret the program foo `Then` bar. Looking at the type of things always helps: Then, in this context, has type StateOp st a -> (a -> State st b) -> State st b. So, in the expression foo `Then` bar, foo is of type StateOp st a, that is, it's a stateful computation with state of type st and returned value of type a.
The rest of the expression, bar, is of type a -> State st b, that is, it expects to receive something of the type of the returned value of foo and return the next computation to be executed. We have two options for foo: Get and Put x.

When executing Get `Then` bar, we want this program to return the current state as the returned value. But we also want it to call the execution of bar val, the rest of the code. And if val is the value returned by the last computation, Get, it must be the current state:

> interpret (Get `Then` bar) st = interpret (bar st) st

The program Put x `Then` bar is supposed to just insert x as the new state and call bar val. But if you look at the type of Put x, its returned value is empty: (). So we must call bar (). The current state is then discarded and substituted by x.

> interpret (Put x `Then` bar) _ = interpret (bar ()) x

We have our interpreter (which, you guessed right, is just the function runState from Control.Monad.State) and now it's time to write programs in this language. Let's then define some helper functions

> get :: State st st
> get = singleton Get

> put :: st -> State st ()
> put = singleton . Put

and write some code to be interpreted:

> example :: Num a => State a a
> example = do x <- get
>              put (x + 1)
>              return x

> test1 = interpret example 0
> test2 = interpret (replicateM 10 example) 0

This can be run in ghci to give exactly what you would expect from the state monad:

*Main> test1
(1,0)
*Main> test2
(10,[0,1,2,3,4,5,6,7,8,9])

The approach seems very convenient from the point of view of developing applications, as it's focused on what actions the code must implement and how the code should be executed. But it seems to me that the focus on the operations the monad will implement is also very convenient for thinking about mathematical structures. To give an example, I'd like to implement a monad for Vector Spaces, in the spirit of Dan Piponi (Sigfpe)'s ideas here, here and here.
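Before moving on to vector spaces: the literate fragments above predate GHC's Functor–Applicative–Monad hierarchy change, so they won't compile as-is on a modern compiler. Here is a self-contained sketch (my collation, not the author's file) that runs today, with the two extra instances added:

```haskell
{-# LANGUAGE GADTs #-}
-- Self-contained version of the operational State monad above.
-- Modern GHC requires Functor and Applicative instances as well;
-- they are derived here from (>>=) and Return.
import Control.Monad (replicateM)

data Program m a where
  Then   :: m a -> (a -> Program m b) -> Program m b
  Return :: a -> Program m a

singleton :: m a -> Program m a
singleton i = i `Then` Return

instance Functor (Program m) where
  fmap f p = p >>= (Return . f)
instance Applicative (Program m) where
  pure = Return
  pf <*> px = pf >>= \f -> fmap f px
instance Monad (Program m) where
  Return a      >>= f = f a
  (i `Then` is) >>= f = i `Then` (\x -> is x >>= f)

data StateOp st a where
  Get :: StateOp st st
  Put :: st -> StateOp st ()

type State st a = Program (StateOp st) a

interpret :: State st a -> st -> (st, a)
interpret (Return v)       st = (st, v)
interpret (Get `Then` k)   st = interpret (k st) st
interpret (Put x `Then` k) _  = interpret (k ()) x

get :: State st st
get = singleton Get

put :: st -> State st ()
put = singleton . Put

-- the post's "example", renamed: return the counter, then bump it
tick :: State Int Int
tick = do n <- get
          put (n + 1)
          return n

main :: IO ()
main = print (interpret (replicateM 3 tick) 0)  -- (3,[0,1,2])
```

The same file, extended with the GADT and interpreter of any other DSL, works unchanged: only the operation type and the interpreter vary.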
A vector space $\mathcal{V_F}$ is a set of elements $\mathbf{x}\in\mathcal{V_F}$ that can be summed ($\mathbf{x} + \mathbf{y} \in\mathcal{V_F}$ if $\mathbf{x},\mathbf{y} \in \mathcal{V_F}$) and multiplied by elements of a field ($\alpha\mathbf{x}$ if $\alpha\in \mathcal{F}$ and $\mathbf{x}\in\mathcal{V_F}$). If we want this to be implemented as a monad then we should, in analogy with what we did for the State Monad, write a GADT with data constructors that implement the sum and product by a scalar:

> data VectorOp field label where
>     Sum :: Vector field label
>         -> Vector field label
>         -> VectorOp field label
>     Mul :: field
>         -> Vector field label
>         -> VectorOp field label

> type Vector field label = Program (VectorOp field) label

and then we must implement an interpreter:

> runVector :: (Num field, Ord label) => Vector field label
>           -> Map label field
> runVector (Return a) = fromList [(a, 1)]
> runVector (Sum u v `Then` foo) = let uVec = runVector (u >>= foo)
>                                      vVec = runVector (v >>= foo)
>                                  in unionWith (+) uVec vVec
> runVector (Mul x u `Then` foo) = fmap (x*) (runVector (u >>= foo))

The interpreter runVector takes a vector and returns its representation as a Map. As an example, we could do the following:

> infixr 3 <*>
> infixr 2 <+>

> u <+> v = singleton $ Sum u v
> x <*> u = singleton $ Mul x u

> data Base = X | Y | Z deriving (Ord, Eq, Show)

> x, y, z :: Vector Double Base
> x = return X
> y = return Y
> z = return Z

> reflectXY :: Vector Double Base -> Vector Double Base
> reflectXY vecU = do cp <- vecU
>                     return (transf cp)
>     where transf X = Y
>           transf Y = X
>           transf Z = Z

and test this on ghci:

*Main> runVector $ x <+> y
fromList [(X,1.0),(Y,1.0)]
*Main> runVector $ reflectXY $ x <+> z
fromList [(Y,1.0),(Z,1.0)]

As Dan Piponi points out in his talk, any function acting on the base, f :: Base -> Base, is lifted to a linear map on the vector space Vector field Base by doing:

> linearTrans f u = do vec <- u
>                      return (f vec)

More on this later.
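For readers who want to run the vector-space monad directly, here is a self-contained sketch of the code above (my collation; assumptions: modern GHC, so the Functor/Applicative instances are added, and the scaling operator is renamed .*> to avoid today's Prelude <*>):

```haskell
{-# LANGUAGE GADTs #-}
-- Self-contained version of the vector-space monad above.
import Data.Map (Map, fromList, unionWith)

data Program m a where
  Then   :: m a -> (a -> Program m b) -> Program m b
  Return :: a -> Program m a

singleton :: m a -> Program m a
singleton i = i `Then` Return

instance Functor (Program m) where
  fmap f p = p >>= (Return . f)
instance Applicative (Program m) where
  pure = Return
  pf <*> px = pf >>= \f -> fmap f px
instance Monad (Program m) where
  Return a      >>= f = f a
  (i `Then` is) >>= f = i `Then` (\w -> is w >>= f)

data VectorOp field label where
  Sum :: Vector field label -> Vector field label -> VectorOp field label
  Mul :: field -> Vector field label -> VectorOp field label

type Vector field label = Program (VectorOp field) label

runVector :: (Num field, Ord label) => Vector field label -> Map label field
runVector (Return a)           = fromList [(a, 1)]
runVector (Sum u v `Then` foo) = unionWith (+) (runVector (u >>= foo))
                                               (runVector (v >>= foo))
runVector (Mul c u `Then` foo) = fmap (c *) (runVector (u >>= foo))

(<+>) :: Vector f l -> Vector f l -> Vector f l
u <+> v = singleton (Sum u v)

(.*>) :: f -> Vector f l -> Vector f l    -- scaling, renamed from the post's <*>
c .*> u = singleton (Mul c u)
infixr 2 <+>
infixr 3 .*>

data Base = X | Y | Z deriving (Ord, Eq, Show)

x, y, z :: Vector Double Base
x = Return X; y = Return Y; z = Return Z

reflectXY :: Vector Double Base -> Vector Double Base
reflectXY u = fmap t u where t X = Y; t Y = X; t Z = Z

main :: IO ()
main = do
  print (runVector (x <+> (2 .*> y)))
  print (runVector (reflectXY (reflectXY (x <+> z))))  -- reflecting twice = identity
```

Note that reflectXY applied twice comes back to the original vector, as a reflection should: the lifted map really is linear on the interpreted Map representation.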
:)

HTML generated by org-mode 6.34c in emacs 23

• Nice post. I found two little bugs, though:

1) Data.Map clashed with prelude, so you would want to say something like

> import Data.Map (Map, fromList, unionWith)

2) Your types don't match up. I'm pretty sure you meant to write

> Then :: m a -> (a -> Program m b) -> Program m b

in your "Program" data type

□ Hi! Thanks for the corrections, you're right.

• For vector spaces in general you'll need to distinguish multiplication of vectors ( :: V F -> V F -> V F) vs scaling vectors (.*> :: F -> V F -> V F or F -> V F). Thus, you should reflect this asymmetry in the names of the multiplicative operations. Also, you need to define an identity element for vector addition; this is in Dan's Vector, so you'll probably want it in your VectorOp as well. Also also, Num isn't a field, it's a ring; you want Fractional for fields.

□ Hmm. It seems my type signatures got eaten:

<*> :: V F -> V F -> V F
.*> :: F -> V F -> V F
<*. :: V F -> F -> V F
The Glass-Bottom Blog

James Lever on Franzen:

the book is, among other things, possibly the most lachrymose novel of modern times. There are, in its 560 pages, 26 separate instances of weeping, not counting the many blinked-back tears or suppressed sobs or ‘Tiny pearls of tear … clinging to her eyelashes’ (a formulation so heartfelt it is recycled from page 421 of The Corrections).

Meanwhile the final results for our ensemble have come in: Republican go-getter Joey has seen the error of his ways and become an importer of ethically grown coffee. Jessica is a junior editor at a literary publishing house in Manhattan, excited to be publishing ‘an earnest young novelist’. Patty’s rotten sister Abigail has become a successful art-clown in Italy. Patty’s less rotten sister Veronica is an unappreciated but possibly genius-level painter. Patty works with kids; Walter, one supposes, with birds. Richard, ‘busy and successful’, has just completed ‘one of those avant-garde orchestral thingies for the Brooklyn Academy of Music’ and is currently working on scores for art-house movies. And pretty much everyone lives in New York. Now, that’s how life oughta be! At last the question ‘How to live?’, posed throughout the novel, has been answered: we should live like they do in Hannah and Her Sisters. It’s around now that it may dawn on the reader that Freedom has more in common with Richard’s country-tinged, Grammy-nominated middlebrow hit record than Franzen might have intended. This book is ‘Nameless Lake’.

Freedom, like Netherland, is a book I would presumably finish but that I cannot get excited about. Zadie Smith, writing about Netherland and Remainder a year or two ago, got at part of the drabness of "lyrical realism" of the McEwanesque sort, but, if the passages she quoted (or that Lever quotes from Franzen) really are the purplest at hand, what is surprising is how unlyrical, how lacking in "sharp tender shocks," the writing is, despite its elegance.
Perhaps the problem with all these books is their quest for "relevance"; I wish there were more of "the sluggish cream wound curdling spirals through her tea" or at least far less of "I was seized for the first time by a nauseating sense of America, my gleaming adopted country, under the secret actuation of unjust, indifferent powers." Or perhaps the reviewers are doing the book an injustice by picking out those bits.

There is an obvious joke, which I've never actually heard anyone make, re Schrodinger's cat, the measurement process, and "curiosity killing the cat." (This joke would be vaguely appealing because of the role reversal -- it's the scientist's curiosity that kills the cat -- and would fit very naturally in an elementary discussion of quantum mechanics.)

Tangentially, I wonder what it says about my intellectual seriousness that I sometimes get interested in a problem purely because I can think of an amusing title for a talk about it. I started working on (what I thought were) macroscopic tunneling phenomena in superconducting whiskers because I wanted to give a talk titled "Schrodinger's whiskers." (This has the additional advantage of providing a segue into trapped flux in SQUID-like rings, which of course one can call "quantum handcuffs." In the event, the program sort of petered out.) Recently I've been finding it hard to get interested in problems that would not lead to a talk titled "Fear and loathing in the electron gas."

I was planning to blog about Kay Ryan -- one of the three or four living poets whose work I actually care about -- but Paris Review just interviewed her so I can put this off for a while:

How did you come up with what you’ve called recombinant rhyme?

When I started writing nobody rhymed—it was in utter disrepute. Yet rhyme was a siren to me. I had this condition of things rhyming in my mind without my permission.
Still I couldn’t take end-rhyme seriously, which meant I had to find other ways—I stashed my rhymes at the wrong ends of lines and in the middles—the front of one word would rhyme with the back of another one, or one word might be identical to three words. In “Turtle,” for instance, I rhyme “afford” with “a four-oared,” referring to a four-oared helmet: “Who would be a turtle who could help it?/A barely mobile hard roll, a four-oared helmet,/she can ill afford the chances she must take/in rowing toward the grasses that she eats.” The rhymes are just jumping all around in there, holding everything together.

What’s recombinant rhyme?

It’s like how they add a snip of the jellyfish’s glow-in-the-dark gene to bunnies and make them glow green; by snipping up pieces of sound and redistributing them throughout a poem I found I could get the poem to go a little bit luminescent.

This bit from Humphry Clinker is worth excerpting if only for its concreteness.

I am pent up in frowzy lodgings, where there is not room enough to swing a cat; and I breathe the steams of endless putrefaction; and these would, undoubtedly, produce a pestilence, if they were not qualified by the gross acid of sea-coal, which is itself a pernicious nuisance to lungs of any delicacy of texture: but even this boasted corrector cannot prevent those languid, sallow looks, that distinguish the inhabitants of London from those ruddy swains that lead a country-life — I go to bed after midnight, jaded and restless from the dissipations of the day — I start every hour from my sleep, at the horrid noise of the watchmen bawling the hour through every street, and thundering at every door; a set of useless fellows, who serve no other purpose but that of disturbing the repose of the inhabitants; and by five o'clock I start out of bed, in consequence of the still more dreadful alarm made by the country carts, and noisy rustics bellowing green pease under my window.
If I would drink water, I must quaff the maukish contents of an open aqueduct, exposed to all manner of defilement; or swallow that which comes from the river Thames, impregnated with all the filth of London and Westminster — Human excrement is the least offensive part of the concrete, which is composed of all the drugs, minerals, and poisons, used in mechanics and manufacture, enriched with the putrefying carcasses of beasts and men; and mixed with the scourings of all the wash-tubs, kennels, and common sewers, within the bills of mortality. This is the agreeable potation, extolled by the Londoners, as the finest water in the universe — As to the intoxicating potion, sold for wine, it is a vile, unpalatable, and pernicious sophistication, balderdashed with cyder, corn-spirit, and the juice of sloes. In an action at law, laid against a carman for having staved a cask of port, it appeared from the evidence of the cooper, that there were not above five gallons of real wine in the whole pipe, which held above a hundred, and even that had been brewed and adulterated by the merchant at Oporto. The bread I eat in London, is a deleterious paste, mixed up with chalk, alum, and bone-ashes; insipid to the taste, and destructive to the constitution. The good people are not ignorant of this adulteration — but they prefer it to wholesome bread, because it is whiter than the meal of corn: thus they sacrifice their taste and their health, and the lives of their tender infants, to a most absurd gratification of a mis-judging eye; and the miller, or the baker, is obliged to poison them and their families, in order to live by his profession. 
The same monstrous depravity appears in their veal, which is bleached by repeated bleedings, and other villainous arts, till there is not a drop of juice left in the body, and the poor animal is paralytic before it dies; so void of all taste, nourishment, and savour, that a man might dine as comfortably on a white fricassee of kid-skin gloves; or chip hats from Leghorn. As they have discharged the natural colour from their bread, their butchers-meat, and poultry, their cutlets, ragouts, fricassees and sauces of all kinds; so they insist upon having the complexion of their potherbs mended, even at the hazard of their lives. Perhaps, you will hardly believe they can be so mad as to boil their greens with brass halfpence, in order to improve their colour; and yet nothing is more true — Indeed, without this improvement in the colour, they have no personal merit. They are produced in an artificial soil, and taste of nothing but the dunghills, from whence they spring. My cabbage, cauliflower, and 'sparagus in the country, are as much superior in flavour to those that are sold in Covent-garden, as my heath-mutton is to that of St James's-market; which in fact, is neither lamb nor mutton, but something betwixt the two, gorged in the rank fens of Lincoln and Essex, pale, coarse, and frowzy — As for the pork, it is an abominable carnivorous animal, fed with horse-flesh and distillers' grains; and the poultry is all rotten, in consequence of a fever, occasioned by the infamous practice of sewing up the gut, that they may be the sooner fattened in coops, in consequence of this cruel retention. Of the fish, I need say nothing in this hot weather, but that it comes sixty, seventy, fourscore, and a hundred miles by land-carriage; a circumstance sufficient without any comment, to turn a Dutchman's stomach, even if his nose was not saluted in every alley with the sweet flavour of fresh mackarel, selling by retail. 
This is not the season for oysters; nevertheless, it may not be amiss to mention, that the right Colchester are kept in slime-pits, occasionally overflowed by the sea; and that the green colour, so much admired by the voluptuaries of this metropolis, is occasioned by the vitriolic scum, which rises on the surface of the stagnant and stinking water — Our rabbits are bred and fed in the poulterer's cellar, where they have neither air nor exercise, consequently they must be firm in flesh, and delicious in flavour; and there is no game to be had for love or money. It must be owned, the Covent-garden affords some good fruit; which, however, is always engrossed by a few individuals of overgrown fortune, at an exorbitant price; so that little else than the refuse of the market falls to the share of the community; and that is distributed by such filthy hands, as I cannot look at without loathing. It was but yesterday that I saw a dirty barrow-bunter in the street, cleaning her dusty fruit with her own spittle; and, who knows but some fine lady of St James's parish might admit into her delicate mouth those very cherries, which had been rolled and moistened between the filthy, and, perhaps, ulcerated chops of a St Giles's huckster — I need not dwell upon the pallid, contaminated mash, which they call strawberries; soiled and tossed by greasy paws through twenty baskets crusted with dirt; and then presented with the worst milk, thickened with the worst flour, into a bad likeness of cream: but the milk itself should not pass unanalysed, the produce of faded cabbage-leaves and sour draff, lowered with hot water, frothed with bruised snails, carried through the streets in open pails, exposed to foul rinsings, discharged from doors and windows, spittle, snot, and tobacco-quids from foot passengers, overflowings from mud carts, spatterings from coach wheels, dirt and trash chucked into it by roguish boys for the joke's sake, the spewings of infants, who have slabbered in the 
tin-measure, which is thrown back in that condition among the milk, for the benefit of the next customer; and, finally, the vermin that drops from the rags of the nasty drab that vends this precious mixture, under the respectable denomination of milk-maid. I shall conclude this catalogue of London dainties, with that table-beer, guiltless of hops and malt, vapid and nauseous; much fitter to facilitate the operation of a vomit, than to quench thirst and promote digestion; the tallowy rancid mass, called butter, manufactured with candle grease and kitchen stuff; and their fresh eggs, imported from France and Scotland.

[NB The "bills of mortality" is a wonderful metonym for London. Would be curious to know of any equivalent contemporary phrases. "Within the police blotter" is almost apt for Champaign nowadays but seems unlikely to catch on.]

I read the usual Restoration plays in an edition that introduced them as clever but heartless precursors of Sheridan and Goldsmith, and now I've read Sheridan's plays in an edition that insinuates that they are bowdlerized and insipid descendants of Restoration drama. (Why don't people edit books they like?) I'm more sympathetic to the latter point of view, at least re Sheridan; Goldsmith is a different kettle of fish altogether, and not Restoration-like at all. (Sheridan, it appears, actually bowdlerized Vanbrugh's Relapse to make it acceptable to a 1760s audience.) Both The Rivals and The School for Scandal are comedies of manners; the plots -- e.g., man woos idiosyncratic woman with comic aunt -- are very much in the mold of Congreve and Wycherley. The plots are clever enough, but both plays are short on spirit and edge; the characters are pretty blah, though I imagine a good actor could make them come to life, and the dialogue, despite stunts like Mrs. Malaprop's malapropisms, never gets off the ground.
In general these plays are what one would expect of period pieces -- unlike The Country Wife, which is entirely contemporary. The difference, it seems, lies in the smut. Are plays without explicit sexual references ipso facto quaint? There's something appealingly reductive about this notion, and it certainly is quite hard to find sexually explicit ancient texts that are boring. I'd venture an alternative explanation, though, which is that the basic dramatic tension in the comedy of manners -- the horror of being cast out of the loop -- depends on social existence being a bit of a tightrope-walk, which requires a degree of cruelty and willingness to ostracize on the part of the "men of sense" (as ostracism would otherwise be empty) as well as a degree of objective danger (as the men of sense must appear sensible). Sheridan's tendencies deprecated the former; the age's conventions prevented the latter, and one is left, in the end, with two halfhearted plays that were unable -- unlike She Stoops to Conquer -- to create a genre to match their temperament.

The School for Scandal is interesting and flawed in the way that Auden found Twelfth Night interesting and flawed: a few of the characters -- in this case, Sir Peter Teazle; in the other, Viola -- exist at the wrong level of seriousness, and intrude rather damagingly into the fabric of the play.
Sir Peter is an old man who has married a young woman who is about to start an affair with his hypocritical ward who's "gulled" Sir Peter; in a Restoration play this would have been a purely comic part, but Sheridan lacks the heartlessness to make it work, and there is an entirely jarring degree of pathos to the scenes in which Sir Peter appears -- jarring because he is so much more "real" than the others; because his presence critiques and undermines the scandalmongers; because it is clear that in Sheridan's view the entire "school for scandal" is at some level out-of-date and irrelevant, like the women in Pope:

As hags hold Sabbaths, less for joy than spite,
So these their merry, miserable night.
Still round and round the ghosts of beauty glide,
And haunt the places where their honor died.

And because these facts let the air out of the main action of the play: if sensible people are indifferent to scandal, the activities of scandalmongers can only be so important; but if so almost everything that happens in the play is insignificant. I should say that by contrast The Critic is an entirely admirable and very funny play, much better than its Restoration model The Rehearsal.

James Davidson has a delightful article in the LRB about Greek names. You should read the whole thing; I wanted to flag this bit, though:

The most famous account of intentionality in Greek naming comes from Aristophanes’ Clouds; Strepsiades explains how he wanted to call his son Pheidonides (‘Of the Line of Thrift’) but his posh wife wanted a Hippos-name to evoke upper-class horsemanship and chariots. So they ended up with Pheidippides. That name (‘Of the line of Thrifty with Horses’?), shared with the famous long-distance runner of Marathon, shows that although the elements of a name might be transparent they might not necessarily make sense when combined.

Of course, sparing-with-horses was a sensible name, to the point of being bizarrely apt, for the guy who ran the original marathon.
It is reminiscent of the kinds of things that pop up in Old English poetry, which consists almost entirely of litotes, kennings, and compound epithets. Or the Icelandic sagas -- cf. the habit of calling blood "dark beer," or saying, e.g., "He twisted the tail of his cloak around Thorbjorn's throat and bit through it, then snapped his head back, breaking his neck. With such rough treatment Thorbjorn quietened down considerably." (Not to mention that the Laxdaela saga begins with a guy called Ketil Flat-nose; the Gk. for flat-nose, "Simon," is cross-culturally popular as a name for upper-class twits.)

My initial reaction to this Language Log post about gerunds/participles was that it was imagining an ambiguity where none actually exists. I can't remember ever having been puzzled about whether a particular -ing construction was a gerund or a participle; it's generally clear, and (as some of the commenters said) it seems useful to have different names for the noun-functions and the adjective-functions of a given -ing construction. Upon reflection, however, I've come around a little to MYL's point of view -- gerunds are not, as a rule, echt nouns. This becomes clear when you try to modify them: consider, e.g., (a) "Drinking continuously is a good idea" vs. (b) "Continuous drinking..." These are both essentially idiomatic constructions to my ear; however, if drinking really were a noun, (a) should sound ridiculous. In fact there are probably instances, like "Going on endlessly about grammar will lose you friends," where you really need the adverbial form, though on a traditional parse "going" is a noun qua gerund. The fact that adverbs can modify gerunds appears quite general to constructions with gerunds in them; and -- to my mind -- offers very strong evidence that gerunds are not to be treated as true nouns.
(Obviously the entire phrase is still functioning as a noun; the ambiguity is about the order of operations, as one can noun the verb either before or after adding the qualifier, and depending on what one does first the qualifier is either an adjective or an adverb.) Some constructions w/ gerunds are of course a lot more noun-like than others -- definitely a pluralized gerund is an echt noun. But they do seem to occupy a bizarre syntactic middle ground, along with infinitives and other similar beasts.

Google analytics has started collecting site traffic data; I was amused to find that the two leading non-obvious searches (i.e., other than my name, the blog's name, or a specific poem) that lead here are for "malcolm gladwell is an idiot" (which leads here) and "ezra klein is a hack" (which leads to this STOATUSblog post). Not "asshole loser fame" by any means, but this suggests a more replicable approach to picking up readers. Look out for "Dan Brown is an imbecile," "David Foster Wallace is a moron," "David Brooks is a douchebag," and other bouts of biliousness.

What's odd about Borges's Personal Anthology is how boring it is relative to his collected works. Borges explains in a preface that he's left out stories that were "superficial exercises in local color" (or something like that); what remain are bland and repetitive statements of certain metaphysical positions -- about infinity and idealism and such -- that obviously meant a great deal to Borges but are trite as philosophy. A good example of the sort of thing Borges seems to have liked in later life is "The Other Tiger"; for all I know it's a good poem in Spanish but in translation one finds it drab and obvious. A good example of the sort of thing he did not like is "Tlon, Uqbar, Orbis Tertius." I wonder if this is due in part to Borges (mistakenly) comparing himself to Kafka. Jonathan Mayhew has a delightful post likening the Oxfordian theory of Shakespeare to a story by Borges or James or Kafka.
I would be much more specific: such a story would have been at home among Borges's Ficciones but has very little to do with Kafka. I am not sure, either, that a reader who knew Borges exclusively through the Personal Anthology would have perceived the accuracy of this comparison, as none of the "notes on imaginary books" are in it. It seems to me quite inaccurate to bracket Kafka with Borges at all. Kafka's novels can be understood in textbook modernist terms, as being rather like Macbeth -- successful attempts to find objective correlatives for a certain set of feelings, a certain sense the isolated mind has of its relation to the world. This will not work with Borges as the stories aren't mood-driven. A Kafkaesque situation is a nightmare; a Borgesian situation is an

The Borges stories I like best (other than the enjoyable but silly stuff in A Universal History of Iniquity) are the notes on imaginary books and two later stories, "The South" and "Averroes' Search," which come off for reasons that might be fortuitous. The general problem with writing "philosophical" literature, as Eliot remarks, is that the philosophy has to be realized -- fleshed out, peopled, colonized -- for the enterprise to work. (One should make an exception for purely frivolous uses, like the Hitchhiker's Guide.) A lot of Borges stories, like "The Circular Ruins," are bad because insufficiently real. In later work like "The Aleph," concreteness coexists uncomfortably with philosophical notions, but the philosophy comes off as an exotic and unjustified plot device. But in Tlon, "Pierre Menard" etc., idealism finds an odd but satisfying local habitation in names. Like Swinburne's poems (insert more Eliot here) these stories seem to indicate that there are other worlds than the physical one that are rich and irregular enough to "inhabit" or "realize" ideas in: the world of words and literature, in particular.
I wonder, though, if the truth isn't simpler: these stories depend for effect largely on the ability of language to refract ordinary objects, say the moon, "into something rich and strange" -- all literature does, I think -- and lose their charm when there aren't any objects to be looked at. I've expressed vaguely similar sentiments about Stevens in the past: the good bits of his poems are the half-distinct, dazzling images seen out of the corner of the eye, while he's going on about something or other. This is probably a somewhat heretical opinion.

See this post for context. A "universality class" is a basin of attraction, i.e., it consists of the set of microscopic models that coarse-grain to a particular fixed point. Universality classes are, of course, equivalence classes -- of sequences of models that coarse-grain similarly. This structure is to some extent analogous to the set-theoretic construction of real numbers from rational numbers (i.e., pairs of integers): a real number x "is" an equivalence class of Cauchy sequences of rationals (i.e., all sequences that converge to x).

The analogy is admittedly not very good: the reals have binary operations on them, etc., whereas there isn't really anything analogous for models. However, I think it is good enough to get at the main point: viz. that when one talks about the properties of the set of equivalence classes of Cauchy sequences of rationals, one is doing a different sort of mathematics from the theory of rational numbers: the theory is defined on a different set, so very different sorts of things are true -- reduction to lowest terms in one case, the extreme value theorem in the other -- and the "reduction" of one theory to the other is a reduction of analysis, not to number theory, but to number theory plus set theory.
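The Cauchy-sequence construction can be made concrete with a small computation (my illustration, not part of the post): two visibly different sequences of rationals -- Newton iterates for sqrt 2 started from different points -- belong to the same equivalence class, i.e., name the same real number, because their termwise difference goes to 0.

```haskell
-- Two distinct Cauchy sequences of rationals in the same equivalence
-- class (both "are" sqrt 2). All arithmetic is exact over Rational;
-- only the final printout converts to Double.
step :: Rational -> Rational
step w = (w + 2 / w) / 2   -- Newton's method for w^2 = 2

seqA, seqB :: [Rational]
seqA = iterate step 1      -- 1, 3/2, 17/12, 577/408, ...
seqB = iterate step 3      -- a different sequence, same limit

main :: IO ()
main = print (fromRational (seqA !! 6 - seqB !! 6) :: Double)
-- the termwise difference is tiny: the two sequences are "equivalent"
```

Neither sequence is "the" real number; the real number is the whole bundle of mutually equivalent sequences, which is the sense in which a universality class, not any particular microscopic model, is the macroscopic object.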
It is also well-understood in the mathematical case that the reduction is not useful in that it doesn't help you prove theorems about the reals; its only potential use is in consistency proofs (which are anyhow precluded by Godel's theorems). A similar statement seems to be true in physics -- the theory of fixed points is a theory of equivalence classes of sequences of models; this is not a reduction of many-body physics to particle physics but rather to particle physics plus set theory on renormalization group flows. The clumpy, highly classified, scale-invariant space of macroscopic objects is not like the relatively smooth landscape of parameters allowed by the standard model (or the "landscape" in string theory): the reduction is "useless" in the same sense as above.

This is closely connected to the intuitive point that coarse-graining doesn't preserve distances in parameter space (two very similar microscopic theories can have very different macroscopic limits, etc.), which is why microscopic theories do not constitute explanations of macroscopic phenomena. Batterman is, I think, correct to try to find more formal and precise ways of saying this than just saying that it's a "useful idealization" to think of emergent phenomena as existing -- while strictly speaking this is all that one can say, "useful" is an ambiguous word, and it is worth emphasizing, I think, that emergent phenomena are "useful idealizations" in the same way as real numbers are useful idealizations of the way we talk about rational numbers.

Although I don't understand the holographic principle terribly well, I should note John McGreevy's claim that the (d + 1)-dimensional holographic dual of a d-dimensional model can be understood as a stack of d-dimensional slices of the model at various stages under the renormalization group. (The d+1 dimensional universe has two boundaries: a surface corresponding to the original model, and a point corresponding to its fixed point.)
I suspect that this only really works for the "AdS"-like models, which don't describe the large-scale structure of our universe, but it would be neat if the renormalization group had a "physical" interpretation.

If you haven't read it yet, I recommend Batterman's article on the philosophical connection between emergent phenomena and singularities. It is nice to have philosophers taking the renormalization-group idea seriously, as this idea has had an enormous impact on how physics is done and interpreted by physicists -- at least by theorists -- but hasn't made it to the pop physics books or the undergraduate curriculum. Batterman correctly observes that physicists understand emergent phenomena in terms of the renormalization group, that the renormalization group concept needs limits (like that of infinite system size) to be made precise, and that the limits lead to singularities; he goes on to make what I think are some misleading statements about the interpretation of singularities. In this post I'll try to run through the usual argument and explain how I think the singularities ought to be interpreted.

I understand emergent phenomena in terms of the following analogy. Suppose you drop a ball onto a hilly landscape with friction, and ask where it will end up a very long time later. The answer is evidently one of the equilibrium points, i.e., a summit, a saddle point, or (most likely) a valley. Two further points to be made here: (1) It does not matter where on the hillside the ball started out; it'll roll to the bottom of the hill. In other words, very different initial conditions often lead to the same long-time behavior. (2) It matters very much which side of the summit the ball started out on; small differences in initial conditions can lead to very different long-time behavior. So what constitutes an "explanation" of the properties of the ball (say its response to being poked) a long time after its release?
One possible answer is that, because mechanics is deterministic, once you've described the initial position and velocity you've "explained" everything about the long-time behavior. However, this is unsatisfactory because point (1) implies that most of this "explanation" would be irrelevant, and point (2) implies that the inevitable fuzziness of one's knowledge of initial conditions could lead to radically indeterminate answers. A better answer would be that the explanation naturally divides into two parts: (a) a description of the properties (curvature etc.) of the equilibrium points, and (b) the (generally intractable) question of which basin-of-attraction the ball started out in. In particular, part (a) on its own suffices to classify all possible long-time behaviors; it reduces a very large number of questions (what does the ball smell like? at what speed would it oscillate or roll off if gently poked?) to a single question -- approximately where is it? (Approximate position typically implies exact position in the long-time limit, except if there are flat valleys.) "Emergent" (or "universal") phenomena are descriptions of equilibrium points, i.e., answers to part (a) of the question. The renormalization group concept is the notion that the large-scale behavior of a many-body system is like the long-time behavior of a ball in a frictional landscape, in the sense that it is governed by certain "fixed points," which can be classified, and that theories of these fixed points suffice to describe the large-scale properties of anything. So, for instance, there are three states of matter rather than infinitely many. The analogue of time is the length-scale on which you investigate the properties of the system -- as you go from a description in terms of interacting atoms to one in terms of interacting blobs and so on -- and the analogue of the "loss of information" via friction is the fact that you're averaging over larger and larger agglomerations of stuff. 
(All of this is quite closely related to the central limit theorem.) The role of infinite limits in the former case is obvious: if you start the ball very close to the top of the hill (where, let's say, the slope is vanishingly small), it'll take a very long time to roll off. So the fixed-point idea only really works if you wait infinitely long. However, it's also obvious that if you wait a really really long time and the ball hasn't reached its equilibrium, this is because it is near another equilibrium; so the equilibrium description becomes arbitrarily good at arbitrarily long times. (This is of course just the usual real-analysis way of talking about infinities.) The infinite-system-size limit is precisely analogous: while it only strictly works in the infinite-size limit, this "infinity" is not a pathology but is to be interpreted in the usual finitist way -- given epsilon > 0 etc. Epsilon-delta statements are true regardless of how far the series is from convergence, but they grow increasingly vacuous and useless as epsilon increases; something similar is true with dynamical systems and the renormalization group. I should explain what this has to do with fractals, by the way. In the case of the ball, a fixed point is defined as a configuration that is invariant under the equations of motion; in the case of the many-body system, a fixed point is a configuration that is invariant under a change of scale, i.e., a fractal. A continuum object is, of course, a trivial kind of fractal; you can't see the graininess of it without a microscope, and it doesn't seem to have any other scale than the size of its container. Systems near phase transitions are sometimes nontrivial fractals -- e.g., helium at the superfluid transition is a fractal network of droplets of superfluid in a bath of normal fluid, or vice versa. Phase transition points, btw, correspond to ridges; if you move slightly away from them, you "flow" into one phase or the other. 
The association between unstable equilibria and nontrivial fractals is not an accident. Any departure from the nontrivial fractal (say in the helium case) leads to either superfluid or normal fluid preponderating at large scales; if you average on a sufficiently large scale the density of droplets of the minority phase goes to zero, and you end up in one trivial phase or the other.

I was talking about Trollope with a well-known physicist last year -- he's a fan, I'm not -- when one of his grad students, who didn't speak much English and hadn't been listening, burst in with "What's a trollop?" I explained that a trollop was a prostitute, whereas A. Trollope was a novelist. Said Trollope fan was outraged at my quip that there are no good English novels between Persuasion and Ulysses (qua novels, that is; Dickens has a lot of virtues, but construction is not among them). Having read Middlemarch, I suppose I have to revise this opinion, which is sad because it was snappy and easily stated. The Trollope fan also claimed that Fanny Burney's Evelina was superior to Tom Jones; I have now dutifully read it, and concluded that my Trollope problem is that I just don't much care for novelists who aren't eyecatching prose stylists. While neither Trollope nor Burney would begin a novel with "Ingenuous debutante Evelina Anville crumpled behind a bush, having been bludgeoned by notable libertine Lord Merton," they aren't Joyce, and the temptation to skim is often overwhelming. Evelina has a lot of nice satirical touches -- esp. the heroine's stay with her bourgeois relatives in London, her reactions to their "vulgarity," and her extreme embarrassment whenever she runs into aristocratic acquaintances -- and is also very good, in ways that anticipate Austen, on how class distinctions and crushes interact. On the whole, though, it doesn't come off.
One of the problems is that the same three aristocrats keep popping up absolutely everywhere, which gives the aristocracy the sense of a claustrophobic little club, and acts at cross purposes with the rest of the plot (which is about an ingenue in the big world). Another is that the posh people don't talk credibly; only the "broad," dialect-speaking characters do. A third is the role of "Sir Clement Willoughby" (btw, the names are not clever at all, another stylistic limitation) who is a rival suitor, a seducer, and, more often than not, a plot device. Mostly, though, it's the drab functional nature of the prose, which is a far cry from Fielding or Smollett; while this was inevitable to some extent in a novel written as a young woman's letters -- "it would be odd for a six-year-old girl to display the character of Winston Churchill" -- (a) that's arguably a statement that the novel was poorly conceived, (b) an easy fix, in this case, would have been to include letters written by the other characters, a la Smollett.

The OED defines "Hodge" as follows:

1. A familiar by-form and abbreviation of the name Roger; used as a typical name for the English agricultural labourer or rustic.

c1386 CHAUCER Cook's Prol. 12 Euer si I highte hogge of ware. [Ibid. 21 Oure host seyde I graunt it the, Now telle on, Roger.]
1483 Cath. Angl. 187/1 Hoge, Rogerus, nomen proprium.
1589 GREENE Menaphon (Arb.) 58 These Arcadians are giuen to take the benefit of euerie Hodge.
a1700 B. E. Dict. Cant. Crew, Hodge, a Country Clown, also Roger.
1794 WOLCOTT (P. Pindar) Wks. III. 350 No more shall Hodge's prong and shovel start.
1826 in Hone Everyday Bk. II. 1210 You seem to think that with the name I retain all the characteristics..of a hodge.
1885 Observer 13 Dec. 5/3 The conduct of Hodge in the recent election.

I wonder if being hodged is similar to being rogered. (He was rogered unto a hodgepodge.)
I also wonder [well, only rhetorically] about whether the next OED will include the standard mathematical sense of "Hodge."

PS: to wander into the country and hang out w/ peasants is to be on a Hodge pilgrimage.

Walking along the beach this evening I saw what I think were whimbrels; I really ought to buy batteries for my camera. I also finally understand what Eliot meant re

Combing the white hair of the waves blown back
When the wind blows the water white and black.

The waves really looked like that. The sea was abnormally fine-textured and matte this evening; although it was sunny, there wasn't the cheap plasticky gloss one associates with the water on nice
Fuzzy Logic Ppt Presentation

FUZZY LOGIC
Presented by: A. Gupta, J. Jain, K. Kedia, N. Gupta, R. Bafna

Index: Brief History; What is fuzzy logic?; Fuzzy vs Crisp Set; Membership Functions; Fuzzy Logic vs Probability; Why use Fuzzy Logic?; Fuzzy Linguistic Variables; Operations on Fuzzy Set; Fuzzy Applications; Case Study; Drawbacks; Conclusion; Bibliography

Brief History
- Classical logic of Aristotle, the Law of Bivalence: "Every proposition is either True or False (no middle)."
- Jan Lukasiewicz proposed a three-valued logic: True, False and Possible.
- Finally, Lotfi Zadeh published his paper on fuzzy logic, a part of set theory that operates over the range [0.0, 1.0].

What is Fuzzy Logic?
- Fuzzy logic is a superset of Boolean (conventional) logic that handles the concept of partial truth: truth values between "completely true" and "completely false".
- Fuzzy logic is multivalued; it deals with degrees of membership and degrees of truth.
- Fuzzy logic uses the continuum of logical values between 0 (completely false) and 1 (completely true), where Boolean logic is crisp.

For example, let a 100 ml glass contain 30 ml of water, and consider two concepts: Empty and Full. In Boolean logic there are only two options for the answer: the glass is either empty or it is full. In the fuzzy view one might define the glass as being 0.7 empty and 0.3 full.
Crisp Set and Fuzzy Set
Classical set theory enumerates all elements: A = {a1, a2, a3, a4, ..., an}. A crisp set A can be represented by its characteristic function:

    μ_A(x) = 1 if element x belongs to the set A, and 0 otherwise.

In fuzzy set theory, elements have varying degrees of membership. A fuzzy set can be represented by A = {(x, μ_A(x))}, where μ_A(x) is the membership grade of element x in the fuzzy set. Example: consider the space X consisting of the natural numbers <= 12, with the crisp set Prime = {x contained in X | x is a prime number} and the fuzzy set

    SMALL = {(1,1), (2,1), (3,0.9), (4,0.6), (5,0.4), (6,0.3), (7,0.2), (8,0.1), (9,0), (10,0), (11,0), (12,0)}

Fuzzy vs Crisp Set
(The slide's figure shows a crisp set A and a fuzzy set A'.) In the crisp set, a is a member of A and b is not a member of A. In the fuzzy set, a is a full member of A', b is not a member of A', and c is a partial member of A'.

Features of a membership function
A membership function is a mathematical function which defines the degree of an element's membership in a fuzzy set. Its regions are:
- Core: region characterized by full membership in set A', i.e. μ(x) = 1.
- Support: region characterized by nonzero membership in set A', i.e. μ(x) > 0.
- Boundary: region characterized by partial membership in set A', i.e. 0 < μ(x) < 1.

Membership Functions
Example: adult(x) = 0 if age(x) < 16 years; (age(x) - 16 years)/4 if 16 years <= age(x) <= 20 years; 1 if age(x) > 20 years.

Fuzzy Logic vs Probability
Both operate over the same numeric range, and at first glance both have similar values: 0.0 representing false (or non-membership) and 1.0 representing true.
In terms of probability, the natural language statement would be "there is an 80% chance that Jane is old," while the fuzzy terminology corresponds to "Jane's degree of membership within the set of old people is 0.80." Fuzzy logic uses truth degrees as a mathematical model of the vagueness phenomenon, while probability is a mathematical model of ignorance.

Why use Fuzzy Logic?
- Fuzzy logic is flexible.
- Fuzzy logic is conceptually easy to understand.
- Fuzzy logic is tolerant of imprecise data.
- Fuzzy logic is based on natural language.

Fuzzy Linguistic Variables
Fuzzy linguistic variables are used to represent qualities spanning a particular spectrum, e.g. Temp: {Freezing, Cool, Warm, Hot}.

Operations on Fuzzy Set
Consider the fuzzy sets A = {1/2 + .5/3 + .3/4 + .2/5} and B = {.5/2 + .7/3 + .2/4 + .4/5} (each term is membership grade / element). The resulting operations are:
- Intersection (A ^ B): μ_{A ∩ B}(x) = min(μA(x), μB(x)) = {.5/2 + .5/3 + .2/4 + .2/5}
- Union (A v B): μ_{A U B}(x) = max(μA(x), μB(x)) = {1/2 + .7/3 + .3/4 + .4/5}
- Complement (¬A): μ_{A'}(x) = 1 - μA(x) = {1/1 + 0/2 + .5/3 + .7/4 + .8/5}

Example: Speed Calculation
How fast will I go if it is 65 F with 25% cloud cover?
Input: Temp: {Freezing, Cool, Warm, Hot}; Cover: {Sunny, Partly cloudy, Overcast}
Output: Speed: {Slow, Fast}

Rules
- If it's Sunny and Warm, drive Fast: Sunny(Cover) ^ Warm(Temp) -> Fast(Speed)
- If it's Cloudy and Cool, drive Slow: Cloudy(Cover) ^ Cool(Temp) -> Slow(Speed)
Driving speed is the combination of the output of these rules.
Fuzzification: calculate input membership levels
- 65 F: Cool = 0.4, Warm = 0.7
- 25% cover: Sunny = 0.8, Cloudy = 0.2

Calculating
- If it's Sunny and Warm, drive Fast: Sunny(Cover) ^ Warm(Temp) -> Fast(Speed); min(0.8, 0.7) = 0.7, so Fast = 0.7.
- If it's Cloudy and Cool, drive Slow: Cloudy(Cover) ^ Cool(Temp) -> Slow(Speed); min(0.2, 0.4) = 0.2, so Slow = 0.2.

Defuzzification: constructing the output
Speed is 20% Slow and 70% Fast. Find the centroids, the locations where membership is 100%: Slow at 25 mph and Fast at 75 mph. Then

    Speed = weighted mean = (0.2 * 25 + 0.7 * 75) / (0.2 + 0.7) = 63.9 mph (approximately)

Fuzzy Applications
- Automobile and other vehicle subsystems: used to control the speed of vehicles, and in anti-lock braking systems.
- Temperature controllers: air conditioners, refrigerators.
- Cameras: for auto-focus.
- Home appliances: rice cookers, dishwashers, washing machines and others.

Drawbacks
- Fuzzy logic is not always accurate. The results can be perceived as a guess, so they may not be as widely trusted.
- It requires tuning of membership functions, which is difficult to estimate.
- Fuzzy logic control may not scale well to large or complex problems.
- Fuzzy logic can be easily confused with probability theory, and the terms used interchangeably. While they are similar concepts, they do not say the same things.

Conclusion
- Fuzzy logic provides a way to calculate with imprecision and vagueness.
- Fuzzy logic can be used to represent some kinds of human expertise.
- The control stability, reliability, efficiency, and durability of fuzzy logic make it popular.
- The speed and complexity of application production would not be possible without systems like fuzzy logic.

Bibliography
Book: Artificial Intelligence by Elaine Rich, Kevin Knight and Shivashankar B. Nair; Internet.
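The speed example can be strung together end to end in a few lines of Python (a sketch; the membership degrees and centroid positions are the slides' numbers, with min() as the standard fuzzy AND):

```python
# Fuzzified inputs, as given on the slides (Temp = 65 F, Cover = 25%):
cool, warm = 0.4, 0.7
sunny, cloudy = 0.8, 0.2

# Rule evaluation, using min() for fuzzy AND:
fast = min(sunny, warm)    # "If it's Sunny and Warm, drive Fast"
slow = min(cloudy, cool)   # "If it's Cloudy and Cool, drive Slow"

# Centroid defuzzification: Slow and Fast peak at 25 and 75 mph.
speed = (slow * 25 + fast * 75) / (slow + fast)
print(f"Slow={slow}, Fast={fast}, Speed={speed:.1f} mph")  # Speed=63.9 mph
```

Note that only the ratio of the rule strengths matters in the weighted mean, which is why the output lands much closer to the Fast centroid.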
Additive Models (Advanced Data Analysis from an Elementary Point of View) The "curse of dimensionality" limits the usefulness of fully non-parametric regression in problems with many variables: bias remains under control, but variance grows rapidly with dimensionality. Parametric models do not have this problem, but have bias and do not let us discover anything about the true function. Structured or constrained non-parametric regression compromises, by adding some bias so as to reduce variance. Additive models are an example, where each input variable has a "partial response function", which add together to get the total regression function; the partial response functions are unconstrained. This generalizes linear models but still evades the curse of dimensionality. Fitting additive models is done iteratively, starting with some initial guess about each partial response function and then doing one-dimensional smoothing, so that the guesses correct each other until a self-consistent solution is reached. Examples in R using the California house-price data. Conclusion: there are no statistical reasons to prefer linear models to additive models, hardly any scientific reasons, and increasingly few computational ones; the continued thoughtless use of linear regression is a scandal. Reading: Notes, chapter 8; Faraway, chapter 12 Posted by crshalizi at February 09, 2012 10:30 | permanent link
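The iterative fitting described above is the backfitting algorithm. A minimal sketch in Python (not the chapter's R code; `running_mean_smoother` is a crude placeholder for a proper one-dimensional smoother such as a kernel or spline smoother):

```python
import numpy as np

def running_mean_smoother(x, r, k=15):
    """Crude 1-D smoother: average r over the 2k+1 points whose x-values
    are nearest in sorted order (a placeholder for a real smoother)."""
    order = np.argsort(x)
    rs = r[order]
    out = np.empty_like(r, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        out[order[i]] = rs[lo:hi].mean()
    return out

def backfit(X, y, smoother=running_mean_smoother, n_iter=20):
    """Fit an additive model y ~ alpha + sum_j f_j(x_j) by backfitting:
    cycle over the inputs, smoothing partial residuals against one input
    at a time, until the partial-response estimates are self-consistent."""
    n, d = X.shape
    alpha = y.mean()
    f = np.zeros((n, d))                   # current estimates of each f_j
    for _ in range(n_iter):
        for j in range(d):
            # partial residuals: y minus everything except feature j
            r = y - alpha - f.sum(axis=1) + f[:, j]
            f[:, j] = smoother(X[:, j], r)
            f[:, j] -= f[:, j].mean()      # center each f_j (identifiability)
    return alpha, f
```

In practice one would use a packaged implementation (such as `gam` in R's mgcv package) rather than this sketch, but the loop above is the whole idea: repeated one-dimensional smoothing, so the curse of dimensionality never enters.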
2. determining probability using mathematical methods to model outcomes of a given situation.
4. events that affect each other.
5. the probability of an event under the condition that some preceding event has occurred.
7. events that do not affect each other.
8. if an event M can occur in m ways and is followed by an event N that can occur in n ways, then the event M followed by the event N can occur in m times n ways.
10. a technique used to model probability experiments for real-world applications.
11. the investigation of the different possibilities for the arrangement of objects.
12. a problem that can be solved using binomial expansion.
13. two events whose outcomes can never be the same.
15. the measure of the chance of a desired outcome happening.
17. the desired outcome of an event.
3.2. Skip list algorithms

Here is a figure showing the insertion of a new element into a skip list.

In the figure above, note the structure labeled "insertion-cut-list" is a list of pointers to the predecessors of the new element at each level. The first element of this list points to the item preceding the insertion in the level-0 list, which visits every element of the list. The second element in the insertion cut list points to the predecessor in the level-1 list, and so on. This insertion cut list shows the position of a child element relative to each of the stacked linked lists. It tells us which links have to be repaired to include the newly inserted child. We call it the insertion cut list to distinguish it from the search cut list; for the reasons behind this distinction, see Section 3.2.2, "The search algorithm".

When we insert a new element, how many of the stacked linked lists should contain the new element? Pugh recommended using a random process based on a probability value p, where the number of levels used for each new element is given by this table:

    Number of levels    Probability
    1                   1 - p
    2                   p(1 - p)
    3                   p^2 (1 - p)
    ...                 ...
    n                   p^(n-1) (1 - p)

For example, for p=0.25, an element has a 3 in 4 probability of being linked into only level 0, a 3/16 probability of being in levels 0 and 1, a 3/64 probability of being in levels 0 through 2, and so forth.

The algorithm for deciding the number of levels is straightforward. For p=0.25, it's like throwing a 4-sided die. If it comes up 1, 2, or 3, use one level. If it comes up 4, roll again. If the second roll comes up 1, 2, or 3, use two levels. If it comes up 4, roll again, and so forth.

We will incorporate the "dirty hack" described in Pugh's article, limiting the growth in the total number of levels in use to one per insertion. This prevents the pathological case where a series of "bad rolls" might create many new levels at once for only a single insertion. Otherwise we might get silly behavior like a 6-level skip list to store ten elements.
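The die-rolling procedure translates directly into code. A sketch in Python (hypothetical names; `current_max` stands in for the skip list's current number of levels, so the cap implements the one-new-level-per-insertion limit):

```python
import random

def random_level(p=0.25, current_max=32):
    """Choose how many stacked linked lists a new element joins.

    Each extra level is won with probability p, giving
    P(levels = n) = p**(n-1) * (1 - p), as in the table above.
    The result is capped at current_max + 1 (the "dirty hack").
    """
    level = 1
    while random.random() < p and level <= current_max:
        level += 1
    return level
```

For p=0.25, roughly three quarters of inserted elements end up living only in the level-0 list, which is what keeps the upper lists sparse.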
Electrical Calculation Busbar ratings are based on the expected surface temperature rise of the busbar. This is a function of the thermal resistance of the busbar and the power it dissipates. The thermal resistance of the busbar is a function of the surface area of the busbar, the orientation of the busbar, the material from which it is made, and the movement of air around it. The power dissipated by the bus bar is dependent on the square of the current passing through it, its length, and the material from which it is made. Optimal ratings are achieved when the bar runs horizontally with the face of the bar in the vertical plane. i.e. the bar is on its edge. There must be free air circulation around all of the bar in order to afford the maximum cooling to its surface. Restricted airflow around the bar will increase the surface temperature of the bar. If the bar is installed on its side, (largest area to the top) it will run at an elevated temperature and may need considerable derating. The actual derating required depends on the shape of the bar. Busbars with a high ratio between the width and the thickness, are more sensitive to their orientation than busbars that have an almost square cross section. Vertical busbars will run much hotter at the top of the bar than at the bottom, and should be derated in order to reduce the maximum temperature within allowable limits. Maximum Busbar ratings are not the temperature at which the busbar is expected to fail, rather it is the maximum temperature at which it is considered safe to operate the busbar due to other factors such as the temperature rating of insulation materials which may be in contact with, or close to, the busbar. Busbars which are sleeved in an insulation material such as a heatshrink material, may need to be derated because of the potential aging and premature failure of the insulation material. 
The Maximum Current rating of Aluminium Busbars is based on a maximum surface temperature of 90 degrees C (or a 60 degree C temperature rise at an ambient temperature of 30 degrees C). If a lower maximum temperature rating is desired, increase the ambient temperature used for the calculations. i.e. If the actual ambient temperature is 40 degrees C and the desired maximum bar temperature is 80 degrees C, then set the ambient temperature in the calculations to 40 + (90 - 80) = 50 degrees C. The Maximum Current rating of Copper Busbars is based on a maximum surface temperature of 105 degrees C (or a 75 degree C temperature rise at an ambient temperature of 30 degrees C).

The Busbar Width is the distance across the widest side of the busbar, edge to edge. The Busbar Thickness is the thickness of the material from which the Busbar is fabricated. If the busbar is manufactured from a laminated material, then this is the overall thickness of the bar rather than the thickness of the individual elements. The Busbar Length is the total length of busbar used.

The Busbar Current is the maximum continuous current flowing through the busbar. The power dissipated in the busbar is proportional to the square of the current, so if the busbar has a cyclic load, the current should be the RMS current rather than the average. If the maximum current flows for a considerable period of time, this must be used as the current to determine the maximum busbar temperature, but the power dissipation is based on the square root of the time-weighted mean of the squared currents: the maximum current squared times the period for which it flows, plus the lower current squared times the period it flows, all divided by the total time, and then square-rooted. For example, a busbar carries a current of 600 Amps for thirty seconds, then a current of 100 Amps for 3000 seconds, then zero current for 3000 seconds. The power dissipation is based on an RMS current of sqrt((600 x 600 x 30 + 100 x 100 x 3000 + 0 x 0 x 3000) / (30 + 3000 + 3000)) = 82.26 Amps.
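The duty-cycle calculation above is a time-weighted RMS; a small helper makes the arithmetic explicit (a sketch for illustration, not part of the calculator program being described):

```python
import math

def rms_current(segments):
    """Equivalent steady current for a repeating duty cycle.

    segments: list of (current_in_amps, duration_in_seconds) pairs.
    Dissipated power goes as I**2, so the equivalent steady current
    is the time-weighted root-mean-square of the segment currents.
    """
    total_time = sum(t for _, t in segments)
    mean_square = sum(i * i * t for i, t in segments) / total_time
    return math.sqrt(mean_square)

# The worked example: 600 A for 30 s, 100 A for 3000 s, 0 A for 3000 s.
print(round(rms_current([(600, 30), (100, 3000), (0, 3000)]), 2))  # 82.26
```

Note how strongly the squaring weights the short 600 A burst: it contributes about a quarter of the total heating despite lasting only 30 seconds out of 6030.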
The Ambient Temperature is the temperature of the air in contact with the busbar. If the air is in an enclosed space, then the power dissipated by the busbar will cause an increase in the ambient temperature within the enclosure. To calculate the rating of a busbar, enter in the width and thickness of the bar, and the ambient temperature around the bar. Select the units as either metric or imperial, and the temperature as Celsius or Fahrenheit. The program displays both the current rating of an aluminium bar of these dimensions and a copper bar of these dimensions. No comments:
West Roxbury ACT Tutor

...In addition, I tutor and have tutored in mathematics from elementary school to high school and throughout college. I have a good command of the material, even well beyond the regents themselves. I've programmed extensively in Mathematica.
47 Subjects: including ACT Math, chemistry, reading, calculus

...If it sounds to you like I can be helpful I hope you will consider giving me a chance. Thanks. My fascination with math really began when I started studying calculus. For more than a decade I have been using calculus to solve a wide variety of complex problems.
14 Subjects: including ACT Math, calculus, geometry, GRE

...I earned a perfect 800 on the SAT Reading section and have been helping students improve their scores for more than ten years. The key to improving in this section is learning how to talk and think about passages like an expert reader: the greatest challenge my students face (especially those wh...
47 Subjects: including ACT Math, chemistry, English, reading

...I currently work as a software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned. I prefer to focus on examples from each section, rather than each specific problem, to make sure they understand all of the concepts.
17 Subjects: including ACT Math, statistics, geometry, algebra 1

...No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at Rutgers University for 10 years. I don't just know the material; I know the student as well. I performed well in my physics courses as an MIT student.
24 Subjects: including ACT Math, chemistry, calculus, physics
Non Degenerate Triangles

Re: Non Degenerate Triangles
Sorry, didn't see 1. through 5.
1. The author's claiming that the maximal k is 1.
2. $b_{2009}$, $w_{2009}$, and $r_{2009}$ are the black, white, red sides, but there is no indication that they are of the same triangle.
3. No. Once you fix an assumption on the ordering, you shouldn't make any other assumptions. It's a little difficult to explain, but I can show you other USAMO/IMO-type problems where you can only make one "WLOG" assumption.
4. Yes.
5. He is basically constructing an arbitrary triangle with $w_{2009} = w$.

Re: Non Degenerate Triangles
OK so since this is an IMO problem, the solution needs explanation too! :P Let's get started.
1. Now the first part underlined in red, I have no idea. How come does he assume that k is 1?
2. The second part in red, why does he need to prove that they are always the sides of a triangle? And why only for index 2009? Why not for others?
3. The first part in black - I know that we can assume this without losing generality, but again, why only 2009? Why not others? Is that because k is 1?
4. The second part in black, shows that 2009 is non-degenerate right?
5. The first part in blue. Is he assigning arbitrary variables to members of the set with index 2009? Why?
6. To the parts in green, I have no idea.

Re: Non Degenerate Triangles
Thanks guys! Got it! I am a hobbyist programmer. I knew it has something to do with an "array" or a "set" in mathematics. But thanks for the explanation.

Re: Non Degenerate Triangles
More commonly, in the United States, called "subscripts".
Re: Non Degenerate Triangles
Indices = plural of index. In this case, an "index" is the set $b_j, r_j, w_j$. For example, $b_1, r_1, w_1$ and $b_{594}, r_{594}, w_{594}$ are considered to be indices. The question is essentially asking, what is the largest possible k such that the statement is always true, that you can form k non-degenerate triangles with side lengths $b_j, r_j, w_j$ for some j.
Last edited by richard1234; June 27th 2012 at 10:35 AM.

Non Degenerate Triangles
OK so since this is an IMO problem, the solution needs explanation too! :P Let's get started.
1. Now the first part underlined in red, I have no idea. How come does he assume that k is 1?
2. The second part in red, why does he need to prove that they are always the sides of a triangle? And why only for index 2009? Why not for others?
3. The first part in black - I know that we can assume this without losing generality, but again, why only 2009? Why not others? Is that because k is 1?
4. The second part in black, shows that 2009 is non-degenerate right?
5. The first part in blue. Is he assigning arbitrary variables to members of the set with index 2009? Why?
6. To the parts in green, I have no idea.

1. The author's claiming that the maximal k is 1.
2. $b_{2009}$, $w_{2009}$, and $r_{2009}$ are the longest black, white, red sides, but there is no indication that they are of the same triangle.
3. No. Once you fix an assumption on the ordering, you shouldn't make any other assumptions. It's a little difficult to explain, but I can show you other USAMO/IMO-type problems where you can only make one "WLOG" assumption.
4. Yes.
5. He is basically constructing an arbitrary triangle with $w_{2009} = w$.

Re: Non Degenerate Triangles
What is the basis of this assumption?
2. $b_{2009}$, $w_{2009}$, and $r_{2009}$ are the longest black, white, red sides, but there is no indication that they are of the same triangle.
Got it.
3. No. Once you fix an assumption on the ordering, you shouldn't make any other assumptions. It's a little difficult to explain, but I can show you other USAMO/IMO-type problems where you can only make one "WLOG" assumption.
Yes, please!
5. He is basically constructing an arbitrary triangle with $w_{2009} = w$.
Shouldn't it be the other way round? If the arbitrary triangle has the length w, then he should assign the value of $w_{2009}$ to $w$, meaning $w = w_{2009}$
It's a little difficult to explain, but I can show you other USAMO/IMO-type problems where you can only make one "WLOG" assumption. Yes, please! 5. He is basically constructing an arbitrary triangle with $w_{2009} = w$. Shouldn't it be the other way round? If the arbitrary triangle has the length w, then he should assign the value of $w_{2009}$ to $w$, meaning $w = w_{2009}$ Re: Non Degenerate Triangles I was meaning to ask... Just like in this problem you mentioned, the solution requires first making a random assumption and then going onto proof it (instead of approaching the assumption from a definite pathway. He assumes that (2i - 1) * (2i) ≤ (2i -1) + (2i) and then goes on to prove it. Do these assumptions come from that so called "mental database" of yours? Or is this just the cherry picked part of the solution (and the real solution process consisted of numerous other assumptions by trial and error)? Re: Non Degenerate Triangles It's like writing a persuasive paper (more or less) -- You state your original claim, then you back it up with evidence. In this case, the author states his claim (k=1) and proves it. Last edited by richard1234; June 30th 2012 at 08:27 AM. Re: Non Degenerate Triangles Yeah but how exactly do you make a claim in the first place? How do you know it WILL be true? Is that because your mental database contains similar problems? And also clear my queries in the post #9 Last edited by cosmonavt; June 29th 2012 at 11:10 PM. Re: Non Degenerate Triangles If it's an IMO problem, you or the author probably would have spent considerable time on this (an hour, maybe two hours). The author has already proven that k=1 using the techniques he is about to describe. Remember, it's just like writing a persuasive paper -- introduce your original claim, and support it with evidence (in this case, proof). Last edited by richard1234; June 30th 2012 at 08:27 AM. 
Re: Non Degenerate Triangles An easy one to start: Find all solutions (x,y,z) in non-negative real numbers that satisfy $x^3 + y^3 = z$ $x^3 + z^3 = y$ $y^3 + z^3 = x$ I think that's what he did. Besides, $w = w_{2009}$ and $w_{2009} = w$ are equivalent. June 29th 2012, 08:33 PM #9 Junior Member June 29th 2012, 09:16 PM #10 Junior Member June 29th 2012, 10:00 PM #11 Super Member June 29th 2012, 11:07 PM #12 Junior Member June 30th 2012, 08:08 AM #13 Super Member June 30th 2012, 08:26 AM #14 Super Member
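For reference, "non-degenerate" in the thread above means the side lengths satisfy the strict triangle inequality. A one-line check (an illustrative helper of mine, not something from the thread):

```python
def is_nondegenerate(a, b, c):
    """Return True if side lengths a, b, c form a non-degenerate triangle,
    i.e. every side is strictly shorter than the sum of the other two."""
    return a + b > c and b + c > a and a + c > b

# The strict inequality rules out "flat" triangles:
print(is_nondegenerate(3, 4, 5))   # True
print(is_nondegenerate(1, 2, 3))   # False: 1 + 2 = 3, degenerate
```

With ≤ instead of < the check would accept collinear "triangles", which is exactly the degenerate case the problem excludes.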
How divisible is the average integer?

I don't know any number theory, so excuse me if the following notions have names that I'm not using. For a positive natural number $n\in{\mathbb N}_{\geq 1}$, define $Log(n)\in{\mathbb N}$ to be the "total exponent" of $n$. That is, in the prime factorization of $n$ it is the total number of primes being multiplied together (counted with multiplicity); for example $Log(20)=3$. I'll define $log_2(n)\in{\mathbb R}$ to be the usual log-base-2 of $n$, so $log_2(20)\approx 4.32$. One can think of $log_2(n)$ as "the most factors that $n$ could have" and think of $Log(n)$ as the number of factors it actually has. Define $D(n)$ to be the ratio of those quantities
$$D(n)=\frac{Log(n)}{log_2(n)}\in(0,1],$$
and call it the divisibility of $n$. Hence, powers of 2 are maximally divisible, and large primes have divisibility close to 0. Another example: $D(5040)=\frac{8}{12.3}\approx 0.65$, whereas $D(5041)\approx\frac{2}{12.3}\approx 0.16$.

Question: What is the expected divisibility $D(n)$ for a positive integer $n$? That is, if we define
$$E(p):=\frac{\sum_{n=1}^p D(n)}{p},$$
the expected divisibility for integers between 1 and $p$, I want to know the value of
$$E:=\lim_{p\rightarrow\infty}E(p),$$
the expected divisibility for positive integers.

1. I once wrote and ran a program to determine $E(p)$ for input $p$. My recollection is a bit faint, but I believe it calculated $E(10^9)$ to be about $0.19$.
2. A friend of mine who is a professor in number theory at a university once guessed that $E$ should be 0. I never understood why that would be.

Tags: nt.number-theory factorization

Comment: This might help: mathworld.wolfram.com/PrimeFactor.html – Anweshi Jul 24 '10 at 2:24

Answer 1 (accepted):
Hopefully I've read all your notation correctly. If so, by playing (very) fast and loose with heuristics, I think your friend is right that the answer is 0.
Your function $Log(n)$ is the additive function $\Omega(n)$. According to the MathWorld entry, $\Omega(n)$ has been dubbed the "multiprimality of $n$" by Conway, and satisfies
$$\Omega(n)\sim \ln\ln(n)+\text{mess},$$
so (very roughly),
$$D(n)\sim \frac{\ln\ln(n)}{\ln(n)},$$
and
$$E(p)\sim \frac{1}{p}\int_e^p \frac{\ln\ln n}{\ln n}\,dn.$$
This goes to 0 (very very slowly) as $p\rightarrow\infty$.

Comment: Thanks to Dror Speiser for catching some errors in an earlier version of this. – Cam McLeman Jul 25 '10 at 12:02

Answer 2:
Hardy and Wright define $\Omega(n)$ to be the sum of the exponents. This is on page 354 in the fifth edition, section 22.10. Then we have Theorem 430: the "average order" of $\Omega(n)$ is $\log\log n$. Then Theorem 431: same answer for the "normal order."

I just saw Cam's answer; I think I will leave this anyway. The result on the average order answers your question. The book is "An Introduction to the Theory of Numbers."
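The questioner's empirical claim ($E(10^9)\approx 0.19$) is easy to spot-check at small $p$ with a short script. This is my own sketch, not the original program; note that $n=1$ is skipped, since $D(1)$ would be $0/0$:

```python
import math

def big_omega(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, p = 0, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            count += 1
        p += 1
    if n > 1:          # leftover prime factor
        count += 1
    return count

def divisibility(n):
    """D(n) = Omega(n) / log2(n); defined for n >= 2."""
    return big_omega(n) / math.log2(n)

def expected_divisibility(p):
    """E(p): average of D(n) over 2 <= n <= p (n = 1 is skipped)."""
    return sum(divisibility(n) for n in range(2, p + 1)) / p

print(big_omega(20))                          # 3, since 20 = 2^2 * 5
print(round(divisibility(5040), 2))           # 0.65, matching the question
print(round(expected_divisibility(10_000), 3))
```

Running this for increasing $p$ shows the average decreasing extremely slowly, consistent with the accepted answer's $\ln\ln n / \ln n$ heuristic.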
calculating the magnitude of the force on an electron

Well, using the right-hand rule, which says that the current, force, and magnetic field are at 90 degrees to each other, you'd see that if the velocity direction is parallel to the magnetic field you have no force. Use the equation $F = BQv\sin\theta$ to get the force for part b.
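The formula quoted above is easy to evaluate directly. A minimal sketch (my own illustration; the field strength and speed below are made-up example values, not numbers from the thread):

```python
import math

def magnetic_force(B, q, v, theta_deg):
    """F = B*q*v*sin(theta): magnitude of the magnetic force on a charge q
    moving at speed v at angle theta (degrees) to a field of strength B."""
    return B * q * v * math.sin(math.radians(theta_deg))

e = 1.602e-19        # elementary charge in coulombs
# Illustrative values (not from the thread): 0.5 T field, 2e6 m/s electron.
print(magnetic_force(0.5, e, 2e6, 90))   # maximum force, sin(90 deg) = 1
print(magnetic_force(0.5, e, 2e6, 0))    # parallel to the field: zero force
```

The second call shows the point made in the post: velocity parallel to the field ($\theta = 0$) gives no force at all.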
Summary: UNIFORM GROWTH OF POLYCYCLIC GROUPS

1. Introduction

The Milnor-Wolf Theorem characterizes the finitely generated solvable groups which have exponential growth; a finitely generated solvable group has exponential growth iff it is not virtually nilpotent. Wolf showed that a finitely generated nilpotent-by-finite group has polynomial growth; he then extended this by proving that polycyclic groups which are not virtually nilpotent have exponential growth [8]. On the other hand, Milnor [5] showed that finitely generated solvable groups which are not polycyclic have exponential growth. In both approaches exponential growth can be deduced from the existence of a free semigroup [1, 6]. In this article we elaborate on these results by proving that the growth rate of a polycyclic group of exponential growth is uniformly exponential. This means that the base of the rate of exponential growth ω(S, G) is bounded away from 1, independent of the set of generators S; that is, there is a constant ω(G) so that ω(S, G) ≥ ω(G) > 1 for any finite generating set. The growth rate is also related to the spectral radius µ(S, G) of the random walk on the Cayley graph with the given set of generators. The exponential polycyclic groups are an important class of groups for resolving the question of whether or not exponential growth is the same as uniform exponential growth.
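As a toy illustration (mine, not from the paper) of why containing a free semigroup forces exponential growth: in the free semigroup on k generators all words are distinct, so the number of words of length at most n is a geometric sum, and its n-th root stays bounded away from 1:

```python
def ball_size(k, n):
    """Words of length 1..n in the free semigroup on k generators:
    k + k^2 + ... + k^n.  All words are distinct, so growth is exponential."""
    return sum(k**i for i in range(1, n + 1))

# The n-th root of the ball size estimates the growth rate; it approaches
# k, the number of generators, rather than dropping toward 1:
for n in (5, 10, 20):
    print(n, ball_size(2, n) ** (1 / n))
```

For an actual group one would count distinct elements rather than words, which only shrinks the balls; the paper's point is that for exponential polycyclic groups this rate is bounded below by a constant that does not depend on the generating set.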
Physics Forums - View Single Post - Relative time space travel

Quote:
"T = 3 / sqrt(1 - (2.55 x 10^8)^2 / (2.55 x 10^8)^2) ... i get stuck here because you can't have a zero in the denominator"

Hmm, you've done:
[tex]T= \frac{3}{\sqrt{1- \frac{v^2}{v^2}}}[/tex]
I'm guessing you've accidentally done this?
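The formula the poster is presumably aiming for is the time-dilation factor $T = t/\sqrt{1 - v^2/c^2}$, with the speed of light $c \approx 3\times10^8$ m/s in the second square rather than $v$ again, so the denominator is nonzero for any $v < c$. A quick sketch of mine:

```python
import math

def dilated_time(t, v, c=3.0e8):
    """T = t / sqrt(1 - v^2/c^2): dilated time for a clock moving at speed v."""
    if v >= c:
        raise ValueError("v must be less than c")
    return t / math.sqrt(1 - (v / c) ** 2)

# The poster's numbers: proper time 3 at v = 2.55e8 m/s (so v/c = 0.85).
print(round(dilated_time(3, 2.55e8), 2))   # 5.69
# Writing v^2/v^2 instead of v^2/c^2 gives sqrt(1 - 1) = 0 in the
# denominator, which is exactly the division by zero the poster hit.
```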
Name some famous Ancient Indian Mathematicians? What are their contributions to mathematics?

I want the answers as quickly as possible. Please answer me if anybody knows it. Thank you.

Best Answer (Asker's Choice):
The contributions of the Indian mathematicians start from the Indus valley civilization and the Vedas, up to the modern times. Indian mathematicians have made a number of important contributions to mathematics, including place-value arithmetical notation and the concept of zero. During the Vedic Age, the noted Indian mathematicians include the names of Apastamba, Baudhayana, Katyayana, Manava, Panini and Yajnavalkya, who was credited with authorship of the Shatapatha Brahmana, which contains calculations related to altar construction. Baudhayana, the mathematician and also a priest, was noted as the author of the earliest Sulba Sutra, appendices to the Vedas giving rules for the construction of altars, called the Baudhayana Sulbasutra. This book contains several important mathematical results. The Dharmasutra of Apastamba forms a part of the larger Kalpasutra of Apastamba, containing almost thirty prasnas, a word which literally means "questions" or books. The subjects of this Dharmasutra are well organized and conserved in good condition. These prasnas comprise the Srautasutra followed by the Mantrapadha, which is used in domestic rituals and is a collection of ritual formulas. Aryabhatta is the first in the line of great Indian mathematician-astronomers from the classical age of Indian mathematics and Indian astronomy. His most famous works are the Aryabhatiya and Arya-Siddhanta.

Bhaskara was one of the 7th-century Indian mathematicians, who was apparently the first to write numbers in the Hindu-Arabic decimal system with a circle for the zero, and who gave a unique and noteworthy rational approximation of the sine function in his commentary on Aryabhata's work. Jayadeva was a ninth-century Indian mathematician who further worked on the cyclic method (chakravala method), which was called by Hermann Hankel the finest achievement in the theory of numbers before Lagrange (18th century). He also made momentous contributions to combinatorics. Gopala, one of the noted Indian mathematicians, studied the Fibonacci numbers in 1135, more than half a century before Fibonacci popularised these numbers in Europe. Pingala was an ancient Indian writer, famous for his work the Chandas Shastra, a Sanskrit treatise on prosody considered one of the Vedanga, and he developed advanced mathematical concepts for describing the patterns of prosody. Some of the famous Indian mathematicians of the later age are Srinivasa Ramanujan, A. A. Krishnaswami Ayyangar, Prasanta Chandra Mahalanobis, D. K. Ray-Chaudhuri, Harish-Chandra, Calyampudi Radhakrishna Rao, Shreeram Shankar Abhyankar, Ramdas Lotu Bhirud, Jayant Narlikar and many others.

Other Answers:
1. Bhaskara
2. Lilavati (Bhaskara's daughter)
3. Varaha mihira
4. Aryabhata
5. Brahma gupta
I use here the Sanskrit phonetic practice of ending names with a vowel and not the Hindi-Urdu (Hindustani) rule, and that is why every name ends with an 'a'. Many things in mathematics with which we are familiar now are attributed to these (and some more). The format, notation and presentation of today are entirely different from those in ancient times. So, to say with certainty "Yes! this is his invention" is difficult. Even though the foundation for calculus was laid by both Newton and Leibnitz, Newton's notation is cumbersome and we follow Leibnitz's.

Almost all ancient Indian mathematicians were concerned with astronomy and solving astronomical problems. This is so because, unlike today, those people used to look at the sky daily and unwittingly keep track of happenings there (primarily things like the moon's phases). Much of this knowledge is with us for calculations (algorithms) in the Panchangam, the proof of which is the accuracy of predictions of eclipses. For instance, Aryabhata devised a notation for trigonometrical functions, not in the way we write, like 'sine' or 'cosine', but the 'versine' (versed sine) of the arc of the angle. Roughly, their contributions are in algebra (Beeja ganitham), the proof of the Pythagorean theorem for a right-angled triangle, the solution of the quadratic equation, continued fractions, number theory (since 'zero' (0) was formulated and used by them much before the Romans, who were grappling with Roman numerals and the representation of numbers with them), and elements of trigonometry. I suggest you go to Wikipedia now, armed with the above names.
parabolic immediate basins always simply connected?

Edit: So, my original question (stated below) was to find an error in my "proof" that immediate parabolic basins for rational maps are always simply connected. Since I have not received any answers as of yet, I would ask alternatively if someone could point out an explicit example of a rational map with a parabolic fixed point which has a non-simply connected immediate basin, so that I could hopefully check by examining this example where my argument goes wrong.
Kind regards, an idiot

Hello! Sorry if this seems stupid. I know there must be an error in my thinking. Let $f$ be a rational map on the Riemann sphere with a parabolic fixed point $f(z_0)=z_0$, $f'(z_0)=e^{2\pi i t}$ with $t\in \mathbb{Q}$. I will try to demonstrate that each parabolic immediate basin is simply connected. I know this is wrong. But I don't find the mistake in my "proof". So please help me out.

By the Leau-Fatou Flower theorem, in each immediate parabolic basin of $z_0$ there is an attracting petal $V$ such that each point in that immediate basin tends to $z_0$ via $V$. Let $V_0$ be such a petal, and for simplicity's sake let's say that $f(\overline{V_0})\subset V_0\cup\{z_0\}$ (i.e. no periodic jumping between different petals; this can be achieved by simply taking an iterate $f^n$ instead of $f$ for suitable $n$). So we have:

- $f(\overline{V_0})\subset V_0\cup\{z_0\}$
- $V_0\subset A^*(z_0)$ open and simply connected
- for every $z\in A^*(z_0)$ there is some $n\in\mathbb{N}$ with $f^n(z)\in V_0$

We may slightly shrink $V_0$ if necessary such that $\partial V_0$ does not contain any postcritical points and $\overline{V_0}$ is homeomorphic to a closed disk. Now for $k\in\mathbb{N}$ let $V_k$ be the component of $f^{-k}(V_0)$ that contains $V_0$. It's easy to see that $A^*(z_0)=\cup_{k=0}^{\infty}V_k$. If $A^*(z_0)$ is not simply connected then there must be a minimal $m\in\mathbb{N}$ such that $V_m$ is not simply connected.

In that case let $B$ be a component of $\hat{\mathbb{C}}-\overline{V_m}$ such that $\partial B$ does not contain $z_0$. Then $\partial B\subset \partial V_m$ and so $f(\partial B)\subset\partial V_{m-1}$. Since $\partial V_0$ contains no postcritical points, $\cup_{k=0}^m \partial V_k$ contains no critical points. Thus $f^m$ is locally injective on $\partial B\subset\partial V_m$ and $f^m(\partial B)$ is a full component of $\partial V_0$ (proper covering), hence $f^m(\partial B)=\partial V_0$, since $\partial V_0$ has only one component. But then there is $z\in\partial B\subset F(f)$ with $f^m(z)=z_0\in J(f)$. That's a contradiction.

Can someone help me see my mistake? I hope it's a simple one.

Tags: complex-analysis complex-dynamics

Comment: Just to bring this question to the top once more (hope this is allowed). I just edited the question and asked an alternative question which might help me. Kind regards, an idiot – idiot_1337 Aug 12 '12 at 12:11

Answer (accepted):
The mistake is in the statement that $\partial B\subset F(f)$. There can be points on $\partial B$ and $\partial V_m$ which are in $J(f)$, namely preimages of $z_0$ :-)

An example is $f(z)=z+1-1/z$. There is one petal for the neutral point at infinity. Let $A$ be the domain of attraction of $\infty$. Critical points are $\pm i$. Everything is symmetric with respect to the real line, because the function is real. One critical point is in $A$, so by symmetry the other one is also in $A$. The map $f:A\to A$ is 2-to-1 (because $f$ is of degree $2$), so Riemann and Hurwitz tell us that $A$ is infinitely connected.

Comment: Thank you. And sorry, I was very sloppy. Where I wrote "In that case let $B$ be a component of $\hat{\mathbb{C}}-\overline{V_m}$", I now added: "such that $\partial B$ does not contain $z_0$." But I see now that this also does not help. Thank you very much. I truly am an idiot. And thanks for the example. May the force be with you. – idiot_1337 Aug 12 '12 at 14:11
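As a rough numerical illustration of the accepted answer's example $f(z)=z+1-1/z$ above (my own sketch, not part of the thread): orbits in the basin of the parabolic point at infinity drift off toward infinity, gaining roughly one unit per step once $|z|$ is large.

```python
def f(z):
    """The rational map from the accepted answer, f(z) = z + 1 - 1/z."""
    return z + 1 - 1/z

# Follow one orbit starting in the basin of infinity; the -1/z term becomes
# negligible and each step adds roughly 1, so the orbit escapes.
z = complex(0.5, 0.5)
for _ in range(100):
    z = f(z)
print(abs(z) > 10)   # True: the orbit has escaped toward infinity
```

Of course this only samples one orbit; the structural facts in the answer (both critical points in $A$, degree-2 covering, Riemann-Hurwitz) are what actually show the basin is infinitely connected.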
Homework Help: Recent Homework Questions About Geometry

Geometry
to nearest degree, what is angle M
Tuesday, November 19, 2013 at 6:21pm

Geometry
if Theta is 90 deg, you have to conclude that all sides are congruent.
Tuesday, November 19, 2013 at 5:30pm

Geometry (math)
Tuesday, November 19, 2013 at 4:20pm

Geometry
In most geometry courses, we learn that there's no such thing as "SSA Congruence". That is, if we have triangles ABC and DEF such that AB = DE, BC = EF, and angle A = angle D, then we cannot deduce that ABC and DEF are congruent. However, there are a few special ...
Tuesday, November 19, 2013 at 1:47pm

Geometry (math)
Yes it is. That really made sense. Merci :)
Tuesday, November 19, 2013 at 11:36am

Geometry (math)
True. No need to draw a figure, just look at the screen on your computer: let s be the left side of the screen, let q be the top, and let p be the bottom of your screen. Is the top of your computer screen not parallel to its bottom?
Tuesday, November 19, 2013 at 10:51am

Geometry (math)
Suppose line p is perpendicular to line s and line q is perpendicular to line s. Are p and q parallel under all circumstances? Draw a figure to illustrate your answer. I doubt if we can post pictures on this, because my question asks to draw a figure. I seriously need an answer...
Tuesday, November 19, 2013 at 9:57am

Geometry (math)
Determine whether line QV and line RM are parallel, perpendicular, or neither?
Tuesday, November 19, 2013 at 9:51am

Geometry (math)
Thank you :)))
Tuesday, November 19, 2013 at 9:49am

Geometry
Use the results of the ratio properties I gave you in your previous post.
Tuesday, November 19, 2013 at 8:54am

Geometry
For the question to be "workable", AF is probably the median, with F on BC. Remember the centroid cuts the median in the ratio 2:1, or AG:GF = 2:1, so FG:AF = 1:3.
(x+8)/(9x-6) = 1/3
9x - 6 = 3x + 24
6x = 30
x = 5
Tuesday, November 19, 2013 at 8:52am

Geometry - incomplete
I guess you forgot to say where F is.
Tuesday, November 19, 2013 at 5:26am

geometry
triangle HJK with medians HN, JL, and KM and centroid P
1) PN = ? HN
2) PL = ? JP, KP = ? KM
Tuesday, November 19, 2013 at 3:21am

geometry
point G is the centroid of triangle ABC; find the value of x if FG = x+8 and AF = 9x-6
Tuesday, November 19, 2013 at 3:18am

Geometry (math)
I suggest you draw the figure; it's easier to see that way. The planes that intersect are: (1) the plane containing the cross-sectional area (one triangle) and the plane containing one lateral surface area (one rectangle); and (2) the planes containing two lateral surface ...
Tuesday, November 19, 2013 at 3:12am

Geometry (math)
If the bases are ABC and DEF, then the sides could be ABDE and BCEF, intersecting along the line BE.
Tuesday, November 19, 2013 at 12:04am

Geometry (math)
Identify 2 planes in a triangular prism that intersect. Name their intersection.
Monday, November 18, 2013 at 11:46pm

geometry
Arrow pointing to the bottom left
Monday, November 18, 2013 at 9:08pm

geometry
can it be rs for the answer or what, and this is what i need help on: If AB = 6, ST = 8, AC = 12, A = 40°, T = 20°, then find the length of RS. please and thank u.
Monday, November 18, 2013 at 2:06pm

Geometry
See previous post: Sat, 11-16-13, 7:13 PM.
Sunday, November 17, 2013 at 3:14pm

geometry
1. Triangle #1: a = 8, b = 15, c = 18. Triangle #2: d = 10, e = ?, f = ?.
a/d = b/e = c/f = 8/10
15/e = 8/10, so e = 18.75
18/f = 8/10, so f = 22.5
2. Scale factor = 5/4 = 15/12 = 25/20 = 1.25
3. x/r = y/p = z/n
Sunday, November 17, 2013 at 3:06pm

geometry
If a decorative sign in the form of a circle has a diameter of 10 feet, what is the area of the sign, to the nearest square foot?
A. 79 ft^2  B. 16 ft^2  C. 157 ft^2  D. 31 ft^2
Sunday, November 17, 2013 at 1:19pm

geometry
What is the altitude of a rhombus if its area is 50 square meters and the length of one side is 12.5 meters?
A. 10 m  B. 4 m  C. 7.5 m  D. 12.5 m
Sunday, November 17, 2013 at 1:11pm

geometry
A monument in the form of a marble cylinder has a circular base with a radius of 1.5 meters. The altitude of the monument is 3.5 meters. How many cubic meters of marble does this monument contain?
A. 7.07 m^3  B. 21.21 m^3  C. 24.74 m^3  D. 6.19 m^3
Sunday, November 17, 2013 at 12:31pm

geometry easy
So, if these are easy, what's the trouble?
#1. The similar triangle's sides are in the ratio 10:8 = 5/4 as big as the smaller triangle's.
#2. The scaling is obviously 5/4 = 15/12 = 25/20
#3. Assuming the vector signs, x/r = y/p and two others
#4. Since the sides ...
Sunday, November 17, 2013 at 7:32am

geometry easy
The sides of a triangle are 8, 15 and 18; the shortest side of a similar triangle is 10. How long are the other sides? Find the scale factor of similar triangles whose sides are 4, 12, 20 and 5, 15, 25. Assume that triangle xyz is similar to triangle rpn with x(ray sign) r and p(ray ...
Sunday, November 17, 2013 at 7:26am

geometry
A segment is shifted 2 units up and 3 units to the left. A triangle is shifted 2 units to the right and 3 units down. A segment is shifted 2 units to the right and 3 units down. A triangle is shifted 2 units up and 3 units to the left.
Saturday, November 16, 2013 at 3:34pm

geometry
AB = AE, BC = DE; prove ∠1 = ∠2
Saturday, November 16, 2013 at 1:51pm

Geometry
How are R, S, T arranged? in a line? in a triangle? If in a line, which point is in the middle?
Friday, November 15, 2013 at 12:08pm

Geometry - eh?
Neither do I. How are R, S, T arranged?
Friday, November 15, 2013 at 12:05pm

Geometry
(b) as your text surely stated.
Friday, November 15, 2013 at 12:04pm

math (please help)! Geometry ***(PLEASE CHECK MY ANSWER)***
1. If triangle RST = triangle NPQ, which of the following is true?
A.) ∠R = ∠P
B.) ∠R = ∠Q
C.) ∠T = ∠P
D.) ∠T = ∠Q <----- My ...
Friday, November 15, 2013 at 10:24am

Geometry
The measure of an angle formed by two secants intersecting outside the circle equals:
- ½ the sum of the intercepted arcs
- ½ the difference of the intercepted arcs
- ½ the measure of the intercepted arc
Friday, November 15, 2013 at 9:20am

Geometry
If RS = 4x, line RS ≅ line ST and RT = 24, then find the value of x. Can someone please help me, i don't understand this at all!
Friday, November 15, 2013 at 8:50am

Geometry
If ST = y + 2, RT = 5y - 10 and RS = 2y, then find the value of y and all three lengths.
Friday, November 15, 2013 at 8:03am

Geometry
Concurrent lines are lines that intersect at a single point. Well, a figure cannot be shown here, but try to draw/imagine. For instance, in an x-y plane, you have three concurrent lines that intersect at the origin. Now, in order to have one pair of parallel lines, the ...
Thursday, November 14, 2013 at 3:43pm

Geometry
If you have four lines how can you position them so that three are concurrent with each other and two are parallel to each other?
Thursday, November 14, 2013 at 2:03pm

High School Geometry
Note that ADE + BDE = 180:
74 - 8x + 2x + 94 = 180
-6x = 12
x = -2
So, ADC = 86°, meaning plane ABE is not ⊥ plane m. So, A and B are No and C is Yes.
Thursday, November 14, 2013 at 4:28am

High School Geometry
I am having a hard time understanding how A and B can be No, but C can be Yes on this problem i.imgur (dot)com/rpJQDf3.png I thought that since AD is not perpendicular to the plane m then no lines shooting off of D onto the plane could be perpendicular.
Wednesday, November 13, 2013 at 10:13pm

Geometry
Which equation can be used to find the volume of a sphere that has a radius of 9 inches?
Wednesday, November 13, 2013 at 1:21pm

math geometry
how can i prove in geometry proofs that something is a midpoint of a line? thanks
Tuesday, November 12, 2013 at 5:38pm

geometry
it is not a right triangle. the easiest way i determine is if they are factors of 3, 4 and 5
Tuesday, November 12, 2013 at 5:37pm

geometry
m∠1 = 6x and m∠3 = 120. Find the value of x for p to be parallel to q. The diagram is not to scale.
Tuesday, November 12, 2013 at 3:14pm

geometry
Centre is at (9,0), radius = √484 or 22. (Use ^ to indicate an exponent, such as (x-9)^2 + y^2 = 484.)
Tuesday, November 12, 2013 at 11:31am

geometry
Think of laying out the 3 given sides in a straight line to get a sum of 2011+2012+2013 = 6036. As long as x is < 6036 you will be able to "kink" the 3 lines at their joints and form a quad. If x = 6036, they will all lie on a straight line. If x > 6036, you can...
Tuesday, November 12, 2013 at 11:23am

geometry
Given the equation of the circle (x - 9)^2 + y^2 = 484, where is the center of the circle located?
Tuesday, November 12, 2013 at 9:51am

geometry
The lengths of the sides of a quadrilateral are 2011 cm, 2012 cm, 2013 cm and x cm. If x is an integer, what is the largest possible value of x? Help is badly needed. my teacher frightens me huhu.
Tuesday, November 12, 2013 at 8:40am

geometry PLEASE
Going back to your previous posts, I did read #5 carefully when you posted these before. At the end you said "an equilateral circumscribed about the same circle". I can only interpret that as "an equilateral triangle circumscribed about the same circle", and...
Sunday, November 10, 2013 at 10:15am

geometry PLEASE
imageshack(.)(us)/photo/my-images/42/2vja.jpg/ Open that link please, just delete the parentheses. Please please help me with my assignments, I beg you, cooperate with me huhu. 1. in the figure, the areas of triangle cef, triangle abe, triangle adf are 3, 4, and 5 ...
Sunday, November 10, 2013 at 6:20am

geometry
What about the triangle? uhm. . . no triangle there..
Sunday, November 10, 2013 at 5:27am

geometry ratio
why did u include triangle? wait i don't understand
Sunday, November 10, 2013 at 5:27am

geometry
25/x = sin 30
Saturday, November 9, 2013 at 3:44pm

geometry
an escalator lifts people to the second floor, 25 ft above the first floor. the escalator rises at a 30° angle. How far does a person travel from the bottom to the top of the escalator?
Saturday, November 9, 2013 at 3:35pm

geometry
I did #5 for you on Thursday. http://www.jiskha.com/display.cgi?id=1383829501 Did you not look at it? For the above, it says "your image has been removed".
Saturday, November 9, 2013 at 12:22pm

geometry
i submitted the url but they said theyre still going to review it. what to do
Friday, November 8, 2013 at 6:09am

geometry
You can upload your image to this website: http://imageshack.us/ Then when you get the URL for your image, post it here by separating the elements ... example: imageshack (dot) us (slash) ...
Thursday, November 7, 2013 at 9:00am

geometry ratio
make a good sketch; let each side of the equilateral triangle be 2. Then the height of the equilateral triangle, using the 30-60-90 triangle ratios, is √3. Area of triangle = (1/2)(2)√3 = √3. You should have a small equilateral triangle sitting on top of the...
Thursday, November 7, 2013 at 8:49am

geometry
where do you think should i post the link to see the figures pictures please?
please help huhu Thursday, November 7, 2013 at 8:18am Open that link please please help me with my assignments 1. in the figure , the areas of traingle cef, triangle abe, triangle adf are 3,4, and 5 respectively. find the area of triangle aef 2. equialateral triangle abc has an area of square root of 3 and side of length 2. point... Thursday, November 7, 2013 at 8:17am geometry ratio 5. find the ratio between the area of a square inscribed in a circle and an equilateral circumscribed about the same circle. Thursday, November 7, 2013 at 8:05am where to locate it please? Thursday, November 7, 2013 at 8:00am The area of similar objects is proportional to the square of their sides: 16/9 = 4^2/3^2, so the ratio of their sides is 4:3. The volume of similar objects is proportional to the cube of their sides: 4^3/3^3 = 128/x, 64/27 = 128/x, 64x = 27(128), x = 27(128)/64 = 54 Wednesday, November 6, 2013 at 11:35pm the lateral areas of two similar solids are 16π cm² and 9π cm². The volume of the larger solid is 128π cm³. find the volume of the smaller solid please explain Wednesday, November 6, 2013 at 10:10pm Right so far. 2x^2 = 144, x^2 = 72, x = 8.4852813 Each side = 8.4852813 feet A = s^2 A = 72 square feet Wednesday, November 6, 2013 at 7:08pm I manage to get X^2+X^2=144 Wednesday, November 6, 2013 at 6:54pm Use the Pythagorean Theorem. What do you get for the measurement of a side of the square? Wednesday, November 6, 2013 at 6:45pm The diagonal of a square measures 12 feet. a. What is the exact measure of a side of the square? b. What is the area of the square Wednesday, November 6, 2013 at 6:39pm Pythagorean Theorem a^2 + b^2 = c^2 8^2 + 8^2 = c^2 64 + 64 = c^2, so c^2 = 128 and c = √128 ≈ 11.3 Wednesday, November 6, 2013 at 5:46pm Find, to the nearest tenth of a centimeter, the length of a diagonal of a square if the measure of one side is 8.0 centimeters. Wednesday, November 6, 2013 at 5:34pm sorry, "file:///C:..." indicates a file on your computer, which we cannot access.
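The similar-solids computation in this thread can be verified numerically (lateral areas 16π and 9π, larger volume 128π, all from the question; the π factors cancel):

```python
import math

area_ratio = 16 / 9              # lateral areas scale with the square of the side ratio
k = math.sqrt(area_ratio)        # linear scale factor between the solids, 4/3
smaller_volume = 128 / k**3      # volumes scale with the cube: 128 * 27/64 = 54
print(smaller_volume)            # 54, i.e. 54π cm³ once the π is restored
```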
Wednesday, November 6, 2013 at 3:28pm what is true of rectangle that isn't true or parallelograms in general? Wednesday, November 6, 2013 at 12:25pm geometry ratio 5. find the ratio between the area of a square inscribed in a circle and an equilateral circumscribed about the same circle. Wednesday, November 6, 2013 at 8:28am Here are the figures. file:///C:/Users/ALIZAJOY/Pictures/Untitled.png Open that link please please help me with my assignments 1. in the figure , the areas of traingle cef, triangle abe, triangle adf are 3,4, and 5 respectively. find the area of triangle aef 2. ... Wednesday, November 6, 2013 at 8:27am The diagram shows to triangles connected at one vertex. The tops of the triangles are both right angles. The angles closest to the vertex are 30 degrees and the other is a variable (c). I'm confused on how to find the value of c, without the value of the space between c ... Tuesday, November 5, 2013 at 10:32pm 2(2x+7)=5x-1 4x+14=5x-1 14=x-1 15=x 2x+7 2(15)+7 30+7 =37 Tuesday, November 5, 2013 at 7:44pm 2(2x+7)=5x-1 4x+14=5x-1 14=x-1 15=x Tuesday, November 5, 2013 at 7:42pm Tuesday, November 5, 2013 at 5:27pm http://www.regentsprep.org/Regents/math/geometry/GCG3/PracDistance.htm http://www.purplemath.com/modules/distform.htm https://www.khanacademy.org/math/algebra/linear-equations-and-inequalitie/ Saturday, November 2, 2013 at 7:42pm Thinking Mathematically - Geometry Can't even draw a diagram? Draw a vertical line. That's the rocket's path. Label the bottom of the line B and place point T up somewhere and label BT with the number "4" the rocket's height. Now draw a horizontal line through B, and at distance "... Saturday, November 2, 2013 at 5:15pm Thinking Mathematically - Geometry Use the Pythagorean Theorem. http://www.mathsisfun.com/pythagoras.html Saturday, November 2, 2013 at 4:48pm Thinking Mathematically - Geometry A rocket ascends vertically after being launched from a location that is midway between two ground-based tracking stations. 
When the rocket reaches an altitude of 4 kilometers, it is 5 kilometers from each of the tracking stations. Assuming that this is a locale where the ... Saturday, November 2, 2013 at 4:37pm Thinking Mathematically - Geometry Ok. I got it. Thank you! Saturday, November 2, 2013 at 4:35pm Thinking Mathematically - Geometry I'm just stuck. Don't know exactly where to start. Saturday, November 2, 2013 at 4:35pm Thinking Mathematically - Geometry if you just want confirmation of your hard work, what did you get? If you got stuck, how far did you get? Did you draw a diagram? I think you'll see some more 3-4-5 triangles there. Saturday, November 2, 2013 at 4:32pm Thinking Mathematically - Geometry A rocket ascends vertically after being launched from a location that is midway between two ground-based tracking stations. When the rocket reaches an altitude of 4 kilometers, it is 5 kilometers from each of the tracking stations. Assuming that this is a locale where the ... Saturday, November 2, 2013 at 4:30pm Thinking Mathematically - Geometry the length x of each cable is given by x^2 = 9^2 + 12^2 Think 3-4-5 triangle and multiply by 3. Saturday, November 2, 2013 at 4:19pm Thinking Mathematically - Geometry A flagpole has a height of 16 yards. It will be supported by three cables, each of which is attached to the flagpole at a point 4 yards below the top of the pole and attached to the ground at a point that is 9 yards from the base of the pole. Find the total number of feet of ... Saturday, November 2, 2013 at 4:13pm Use cosine (let x = angle): cos (x) = 3/12 Solve for x. Thursday, October 31, 2013 at 11:31pm C(-3,-1), P(-3,2). r = 2 - (-1) = 3. Thursday, October 31, 2013 at 8:48pm A ladder 12 m in length rests against a wall. The foot of the ladder is 3 m from the wall. What is the measure of the angle the ladder forms with the floor? 
Thursday, October 31, 2013 at 8:07pm
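The Pythagorean answers above (the flagpole cables and the 3-4-5 pattern behind them) can be checked directly; the numbers below are taken from the flagpole question in this thread (16-yard pole, cables attached 4 yards below the top, anchors 9 yards out, answer requested in feet):

```python
import math

height = 16 - 4                  # cable attaches 4 yards below the 16-yard top
base = 9                         # ground anchor is 9 yards from the pole's base
cable = math.hypot(base, height) # one cable: a 3-4-5 triangle scaled by 3, i.e. 15 yards
total_yards = 3 * cable          # three cables
total_feet = total_yards * 3     # 3 feet per yard, as the question asks for feet
print(cable, total_feet)         # 15.0 yards per cable, 135.0 feet in all
```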
Norwood, NJ Math Tutor Find a Norwood, NJ Math Tutor ...Throughout his career Ken has been particularly good at helping people understand basic principles in chemistry, physics and math and how they are and can be applied to everyday life. Teaching and communications skills were critical in reporting research results, selling products and technology ... 12 Subjects: including trigonometry, differential equations, precalculus, algebra 1 ...Many of my math courses are posted on Youtube by some colleges to help students learn individually. Also, because of my pedagogical tact some people asked me to help them learn and speak French which is my first language. I do it correctly such that learners can speak French perfectly after only three months. 11 Subjects: including SAT math, GMAT, probability, algebra 1 ...Even more exciting is watching that light-bulb go on when I am tutoring. I have experience tutoring students of all ages, (elementary through graduate school) in many subject areas - although my real passion is math. I have a BA in statistics from Harvard and will be starting nursing school shortly. 18 Subjects: including algebra 1, algebra 2, biology, chemistry ...Recently, most of my clients have been teachers. Many of them have struggled with a math course at some point, and my tutoring assistance helped them greatly towards earning their Master's Degree. I have shown proficiency in math since my youth. 22 Subjects: including linear algebra, nutrition, ACT Science, algebra 1 Thank you for taking the time to look into my profile. I am currently a third grade teacher. In the past four years, I have taught a plethora of subjects and levels including tenth grade math, and third, fourth, and fifth grade. 
14 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
Grade 7 NYSAA » Extension/AGLI Questions & Explanations » Grade 7 Grade 7: Q:Can the task be assessed with 2 different topics for each VE? The example stated one topic AT 70421 states, " The student will identify the main idea and supporting detail in diverse media, when presented with two or more formats (e.g., the student identifies the main idea and supporting detail, when presented with a newspaper, radio, and Internet blog on the same topic). (AT74121)" A: It would be acceptable to demonstrate the specified skill using different topics. Please be reminded that the information following the "e.g." is provided as an example of what the evidence might look like. It is not required. Q:Grade 7, ELA SL 7.2 (74111), can a main idea be represented in a singe picture from a set of two choices. Specific example, a newspaper article about a weather, the choices were weather conditions "sunny, rainy, etc". The student would select the picture which represents the main idea or weather condition discussed in the newspaper article. I do not believe the selection of the weather condition alone meets the intent of the Extension but wanted to check. I think the student is selecting the topic of the weather article but not main idea. Am I looking at this too strictly. A:AT74111 states, " The student will recognize the main idea, when presented with two or more media formats on the same topic (e.g., the student recognizes the main idea from a set of choices, when presented with two advertisements for the same product from different media). (AT74111)". The evidence you have described above does not appear to meet the intent of the Extension and Assessment Task. The student would be expected to recognize the main idea when presented with a choice of at least two media formats. Formats could include newspaper, internet resources, radio, TV, etc. Q: Can calculators be used at Grade 7 where calculation skills are required? Specifically Grade 7, 7EE (70831) solve equations. 
I do not believe this is permitted but I was asked to confirm. A: As indicated on page 23 of the NYSAA manual, we do not allow the use of a calculator nor arithmetic tables for grades 3-5, however it would be allowed for grade 7 math. This brings NYSAA into alignment with the Grade-3-8 general math assessments Q: Math AT50311B states, “the student will represent a fraction as a division problem using a model” It appears the evidence described would meet the intent of the Extension and Assessment Task Would it be acceptable for a student to choose a shape meeting the defined conditions e.g. a shape with four equal sides and four 90• angles from a selection containing rectangles and other shapes or does the student have to physically create it by drawing or building it with blocks etc. ? We are questioning what the word "produce" requires. When we look at the essence of the cluster it seems to support accepting recognition of the characteristics more than requiring physical creation. A:AT70511A states, "The student will produce a geometric shape based on a given condition. (AT70511A)" Please refer to the guidelines for "compose" in the ELA glossary. Q:Grade 7, mathematicsAT70711A and AT70721A These tasks appear identical except for the examples. We note that the example for AT70721A is more complex since it includes a mix of fractions and decimals. Is such a mix required by this task or is it simply "an example" of one possible way? A: Please refer to the examples give with each Extension. The intent of Extension 70711 (and the Assessment Tasks aligned to it) was to add and/or subtract fractions to fractions, decimals to decimals, etc. The intent of 70721 (and the Assessment Tasks aligned to it) was to add and/or subtract across fractions, decimals and percentages – bringing the conversion piece into the skill. Q: extension 70521 in 7th grade, assessment task 70521B (Page 19). 
The task reads “the student will identify a relationship between two or more closed geometric figures (e.g. given a set of shapes, the student identifies which two shapes are congruent) Are we asking the student to identify the congruent shapes/figures, or is it necessary for the student to tell us the relationship (that the shapes/figures are congruent). We had one teacher that wanted to give a student an array of shapes/figures to choose the congruent ones. We had another teacher that wanted to give the student a set of shapes/figures and ask are these congruent A:intent of this AT is for the student to identify the relationship. So in other words, given a set of geometric figures the student identifies what relationship they have. The example (e.g.) uses congruent, but other relationships could be used as well. Q: grade 7 math: ·The student will produce a geometric shape based on a given condition. (AT70511A)What does given condition mean? A: Please refer to the examples (e.g.) provided for the Assessment Tasks. The given conditions would be the side lengths (dimensions) or number of angles (measurement of angles). Q: ELA, RI.7.1 and RI.7.8 On the seventh grade ELA, RI 7.1 and RI 7.8 both explicitly list text in them. The examples all use visual items, such as pictures or text. Can we also use auditory information, like e-books or CDs? The teacher who works with these kids is worried, because her understanding, after the training, was that they had to use text. Some of my kids can't see pictures or text well enough to comprehend them and they can't read braille due to physical issues, like CP. They are auditory learners and have this stated in their IEPs and in their functional vision assessments. A: Text does not need to be actual print words, images, or braille. When an Extension or AGLI requires the use of text, the text can be presented to the student in his/her typical method of interacting with a text. 
Q: Grade 7 ELA, SL.7.2, AT74111Recognize main idea in diverse media formats (74111). Can Scholastic News and News 2 You be considered media formats? A: If the format of Scholastic News and News 2 vary enough, they could be used. However, if they are both considered newspapers it would not meet the intent of the Extension for “two or more media Q: ELA AT73211A - verbal students: What was your point of view on this book? I felt..., I thought...., I liked... Nonverbal student able to make 2 choice inconsistently (lowest NYSAA level): same question, choice of "I felt" or "I liked" if I felt, give 2 emotion choices (the book made me happy, the book made me scared) if assessing “I liked”, give parts of story for choice. Also, how do we do math 7 pg 19 - produce a geometric shape - for those very low students who are only able to identify or choose. A: The student’s method of response remains flexible in the new NYSAA. A teacher can set up an activity in a manner that allows the student to select a response from a set of choices. This is how the student can “produce”. Your suggestions are on track for adjusting the presentation of the task in a way that is appropriate for the student. Q: ELA AT73211A Worksheet presented to student includes question “Did you like this book?”. Does this VE connect to the task? A: In order to ensure a clear connection between the Verifying Evidence and the Assessment Task, we would recommend using the vocabulary from the task in what the student is being asked. In other words, instead of asking “did you like this book” the student should be asked what their personal point of view about the book was. They might respond, “I felt the book was …” or “I thought the book Grade 7 ELA, AT74131A The student will explain how the detail supports and/or clarifies the main idea. Would it be probable for the teacher to pose a list of various details, identify the main idea and then pose the question as listed (see below)? 1. 
Supporting Details: ● Gasoline ● Brakes ● Driver’s Seat ● Passenger’s Seat ● Steering Wheel ● Gas Pedal ● Windshield Wipers ● Seatbelts ● Radio ● Rear View Mirror Main Idea: Car (Automobile) How did the list of supporting details of the car help you to understand the main idea? _______________________ Is this too guiding to the student, and should the question be posed in an alternate manner? Any suggestions as to how to improve alignment to the task would be greatly appreciated. A:This would be fine for some students. The intent is getting the student to determine how all the details come together in a certain way to reach the main idea Grade 7, Math, AT71011A, When predicting probability, can the question be a yes/no question? Example – when given one purple marble and two red marbles, can you draw 2 purple marbles? Yes or No? A: yes Grade 7, 7.NS, Extension 70711 and 70721 have a question about 7th grade math, number system. The student is supposed to add fractions, decimals, and/or percentages. Can the student use a calculator to add decimals and can the decimals be prices for items the student will purchase? A: Yes. The use of a calculator is not allow for grades 3-5, but students assessed at grade 7 are allowed to use a calculator Mrs. Krawczyk- Special Education Coach
Plot the histogram of loge(area) to show that it is approximately Normal. May 17th 2011, 01:33 AM Plot the histogram of loge(area) to show that it is approximately Normal. I am not clearly understanding this specific question. if you cld please explain. I am not providing the data. If we take the natural log of the area column of the forest fire data, we find that it is approximately Normally distributed. We will call this set of values loge(area). 1. Plot the histogram of loge(area) to show that it is approximately Normal. Do i just do a histogram? is the loge exchanged to represent forest fire data? 2. Compute the mean and standard deviation of loge(area). This is pretty straight fwd but must check. The term 'compute' does that mean anything specifically? is that an estimate with a similar resulting answer?? or they just want straight up whatever the mean and 's' is? 3. Assuming that loge(area) follows a Normal distribution with a population mean and population standard deviation equal to the mean and standard deviation from part (i), find the probability that a randomly sampled value from the population of loge(area) will be between 0 and 2. 5. Is there a difference in the previous two answers? Explain why. Thanks Heaps if you get this!!! :) May 17th 2011, 03:27 AM I dont understand the statement "loge exchanged to represent forest fire data". My interpretation of your question is that you should find the log of each data point and draw a histogram of the results. "Compute" means "do" or "calculate" i think. you dont seem to have asked any questions about 3 or 5. May 17th 2011, 03:59 AM hi, thanks for replying. for question 1. i dont think it needs to be converted in logarithms because we havent learnt log all semester. for question 3. i dont understand the log part and also what it means from 0 and 2. very confusing... there is 150 data entries. ranginging stastiscally lets say from 0 to 250. for Q3. do they just want whatever the data is ranging from 0 to 2. 
in that case there is 31 data entries from 0 to 2 ??? May 17th 2011, 04:50 AM I am not clearly understanding this specific question. if you cld please explain. I am not providing the data. If we take the natural log of the area column of the forest fire data, we find that it is approximately Normally distributed. We will call this set of values loge(area). 1. Plot the histogram of loge(area) to show that it is approximately Normal. Do i just do a histogram? is the loge exchanged to represent forest fire data? Yes you just plot the histogram of $log_e$ of the data 2. Compute the mean and standard deviation of loge(area). This is pretty straight fwd but must check. The term 'compute' does that mean anything specifically? is that an estimate with a similar resulting answer?? or they just want straight up whatever the mean and 's' is? It just means calculate the mean and standard deviation. 3. Assuming that loge(area) follows a Normal distribution with a population mean and population standard deviation equal to the mean and standard deviation from part (i), find the probability that a randomly sampled value from the population of loge(area) will be between 0 and 2. 5. Is there a difference in the previous two answers? Explain why. No answer to 5 as you have not posted 4. May 25th 2011, 03:59 AM What do you do for the last few questions? 3. Count the fraction of samples from the forest fire data where loge(area) lies between 0 and 2. 4. Assuming that loge(area) follows a Normal distribution with a population mean and population standard deviation equal to the mean and standard deviation from part (i), find the probability that a randomly sampled value from the population of loge(area) will be between 0 and 2. 5. Is there a difference in the previous two answers? Explain why.
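The forest-fire data itself isn't reproduced in the thread, so as a sketch with a synthetic stand-in sample (the real assignment would load its 150 area values instead), the steps asked about — the log-transform, the mean and standard deviation (part 2), the empirical fraction in [0, 2] (part 3), and the Normal-model probability (part 4) — look like:

```python
import math
import random
import statistics

random.seed(1)
# Stand-in data: 150 positive "burned areas"; replace with the real column.
areas = [random.lognormvariate(1.0, 1.5) for _ in range(150)]

log_area = [math.log(a) for a in areas]   # loge(area)
mu = statistics.mean(log_area)            # part 2: sample mean
sigma = statistics.stdev(log_area)        # part 2: sample standard deviation

# Part 3: empirical fraction of samples with 0 <= loge(area) <= 2.
frac = sum(0 <= v <= 2 for v in log_area) / len(log_area)

# Part 4: model probability P(0 < X < 2) for X ~ Normal(mu, sigma),
# using the Normal CDF written in terms of erf.
def ncdf(x, m, s):
    return 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))

prob = ncdf(2, mu, sigma) - ncdf(0, mu, sigma)
print(round(frac, 3), round(prob, 3))  # part 5: close but not identical
```

For part 1, plotting `matplotlib.pyplot.hist(log_area)` on the transformed values is all that's needed; the two numbers printed differ slightly because one is an empirical sample proportion and the other comes from a fitted Normal model.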
How to Determine Average Costs in Managerial Economics As a manager you frequently want to know the cost per unit associated with producing a good, because you can use this information to establish your product’s price and determine your profit per unit. If you want to know cost per unit, average cost is what you need to know. Average total cost Average total cost (ATC) is the total cost per unit of output. It’s determined by dividing the total cost equation by the quantity of output: ATC = TC/q, where q is the quantity of output produced. Marginal cost always intersects the minimum point on the average total cost curve. Thus, average total cost initially decreases, and then begins to increase, resulting in a U-shaped curve. Average total cost has two components — average fixed cost and average variable cost. As an equation, ATC = AFC + AVC. Average fixed cost Average fixed cost (AFC) is fixed cost per unit of output and is determined by dividing total fixed cost by the quantity of output. In the total cost equation, total fixed cost is $5,600, so average fixed cost is AFC = 5,600/q. Because the numerator of average fixed cost is a constant, while the denominator is the quantity of output produced, average fixed cost always decreases as the quantity of output produced increases. Average variable cost Average variable cost (AVC) represents variable cost per unit of output and equals total variable cost divided by the quantity of output: AVC = TVC/q. Typically, average variable cost initially decreases, and then begins to increase, resulting in a U-shaped curve. Marginal cost intersects the minimum point on the average variable cost curve. The illustration shows the average total cost, average fixed cost, average variable cost, and marginal cost curves.
Note that the average fixed cost curve is always decreasing, and also note that the difference between average total cost and average variable cost is average fixed cost, so AFC_b at q_b is less than AFC_a at q_a. As you produce more output, average variable cost and average total cost get closer to one another. Finally, marginal cost intersects the minimum points on the average variable and average total cost curves.
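The article's specific total-cost equation isn't reproduced in this copy, so as a sketch with an assumed cubic cost function TC(q) = 5600 + 100q − 3q² + 0.1q³ (the $5,600 fixed cost is from the article; the variable-cost terms are made up), the average-cost measures work out as:

```python
def tc(q):
    # Assumed total cost: the article's $5,600 fixed cost plus a made-up variable part.
    return 5600 + 100*q - 3*q**2 + 0.1*q**3

def afc(q):
    return 5600 / q               # AFC = TFC / q, always falling as q grows

def avc(q):
    return (tc(q) - 5600) / q     # AVC = TVC / q

def atc(q):
    return tc(q) / q              # ATC = TC / q, which equals AFC + AVC

q = 20
print(afc(q), avc(q), atc(q))     # ATC decomposes into AFC + AVC at any q
```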
On Common Solutions for Fixed-Point Problems of Two Infinite Families of Strictly Pseudocontractive Mappings and the System of Cocoercive Quasivariational Inclusions Problems in Hilbert Spaces International Journal of Mathematics and Mathematical Sciences Volume 2011 (2011), Article ID 691839, 32 pages Research Article On Common Solutions for Fixed-Point Problems of Two Infinite Families of Strictly Pseudocontractive Mappings and the System of Cocoercive Quasivariational Inclusions Problems in Hilbert Spaces Faculty of Science, Maejo University, Chiangmai 50290, Thailand Received 24 February 2011; Accepted 14 June 2011 Academic Editor: Vittorio Colao Copyright © 2011 Pattanapong Tianchai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper is concerned with a common element of the set of common fixed points for two infinite families of strictly pseudocontractive mappings and the set of solutions of a system of cocoercive quasivariational inclusions problems in Hilbert spaces. The strong convergence theorem for the above two sets is obtained by a novel general iterative scheme based on the viscosity approximation method, and applicability of the results has shown difference with the results of many others existing in the current literature. 1. Introduction Throughout this paper, we always assume that is a nonempty closed-convex subset of a real Hilbert space with inner product and norm denoted by and , respectively, and denotes the family of all the nonempty subsets of . Let be a single-valued nonlinear mapping and a set-valued mapping. We consider the following quasivariational inclusion problem, which is to find a point such that where is the zero vector in . The set of solutions of the problem (1.1) is denoted by . As special cases of the problem (1.1), we have the following. 
(i)If , where is a proper convex lower semicontinuous function such that is the set of real numbers, and is the subdifferential of , then the quasivariational inclusion problem (1.1) is equivalent to find such that which is called the mixed quasivariational inequality problem (see [ 1]). (ii)If , where is the indicator function of , that is, then the quasivariational inclusion (1.1) is equivalent to find such that which is called Hartman-Stampacchia variational inequality problem (see [2–4]). Recall that is the metric projection of onto , that is, for each , there exists the unique point in such that A mapping is called nonexpansive if and the mapping is called a contraction if there exists a constant such that A point is a fixed point of provided . We denote by the set of fixed points of , that is, . If is bounded, closed, and convex and is a nonexpansive mapping of into itself, then is nonempty (see [5]). Recall that a mapping is said to be (i)monotone if (ii)-Lipschitz continuous if there exists a constant such that if , then is a nonexpansive, (iii)pseudocontractive if (iv)-strictly pseudocontractive if there exists a constant such that and it is obvious that is a nonexpansive if and only if is a 0-strictly pseudocontractive, (v)-strongly monotone if there exists a constant such that (vi)-inverse-strongly monotone (or -cocoercive) if there exists a constant such that if , then is called that firmly nonexpansive; it is obvious that any -inverse-strongly monotone mapping is monotone and (1/)-Lipschitz continuous, (vii)relaxed -cocoercive if there exists a constant such that (viii)relaxed -cocoercive if there exists two constants such that and it is obvious that any -strongly monotonicity implies to the relaxed -cocoercivity. The existence common fixed points for a finite family of nonexpansive mappings have been considered by many authors (see [6–9] and the references therein). 
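The displayed formulas for the operator classes listed above were lost in this copy of the paper. Reconstructed from the standard definitions in the fixed-point literature (a transcription of the usual forms, not a quotation of this particular paper), for mappings A, T : C → H and all x, y ∈ C they read:

```latex
\begin{align*}
\text{monotone:} \quad & \langle Ax - Ay,\; x - y\rangle \ge 0,\\
\text{$L$-Lipschitz continuous:} \quad & \|Ax - Ay\| \le L\,\|x - y\|,\\
\text{pseudocontractive:} \quad & \|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2,\\
\text{$k$-strictly pseudocontractive:} \quad & \|Tx - Ty\|^2 \le \|x - y\|^2 + k\,\|(I - T)x - (I - T)y\|^2,\\
\text{$\eta$-strongly monotone:} \quad & \langle Ax - Ay,\; x - y\rangle \ge \eta\,\|x - y\|^2,\\
\text{$\mu$-inverse-strongly monotone:} \quad & \langle Ax - Ay,\; x - y\rangle \ge \mu\,\|Ax - Ay\|^2,\\
\text{relaxed $\gamma$-cocoercive:} \quad & \langle Ax - Ay,\; x - y\rangle \ge -\gamma\,\|Ax - Ay\|^2,\\
\text{relaxed $(\gamma, r)$-cocoercive:} \quad & \langle Ax - Ay,\; x - y\rangle \ge -\gamma\,\|Ax - Ay\|^2 + r\,\|x - y\|^2.
\end{align*}
```

On the same convention, the resolvent operator of Definition 1.1 takes the usual form $J_{M,\lambda}(u) = (I + \lambda M)^{-1}(u)$ for $u \in H$ and $\lambda > 0$.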
In this paper, we study the mapping defined by where is nonnegative real sequence in , for all , from a family of infinitely nonexpansive mappings of into itself. It is obvious that is a nonexpansive of into itself, such a mapping is called a -mapping generated by and . A typical problem is to minimize a quadratic function over the set of fixed points of a nonexpansive mapping in a real Hilbert space , where is a bounded linear operator on , is the fixed-point set of a nonexpansive mapping on , and is a given point in . Recall that is a strongly positive bounded linear operator on if there exists a constant such that Marino and Xu [10] introduced the following iterative scheme based on the viscosity approximation method introduced by Moudafi [11]: where , is a strongly positive bounded linear operator on , is a contraction on , and is a nonexpansive on . They proved that under some appropriateness conditions imposed on the parameters, if , then the sequence generated by (1.19) converges strongly to the unique solution of the variational inequality which is the optimality condition for the minimization problem where is a potential function for (i.e., for ). Iiduka and Takahashi [12] introduced an iterative scheme for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality (1.4) as in the following theorem. Theorem IT. Let be a nonempty closed-convex subset of a real Hilbert space . Let be an -inverse-strongly monotone mapping of into , and let be a nonexpansive mapping of into itself such that . Suppose that and is the sequence defined by for all , where and such that satisfying the following conditions: (C1) and ,(C2) and , then converges strongly to . Definition 1.1 (see [13]). Let be a multivalued maximal monotone mapping, then the single-valued mapping defined by , for all , is called the resolvent operator associated with , where is any positive number, and is the identity mapping. 
Recently, Zhang et al. [13] considered the problem (1.1). To be more precise, they proved the following theorem. Theorem ZLC. Let be a real Hilbert space, let be an -inverse-strongly monotone mapping, let be a maximal monotone mapping, and let be a nonexpansive mapping. Suppose that the set , where is the set of solutions of quasivariational inclusion (1.1). Suppose that and is the sequence defined by for all , where and satisfying the following conditions: (C1) and ,(C2), then converges strongly to . Peng et al. [14] introduced an iterative scheme for all , where , is an -cocoercive mapping on H, is a contraction on , is a nonexpansive on , is a maximal monotone mapping of into , and is a bifunction from into . We note that their iteration is well defined if we let , and the appropriateness of the control conditions and of their iteration should be and (see Theorem3.1 in [14]). They proved that under some appropriateness imposed on the other parameters, if , then the sequences , , and generated by (1.24) converge strongly to of the variational inequality where is the set of solutions of equilibrium problem defined by Moreover, Plubtieng and Sriprad [15] introduced an iterative scheme for all , where , is a strongly bounded linear operator on , is an -cocoercive mapping on H, is a contraction on , is a nonexpansive on , is a maximal monotone mapping of into , and is a bifunction from into . We note that the appropriateness of the control conditions and of their iteration should be and (see Theorem 3.2 in [15]). They proved that under some appropriateness imposed on the other parameters, if , then the sequences , , and generated by (1.27) converge strongly to . 
On the other hand, Li and Wu [16] introduced an iterative scheme for finding a common element of the set of fixed points of a -strictly pseudocontractive mapping with a fixed point and the set of solutions of relaxed cocoercive quasivariational inclusions as follows: for all , where , is a strongly positive bounded linear operator on , is a contraction on , is a mapping on defined by for all , such that is a -strictly pseudocontractive mapping on with a fixed point, is relaxed cocoercive and Lipschitz continuous mappings on , and is a maximal monotone mapping of into . They proved that under the missing condition of , which should be (see Theorem2.1 in [16]) and some appropriateness imposed on the other parameters, if , then the sequence generated by (1.28) converges strongly to . Very recently, Tianchai and Wangkeeree [17] introduced an implicit iterative scheme for finding a common element of the set of common fixed points of an infinite family of a -strictly pseudocontractive mapping and the set of solutions of the system of generalized relaxed cocoercive quasivariational inclusions as follows: for all , where , is a strongly positive bounded linear operator on , is a contraction on , is a -mapping on generated by and such that for all , is a -strictly pseudocontractive mapping on with a fixed point, is a maximal monotone mapping of into , and , are two mappings of relaxed cocoercive and Lipschitz continuous mappings on for each . They proved that under some appropriateness imposed on the parameters, if such that the mapping defined by then the sequence generated by (1.29) converges strongly to . In this paper, we introduce a novel general iterative scheme (1.32) below by the viscosity approximation method to find a common element of the set of common fixed points for two infinite families of strictly pseudocontractive mappings and the set of solutions of a system of cocoercive quasivariational inclusions problems in Hilbert spaces. 
Firstly, we introduce a mapping , where is a -mapping generated by and for solving a common fixed point for two infinite families of strictly pseudocontractive mappings by iteration such that the mapping defined by for all , where and are two infinite families of and -strictly pseudocontractive mappings with a fixed point, respectively, and for some . It follows that a linear general iterative scheme of the mappings and is obtained as follows: for all , where , is a maximal monotone mapping, is a cocoercive mapping for each , is a contraction mapping, and are two mappings of the strongly positive linear bounded self-adjoint operator mappings. As special cases of the iterative scheme (1.32), we have the following. (i)If for all , then (1.32) is reduced to the iterative scheme (ii)If , then (1.32) is reduced to the iterative scheme (iii)If for all , then (1.34) is reduced to the iterative scheme (iv)If for all , then (1.34) is reduced to the iterative scheme (v)If for all , then (1.36) is reduced to the iterative scheme (vi)If and , then (1.37) is reduced to the iterative scheme (vii)If for each and , then (1.32) is reduced to the iterative scheme Furthermore, if for all , then the mapping in (1.31) is reduced to for all . It follows that the iterative scheme (1.32) is reduced to find a common element of the set of common fixed points for an infinite family of strictly pseudocontractive mappings and the set of solutions of a system of cocoercive quasivariational inclusions problems in Hilbert spaces. 
It is well known that the class of strictly pseudocontractive mappings contains the class of nonexpansive mappings; it follows that if the mapping is defined as (1.31) and , then the iterative scheme (1.32) is reduced to find a common element of the set of common fixed points for two infinite families of nonexpansive mappings and the set of solutions of a system of cocoercive quasivariational inclusions problems in Hilbert spaces, and if the mapping is defined as (1.40) and , then the iterative scheme (1.32) is reduced to find a common element of the set of common fixed points for an infinite family of nonexpansive mappings and the set of solutions of a system of cocoercive quasivariational inclusions problems in Hilbert spaces. We suggest and analyze the iterative scheme (1.32) above under some appropriateness conditions imposed on the parameters, the strong convergence theorem for the above two sets is obtained, and applicability of the results has shown difference with the results of many others existing in the current literature. 2. Preliminaries We collect the following lemmas which are used in the proof for the main results in the next section. Lemma 2.1. Let be a nonempty closed-convex subset of a Hilbert space then the following inequalities hold: Lemma 2.2 (see [10]). Let be a Hilbert space, let be a contraction with coefficient , and let be a strongly positive linear bounded operator with coefficient , then (1)if , then (2)if , then . Lemma 2.3 (see [18]). Assume that is a sequence of nonnegative real numbers such that where is a sequence in and is a sequence in such that (1) and ,(2) or ,then . Lemma 2.4 (see [9]). 
Let be a nonempty closed-convex subset of a Hilbert space , define mapping as (1.16), let be a family of infinitely nonexpansive mappings with , and let be a sequence such that , for all , then (1) is nonexpansive and for each ,(2)for each and for each positive integer , exists,(3)the mapping defined by is a nonexpansive mapping satisfying , and it is called the -mapping generated by and . Lemma 2.5 (see [13]). The resolvent operator associated with is single-valued and nonexpansive for all . Lemma 2.6 (see [13]). is a solution of quasivariational inclusion (1.1) if and only if , for all , that is, Lemma 2.7 (see [19]). Let be a nonempty closed-convex subset of a strictly convex Banach space . Let be a sequence of nonexpansive mappings on . Suppose that . Let be a sequence of positive real numbers such that , then a mapping on defined by for , is well defined, nonexpansive, and holds. Lemma 2.8 (see [2]). Let be a nonempty closed-convex subset of a Hilbert space and a nonexpansive mapping, then is demiclosed at zero. That is, whenever is a sequence in weakly converging to some and the sequence strongly converges to some , it follows that . Lemma 2.9 (see [20]). Let be a nonempty closed-convex subset of a real Hilbert space and a -strict pseudocontraction. Define by for each , then, as , S is a nonexpansive such that . 3. Main Results Lemma 3.1. Let be a nonempty closed-convex subset of a real Hilbert space , and let be two mappings of and -strictly pseudocontractive mappings with a fixed point, respectively. Suppose that and define a mapping by where such that , then is well defined, nonexpansive, and . Proof. Define the mappings as follows: for all . By Lemma 2.9, we have and as nonexpansive such that and . Therefore, for all , we have It follows from Lemma 2.7 that is well defined, nonexpansive, and . Theorem 3.2. Let be a real Hilbert space, let be a maximal monotone mapping, and let be a -cocoercive mapping for each . 
Let be two mappings of the strongly positive linear bounded self-adjoint operator mappings with coefficients such that and , respectively, and let be a contraction mapping with coefficient . Let and be two infinite families of and -strictly pseudocontractive mappings with a fixed point such that , respectively. Define a mapping by for all , where such that . Let be a -mapping generated by and such that , for some . Assume that and . For , suppose that is generated iteratively by for all , where , such that , , and for each satisfying the following conditions: (C1),(C2) and ,(C3) and ,(C4), , and ,(C5) and , then the sequences and converge strongly to where is a unique solution of the variational inequality Proof. From , for all , (C1) and (C2), we have , as and . Thus, we may assume without loss of generality that for all . For any and for each , by the -cocoercivity of , we have which implies that is a nonexpansive. Since and are two mappings of the linear bounded self-adjoint operators, we have Observe that Therefore, we obtain that is positive. Thus, by the strong positivity of and , we get Define the sequences of mappings and as follows: for all . Firstly, we prove that has a unique fixed point in . Note that for all , by (3.11), (C3), the nonexpansiveness of , and , we have Therefore, is a nonexpansive. It follows from (3.10), (3.11), (3.12), the contraction of , and the linearity of and that Hence, is a contraction with coefficient . Therefore, Banach contraction principle guarantees that has a unique fixed point in , and so the iteration (3.5) is well defined. Next, we prove that is bounded. Pick . Therefore, by Lemma 2.6, we have for each . By (3.14), the nonexpansiveness of , and , we have Let . By (3.14), (C3), the nonexpansiveness of , and , we have Since , where , and are two infinite families of and -strict pseudocontractions with a fixed point, respectively, such that ; therefore, by Lemma 3.1, we have that is a nonexpansive and for all . 
It follows from Lemma 2.4(1) that we get , which implies that . Hence, by (3.16) and the nonexpansiveness of , we have By (3.10), (3.17), the contraction of , and the linearity of and , we have It follows from induction that for all . Hence, is bounded, and so are , , , , , , and . Next, we prove that as . By (C3), the nonexpansiveness of , and , we have By the nonexpansiveness of and , we have for some constant such that . Therefore, from (3.21), by the nonexpansiveness of , we have Since combining (3.20), (3.22), and (3.23), we have By the linearity of and , we have Therefore, by (3.10), (3.24), (3.25), and the contraction of , we have where and such that By (C1), (C3), (C4), and (C5), we can find that , , and ; therefore, by (3.26) and Lemma 2.3, we obtain Next, we prove that as . By the linearity of , we have It follows that Hence, by (C1), (C2), (3.29), and (3.31), we have Since therefore, by (3.29) and (3.32), we obtain For all , by Lemma 2.2(2), the nonexpansiveness of , the contraction of , and the linearity of , we have Therefore, is a contraction with coefficient ; Banach contraction principle guarantees that has a unique fixed point, say , that is, . Hence, by Lemma 2.1(1), we obtain Next, we claim that To show this inequality, we choose a subsequence of such that Since
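The inline formulas in this extract were stripped during conversion, but the schemes under discussion are viscosity-type iterations of the form x_{n+1} = α_n f(x_n) + (1 − α_n) T x_n, with T nonexpansive and f a contraction, converging to a fixed point of T. A minimal numerical sketch (the rotation T, the contraction f, and the step sizes α_n = 1/(n+1) are illustrative choices, not the paper's):

```python
import numpy as np

# Nonexpansive map T on R^2: a rotation about the origin, so Fix(T) = {0}.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

# Contraction f used for the viscosity term (contraction coefficient 1/2).
f = lambda x: 0.5 * x + np.array([1.0, -1.0])

x = np.array([5.0, 3.0])
for n in range(1, 20001):
    alpha = 1.0 / (n + 1)      # alpha_n -> 0 while sum(alpha_n) diverges
    x = alpha * f(x) + (1 - alpha) * T(x)

print(np.linalg.norm(x))       # small: the iterates approach Fix(T) = {0}
```

With these step sizes the iterates drift toward the unique fixed point of T, which is what the strong convergence theorems above guarantee in far greater generality.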
Gram Help : Videos | Worksheets | Word Problems in 4th Grade Math. in 5th Grade Math. in 6th Grade Math. in Basic Math. Many students find gram difficult. They feel overwhelmed with gram homework, tests and projects. And it is not always easy to find a gram tutor who is both good and affordable. Now finding gram help is easy. For your gram homework, gram tests, gram projects, and gram tutoring needs, TuLyn is a one-stop solution. You can master hundreds of math topics by using TuLyn. At TuLyn, we have over 2000 math video tutorial clips including gram videos, gram practice word problems, gram questions and answers, and gram worksheets. Gram videos replace text-based tutorials and give you better step-by-step explanations of gram. Watch each video repeatedly until you understand how to approach gram problems and how to solve them. • Hundreds of video tutorials on gram make it easy for you to better understand the concept. • Hundreds of word problems on gram give you all the practice you need. • Hundreds of printable worksheets on gram let you practice what you have learned by watching the video tutorials. How to do better on gram: TuLyn makes gram easy. 
• Conversion Of Metric Units Grams To Milligrams Video Clip 
• Conversion Of Metric Units Kilograms To Grams Video Clip 
• Conversion Of Metric Units Centigrams To Grams Video Clip 
• Conversion Of Metric Units Grams To Decigrams Video Clip 
• Conversion Of Metric Units Decigrams To Grams Video Clip 
• Conversion Of Metric Units Milligrams To Grams Video Clip 
Post a homework question on gram: How Others Use Our Site 
Printable math worksheets for my summer school program hopefully. Enhance my math program giving students more practice. I saw a sheet that another teacher assigned to her class from you site and thought there would be other good activities to supplement my 5th grade Math program. This is to help my son prepare for an industry program requiring trigonometry. I homeschool my 3 children. They had skipping grades and put into gifted programs. I just wanted them to learn naturally and be away from the compettition. Also, I have decided to attend an online school and I need extra help. Examples and illustrations. Supplement my current program with practice sheets and video explanations. The ability to use this site as a tutor based program will help my students that need additional help in some areas. I am studying for a test to enter a master program and need a refresher in basic math. I am currently trying to get my math certification through the ABCTE Program. I am just trying to get some help on my trouble spots. It will help supplement my math program. Providing access to math worksheets to suplement our math program. Our math program lacks practice examples for the kids to use prior to quizzes and tests. I`m hoping to find usable practice sheets on your site. It will provide me with more practice for my Math program! I`m enrolled into a college program to obtain an AA Degree in Health Care. My curriculum involves Pre-Algebra, Intermediate Algebra, and College Algebra. I have never taken an Algebra class before. Algebra scares me and I want to conquer this fear. I think with your worksheet and tutorials, this will help me excell in my classes. In my business study program. I teach in the after school program. Students will listen to the explain and see it, also. I will explore the site to see what else that I can use the site for. 
I will forward this to other math 
Watch video clips on metric units of mass, print metric units of mass worksheets, practice metric units of mass word problems.
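The conversions in the clip list above all share one pattern: rescale through a single base unit. A minimal sketch (the unit table below is just the standard metric factors):

```python
# Mass units expressed in grams; convert by rescaling through the base unit.
UNITS_IN_GRAMS = {
    "milligram": 0.001,
    "centigram": 0.01,
    "decigram": 0.1,
    "gram": 1.0,
    "kilogram": 1000.0,
}

def convert(value, src, dst):
    """Convert `value` from unit `src` to unit `dst`."""
    return value * UNITS_IN_GRAMS[src] / UNITS_IN_GRAMS[dst]

print(convert(2, "kilogram", "gram"))     # 2000.0
print(convert(500, "milligram", "gram"))  # 0.5
```

The same two-step idea (to base unit, then to target unit) works for any family of units related by constant factors.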
Optimal metropolis algorithms for product measures on the vertices of a hypercube. 
Roberts, G. O. (1997) Optimal metropolis algorithms for product measures on the vertices of a hypercube. Stochastics, 62 (3 & 4). pp. 275-284. 
Full text not available from this repository. 
Optimal scaling problems for high dimensional Metropolis-Hastings algorithms can often be solved by means of diffusion approximation results. These solutions are particularly appealing since they can often be characterised in terms of a simple, observable property of the Markov chain sample path, namely the overall proportion of accepted iterations for the chain. For discrete state space problems, analogous scaling problems can be defined, though due to discrete effects, a simple characterisation of the asymptotically optimal solution is not available. This paper considers the simplest possible (and most discrete) example of such a problem, demonstrating that, at least for sufficiently 'smooth' distributions in high dimensional problems, the Metropolis algorithm behaves similarly to its counterpart on the continuous state space. 
Item Type: Article 
Journal or Publication Title: Stochastics 
Uncontrolled Keywords: Metropolis-Hastings algorithm ; scaling problem ; weak convergence ; Mathematics Subject Classification 1991 ; Primary ; 60F05 ; Secondary ; 65U05 
Subjects: Q Science > QA Mathematics 
Departments: Faculty of Science and Technology > Lancaster Environment Centre 
ID Code: 19469 
Deposited By: ep_ss_importer 
Deposited On: 18 Nov 2008 09:13 
Refereed?: Yes 
Published?: Published 
Last Modified: 26 Jul 2012 15:30 
Identification Number: URI: http://eprints.lancs.ac.uk/id/eprint/19469 
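The abstract's setting, a Metropolis chain targeting a product measure on the vertices of {0,1}^d, is easy to simulate, and the "overall proportion of accepted iterations" it mentions can be observed directly. A minimal sketch (the marginal probability p, the single-bit-flip proposal, and the run length are illustrative choices, not the paper's):

```python
import random

def metropolis_hypercube(d=50, p=0.3, steps=20000, seed=1):
    """Metropolis chain on {0,1}^d targeting the product measure in which
    each coordinate is independently 1 with probability p.
    Proposal: flip one coordinate chosen uniformly at random."""
    rng = random.Random(seed)
    x = [0] * d
    accepted = 0
    for _ in range(steps):
        i = rng.randrange(d)
        # For a product measure, pi(y)/pi(x) depends only on the flipped bit.
        ratio = p / (1 - p) if x[i] == 0 else (1 - p) / p
        if rng.random() < min(1.0, ratio):
            x[i] = 1 - x[i]
            accepted += 1
    return x, accepted / steps

state, acc_rate = metropolis_hypercube()
print(f"acceptance rate: {acc_rate:.3f}")   # about 0.6 for these settings
```

At stationarity a fraction p of the bits are 1 (always accepted flips) and 1 − p are 0 (accepted with probability p/(1 − p)), so the long-run acceptance rate here is roughly 2p = 0.6.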
Next: Arguments Up: Generalized Symmetric Eigenvalue Problems Previous: LA_SBGV / LA_SBGVD / Contents Index 
LA_SBGV, LA_SBGVD, LA_HBGV and LA_HBGVD compute all eigenvalues and, optionally, all eigenvectors of the generalized eigenvalue problem Az = λBz, where A and B are real symmetric band matrices in the cases of LA_SBGV and LA_SBGVD, and complex Hermitian band matrices in the cases of LA_HBGV and LA_HBGVD. Matrix B is also positive definite. LA_SBGVD and LA_HBGVD use a divide and conquer algorithm. If eigenvectors are desired, they can be much faster than LA_SBGV and LA_HBGV for large matrices but use more workspace. 
Susan Blackford 2001-08-19
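Routines of this family work by reducing the generalized symmetric-definite problem to a standard symmetric one. The same reduction can be sketched for dense matrices with NumPy via a Cholesky factorization of B (the random test matrices below are illustrative, not part of the LAPACK interface):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # symmetric
M = rng.standard_normal((n, n)); B = M @ M.T + n * np.eye(n)  # SPD

# Reduce A z = lambda B z to a standard symmetric problem: with B = L L^T
# and y = L^T z, the problem becomes (L^{-1} A L^{-T}) y = lambda y.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
w, Y = np.linalg.eigh(C)        # standard symmetric eigensolver
Z = np.linalg.solve(L.T, Y)     # recover the generalized eigenvectors z

print(np.allclose(A @ Z, (B @ Z) * w))      # True: A z_i = w_i B z_i
print(np.allclose(Z.T @ B @ Z, np.eye(n)))  # True: vectors are B-orthonormal
```

The B-orthonormality Z^T B Z = I seen at the end mirrors the normalization LAPACK uses for these drivers.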
Orthonormal Help! (Lin. Alg.) 
October 30th 2008, 06:08 AM #1 
Junior Member 
Sep 2008 
Orthonormal Help! (Lin. Alg.) 
12) Determine whether the given orthogonal set of vectors is orthonormal. If it is not, normalize the vectors to form an orthonormal set. (Need help on this one) 
q1 = [1/2, 1/2], q2 = [1/2, -1/2] 
I did the dot product, which yielded 0. Then I took the dot product of each with itself and it yielded two of the same fractions, 1/2. Then I multiplied each by the magnitude, 1/sqrt(2), and it yielded 1/4 for the dot products with themselves. 
q1 dot q2 = 0 
q1 dot q1 = 1/4 
q2 dot q2 = 1/4 
I thought the last two were supposed to be ones... What did I do wrong? 
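A likely resolution (not taken from the thread itself): normalizing means dividing each vector by its norm, here ‖q‖ = 1/√2, whereas multiplying by the norm shrinks the vectors further, which is exactly how 1/2 becomes 1/4. A quick NumPy check:

```python
import numpy as np

q1 = np.array([0.5, 0.5])
q2 = np.array([0.5, -0.5])

print(q1 @ q2)   # 0.0 -> the set is orthogonal
print(q1 @ q1)   # 0.5 -> not unit length, so not yet orthonormal

# Normalize by DIVIDING each vector by its own norm (here 1/sqrt(2)).
u1 = q1 / np.linalg.norm(q1)
u2 = q2 / np.linalg.norm(q2)

print(u1 @ u1)   # ~1
print(u2 @ u2)   # ~1
print(u1 @ u2)   # 0.0
```

So the orthonormal set is u1 = (1/√2, 1/√2), u2 = (1/√2, −1/√2).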
The Complexity of Decision Versus Search Results 1 - 10 of 22 , 2000 "... This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random o ..." Cited by 92 (3 self) Add to MetaCart This paper puts forward a new notion of a proof based on computational complexity and explores its implications for computation at large. Computationally sound proofs provide, in a novel and meaningful framework, answers to old and new questions in complexity theory. In particular, given a random oracle or a new complexity assumption, they enable us to 1. prove that verifying is easier than deciding for all theorems; 2. provide a quite effective way to prove membership in computationally hard languages (such as Co-NP-complete ones); and 3. show that every computation possesses a short certificate vouching its correctness. Finally, if a special type of computationally sound proof exists, we show that Blum’s notion of program checking can be meaningfully broadened so as to prove that NP-complete languages are checkable. - Complexity theory retrospective II , 1997 "... ABSTRACT Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit ..." Cited by 90 (13 self) Add to MetaCart ABSTRACT Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. 
The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit-size complexity, Kolmogorov complexity, natural proofs, pseudorandom generators, the density of hard languages, randomized complexity, and lowness. Possible implications for the structure of NP are also discussed. 1 - Theoretical Computer Science , 1992 "... Under the hypothesis that NP does not have p-measure 0 (roughly, that NP contains more than a negligible subset of exponential time), it is shown that there is a language that is P_T-complete ("Cook complete"), but not P_m-complete ("Karp-Levin complete"), for NP. This conclusion, widely be ..." Cited by 56 (12 self) Add to MetaCart Under the hypothesis that NP does not have p-measure 0 (roughly, that NP contains more than a negligible subset of exponential time), it is shown that there is a language that is P_T-complete ("Cook complete"), but not P_m-complete ("Karp-Levin complete"), for NP. This conclusion, widely believed to be true, is not known to follow from P ≠ NP or other traditional complexity-theoretic hypotheses. Evidence is presented that "NP does not have p-measure 0" is a reasonable hypothesis with many credible consequences. Additional such consequences proven here include the separation of many truth-table reducibilities in NP (e.g., k queries versus k+1 queries), the class separation E ≠ NE, and the existence of NP search problems that are not reducible to the corresponding decision problems. This research was supported in part by National Science Foundation Grant CCR9157382, with matching funds from Rockwell International. 1 Introduction The NP-completeness of decision problems - Theoretical Computer Science , 1997 "... In this paper we extend a key result of Nisan and Wigderson [17] to the nondeterministic setting: for all α > 0 we show that if there is a language in E = DTIME(2^{O(n)}) that is hard to approximate by nondeterministic circuits of size 2^{αn}, then there is a pseudorandom generator that can be u ..." Cited by 42 (3 self) Add to MetaCart In this paper we extend a key result of Nisan and Wigderson [17] to the nondeterministic setting: for all α > 0 we show that if there is a language in E = DTIME(2^{O(n)}) that is hard to approximate by nondeterministic circuits of size 2^{αn}, then there is a pseudorandom generator that can be used to derandomize BP·NP (in symbols, BP·NP = NP). By applying this extension we are able to answer some open questions in [14] regarding the derandomization of the classes BP·Σ^P_k and BP·Θ^P_k under plausible measure-theoretic assumptions. As a consequence, if Θ^P_2 does not have p-measure 0, then AM ∩ coAM is low for Θ^P_2. Thus, in this case, the graph isomorphism problem is low for Θ^P_2. By using the Nisan-Wigderson design of a pseudorandom generator we unconditionally show the inclusion MA ⊆ ZPP^NP and that MA ∩ coMA is low for ZPP^NP. 1 Introduction In recent years, following the development of resource-bounded meas... , 1993 "... We obtain several results that distinguish self-reducibility of a language L with the question of whether search reduces to decision for L. These include: (i) If NE ≠ E, then there exists a set L in NP − P such that search reduces to decision for L, search does not nonadaptively reduce to de ..." Cited by 39 (9 self) Add to MetaCart We obtain several results that distinguish self-reducibility of a language L with the question of whether search reduces to decision for L. These include: (i) If NE ≠ E, then there exists a set L in NP − P such that search reduces to decision for L, search does not nonadaptively reduce to decision for L, and L is not self-reducible. 
Funding for this research was provided by the National Science Foundation under grant CCR9002292. - In CRYPTO , 2003 "... Abstract. We construct several new statistical zero-knowledge proofs with efficient provers, i.e. ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) a ..." Cited by 39 (8 self) Add to MetaCart Abstract. We construct several new statistical zero-knowledge proofs with efficient provers, i.e. ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard. We then turn to the general question of whether all problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. 
Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive. 1 , 2001 "... It is shown that for essentially all MAX SNP-hard optimization problems finding exact solutions in subexponential time is not possible unless W[1] = FPT. In particular, we show that O(2^{o(k)} p(n)) parameterized algorithms do not exist for Vertex Cover, Max Cut, Max c-Sat, and a number of pr ..." Cited by 36 (2 self) Add to MetaCart It is shown that for essentially all MAX SNP-hard optimization problems finding exact solutions in subexponential time is not possible unless W[1] = FPT. In particular, we show that O(2^{o(k)} p(n)) parameterized algorithms do not exist for Vertex Cover, Max Cut, Max c-Sat, and a number of problems on bounded degree graphs such as Dominating Set and Independent Set, unless W[1] = FPT. Our results are derived via an approach that uses an extended parameterization of optimization problems and associated techniques to relate the parameterized complexity of problems in FPT to the parameterized complexity of extended versions that are W[1]-hard. - In Proc. 9th Structures , 1994 "... Resource-bounded measure as originated by Lutz is an extension of classical measure theory which provides a probabilistic means of describing the relative sizes of complexity classes. Lutz has proposed the hypothesis that NP does not have p-measure zero, meaning loosely that NP contains a non-neglig ..." Cited by 18 (1 self) Add to MetaCart Resource-bounded measure as originated by Lutz is an extension of classical measure theory which provides a probabilistic means of describing the relative sizes of complexity classes. Lutz has proposed the hypothesis that NP does not have p-measure 0, meaning loosely that NP contains a non-negligible subset of exponential time. 
This hypothesis implies a strong separation of P from NP and is supported by a growing body of plausible consequences which are not known to follow from the weaker assertion P ≠ NP. It is shown in this paper that relative to a random oracle, NP does not have p-measure zero. The proof exploits the following independence property of algorithmically random sequences: if A is an algorithmically random sequence and a subsequence A0 is chosen by means of a bounded Kolmogorov-Loveland - In Proceedings of the 24th Conference on Foundations of Software Technology and Theoretical Computer Science , 2004 "... Abstract We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences: * The measure hypothesis: NP does not have p-measure 0. * The pseudo-NP hypothesis: there is an NP language that can be distinguished from anyDT ..." Cited by 18 (5 self) Add to MetaCart Abstract We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences: * The measure hypothesis: NP does not have p-measure 0. * The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME(2^{n^ε}) language by an NP refuter. * The NP-machine hypothesis: there is an NP machine accepting 0* for which no 2^{n^ε}-time machine can find infinitely many accepting computations. We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. 
We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses. 1 Introduction The following uniform hardness hypotheses are known to imply full derandomization of ArthurMerlin games (NP = AM): * The measure hypothesis: NP does not have p-measure 0 [24]. - BASIC RESEARCH IN COMPUTER SCIENCE, CENTER OF THE DANISH NATIONAL RESEARCH FOUNDATION , 1997 "... Several alternative formulations of the concept of an efficient proof system are nowadays coexisting in our field. These systems include the classical formulation of NP , interactive proof systems (giving rise to the class IP), computationally-sound proof systems, and probabilistically checkable pro ..." Cited by 15 (2 self) Add to MetaCart Several alternative formulations of the concept of an efficient proof system are nowadays coexisting in our field. These systems include the classical formulation of NP , interactive proof systems (giving rise to the class IP), computationally-sound proof systems, and probabilistically checkable proofs (PCP), which are closely related to multi-prover interactive proofs (MIP). Although these notions are sometimes introduced using the same generic phrases, they are actually very different in motivation, applications and expressive power. The main objective of this essay is to try to clarify these differences.
Another hard puzzle, but this one is trickier

One day an engineer was ordered by his boss to measure the length of a road. However, the engineer had only a single piece of rope, which he didn't know the length of. What he did learn was that if he stretched this rope out 30 times it would be 2m too long. But if he did it 29 times it was 4m short. Can you figure out the length of both the rope and the road? Is that harder for you, mathsy?

Last edited by espeon (2006-02-15 00:55:34)
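A quick sanity check on one reading of the puzzle (assuming "30 times" means 30 rope-lengths overshoot the road by 2 m and 29 rope-lengths fall 4 m short — that reading gives whole-number answers):

```python
# Two linear equations in the rope length r and road length L (metres),
# under the reading "30 rope-lengths overshoot the road by 2 m,
# 29 rope-lengths fall 4 m short":
#   30*r = L + 2
#   29*r = L - 4
# Subtracting the second from the first eliminates L:
r = (2 + 4) / (30 - 29)  # rope length
L = 30 * r - 2           # road length
print(f"rope = {r:g} m, road = {L:g} m")  # rope = 6 m, road = 178 m
```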
Math Forum Discussions — Topic: Boolean satisfiability (Replies: 6, Last Post: Dec 16, 2013 6:57 AM)

quasi — Re: Boolean satisfiability — Posted: Dec 12, 2013 6:13 AM

Steeve Beroard wrote:
>So you mean, by stating that "any unsatisfiable CNF is equivalent
>to a CNF with all combinations of variables and their negations",
>to give a particular method of resolution?

Not exactly. In general, the new CNF will have exponential size relative to the size of the original CNF, so any particular method used to determine its satisfiability will have exponential time cost relative to the size of the original CNF (not necessarily exponential relative to its own size, but that's irrelevant). Rather than talk about it abstractly, take an actual example so you can compare the sizes of the two CNFs.
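To make the size/time distinction in quasi's reply concrete, here is a toy brute-force satisfiability checker — an illustration of exponential cost in the number of variables, not a method anyone in the thread proposed:

```python
from itertools import product

def satisfiable(cnf, n):
    """Brute-force SAT check.  cnf is a list of clauses; each clause is a
    list of literals (+i for variable i, -i for its negation), with the
    variables numbered 1..n.  Tries all 2**n assignments, so the running
    time is exponential in n."""
    for assignment in product([False, True], repeat=n):
        def lit(l):
            v = assignment[abs(l) - 1]
            return v if l > 0 else not v
        if all(any(lit(l) for l in clause) for clause in cnf):
            return True
    return False

# (x1) AND (NOT x1) is a tiny unsatisfiable CNF; a "full" equivalent over
# k variables would contain all 2**k combination clauses.
print(satisfiable([[1], [-1]], 1))        # False
print(satisfiable([[1, 2], [-1, 2]], 2))  # True (x2 = True works)
```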
My regular readers are no doubt wondering what it was that Bram and I talked about yesterday evening that was so much fun it left my diary so inarticulate. I'll try to summarize. Rigorously formal mathematics is interesting, but has a well-earned reputation for being too painful and tedious to be really practical for anything. So far, it's been much more the domain of philosophers than of engineers. Bram and I feel that the Web has the power to change that. For the most part, tools for mechanically checking proofs have basically been solo tools. For one person to get through a significant slice of mathematics is an ordeal. But if the work is split up among many, it's a lot more reasonable. Collaboration requires standard file formats. Bram and I are thinking along the lines of a low-level, simple formal proof format. The main thing is to apply inference rules. Then, there's a higher-level metadata format for saying things like "this is a proof of Fermat's Little Theorem using the Peano axioms for natural numbers". The design of such languages parallels the design of computer languages. Names and namespaces are particularly important. Proofs should be written as stand-alone modules, but also so that you can build up a structure by reusing the results in other proofs. Since these file formats are intended to live on the Web, links are critically important too. In particular, you want to be able to import other namespaces. Thus, in the example above, "the Peano axioms for natural numbers" would be a link. A link can be a URL, but it can also be an immutable hash of the content. That way, if you have the file locally, you don't need to go to the net to resolve it. Typically, these formal systems have set theory as a primitive (Metamath uses ZFC). Other objects are encoded in the primitives. For example, the usual encoding of natural numbers is: 0 = {}, i + 1 = i ∪ {i}. But there are other ways of encoding numbers, some of which might make more sense.
For example, if you're doing calculations, a sequence of bits might be better. Of course, all the encodings of integers are equivalent, but if two proofs use two different encodings, it might be hard to knit them together. So I think it's important to make it easy to convert between these different conventions. There are a couple of techniques I can think of. First, you might be able to make inferences based on models: "if P is a proof of theorem T using encoding E0, and E1 is a model of E0, then (some simple transformation of T) is also true". Another way would be to parameterize proofs, so they take, say, the integers as arguments. Any encoding of the integers which satisfied the Peano axioms would do; to use a proof with a different encoding, you just prove the Peano axioms in your new encoding. The higher-level format is useful for humans wishing to browse the space of proofs, and also for clients to automatically verify proofs. At the very least, you want to let them crawl enough of the proof web that the expansion of the proof is complete. I'd also expect the usual Web patterns to show up: search engines, directories, pages full of annotated links and commentary. The exact choice of lower-level language is probably not that important. You can always use tools to translate more convenient proof formats into the lower-level one, or even to automate part of the process. By using a common lower-level format, people using different tools can still collaborate. One important test of this principle is whether the Metamath proofs can be so translated. Formal logic for proofs is not new, of course. The idea of a proof repository isn't new, either. But I am unaware of anyone who has designed languages for formal proofs with the explicit goal of fostering collaboration through the Web. This Introduction to Mathematical Logic may be rewarding to those intrigued by these ideas.
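The set-theoretic encoding mentioned above (0 = {}, i + 1 = i ∪ {i}) is easy to play with directly. A small sketch using Python frozensets — illustrative only, not part of any proposed proof format:

```python
def encode(n):
    """von Neumann encoding of naturals as sets: 0 = {}, i + 1 = i ∪ {i}.
    frozenset is used because Python sets can only hold hashable elements,
    and here sets must contain other sets."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def decode(s):
    # The encoding of n contains exactly the encodings of 0..n-1,
    # so its size recovers n.
    return len(s)

three = encode(3)
print(decode(three))        # 3
print(encode(2) in three)   # True: under this encoding, m < n is set membership
```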
SF&PA: One more example

April 9, 2008. Posted by Noah Snyder in planar algebras, subfactors, Uncategorized.

Sorry for the delay, Scott’s been in town and so I’ve been too busy doing actual research to get much blogging done. This post was also a little delayed because I didn’t understand this example as well as I’d hoped. I still don’t fully grok it so maybe you all can help me out. If you don’t follow this example, don’t worry, we’ll be moving on to pictures in the next post. Take a finite group G. Let C[G] denote the group algebra and let F(G) denote the algebra of functions on G with pointwise product (that is, F(G) has a basis of elements of the form $\delta_g$ and $\delta_g \delta_h$ is 0 unless $h=g$, in which case it is $\delta_g$). Recall that for both C[G] and F(G) there is a notion of tensor product for modules (for the former, g acts on a tensor product via $g \otimes g$, whereas for the latter $\delta_g$ acts on a tensor product via $\sum_{xy=g} \delta_x \otimes \delta_y$). We define a bioidal category called the “group Subfactor” as follows:

• The A-A objects are C[G]-modules
• The A-B objects are vector spaces (thought of as representations of the trivial group)
• The B-A objects are vector spaces (thought of as representations of the trivial group)
• The B-B objects are F(G)-modules

Now we need to define the four flavors of tensor products. All of these are of the following form: first push things around (using restriction/induction) until they live where the tensor product is supposed to live, then take the tensor product there. For example, if we take an A-B object (i.e. vector space) V and a B-A object (i.e. vector space) W, their tensor product should be a C[G]-module. To do this we first induce V and W up to G, getting two C[G]-modules, then we take their tensor product as C[G]-modules. In particular, if V and W are 1-dimensional, then their tensor product is the regular representation.
For another example, suppose we want to take the tensor product of an A-A object (that is, a representation V) and an A-B object (that is, a representation of the trivial group). The answer is supposed to be an A-B object, so we first turn V into a vector space by restriction (forgetting the C[G]-module structure) and then we take the tensor product. One last example: we want to take the tensor product of two vector spaces and get an F(G)-module. So first we take their tensor product as vector spaces, and then we push that up to an F(G)-module by tensoring with the regular representation. This whole tensor product process is a bit more confusing than it ought to be. Can anyone figure out what’s really going on here? In order to make this a Subfactor category, I also need to fix a simple A-B object called X which tensor-generates. That’s easy here since there’s only one such simple: the one-dimensional vector space. It is easy to see that this tensor generates, as $X \otimes X^*$ and $X^* \otimes X$ are just the regular representation of the corresponding ring and thus contain all simples. What is the dimension of X? (I’m going to leave the definition of dimension vague for the moment.) The A-A part of the category is just the representation theory of G, so we know the dimensions of objects there. In particular, the regular representation $X \otimes X^*$ has dimension #G. Hence, X has dimension $\sqrt{\#G}$. Since I haven’t told you much about how actual Subfactor people think about Subfactors, let me sketch the construction in this case. Let N be the von Neumann completion of the countable tensor product of 2×2 matrix rings. N is a hyperfinite II_1 factor. Pick your favorite faithful action of G on a set, and use that action to give an outer action of G on N by permuting the tensor factors in N. Now consider the fixed points N^G = M. Because the action is outer you can prove that M is also a factor, and since G is finite the inclusion M<N is finite index. This is the group subfactor.
“For example if we take an A-B object (i.e. vector space) V and a B-A object (i.e. vector space) W, their tensor product should be a C[G]-module. To do this we first induce V and W up to G getting two C[G]-modules, then we take their tensor product as C[G]-modules.” Just to clarify, you mean to take the tensor product over C[G] here? Otherwise you won’t get “In particular, if V and W are 1-dimensional, then their tensor product is the regular representation.” (Equivalently, you could do something like what you did for F(G).) Whereas for two A-A objects you take the tensor product over C, as you described at the beginning? (So in particular, “dimension over C” behaves as expected.) Arg, no, I *meant* tensor over C, but I clearly made a mistake somewhere. You must want to take the tensor product of the vector spaces before you induce. I’m trying to figure out the right way to make this statement from a source that only says how to do it for simple objects. Hi Noah; just a quick question to help me understand the relationship between planar algebras and fusion categories. On your website (by the way, the atlas of unitary fusion categories sounds fascinating) you say “A planar algebra is a combinatorial model for a pivotal fusion category…The relationship between planar algebras and fusion categories is that the planar algebra describes the Hom spaces between tensor powers of a chosen fundamental object in the fusion category.” I can see how that works, and it’s great, but I’m confused about the associators… how do I see them in the planar algebra picture? I’d like to know if the planar algebra framework gives an alternative framework to understand things like the 6j symbols. Is there a precise statement somewhere about the relationship between planar algebras and pivotal fusion categories? I’ve been meaning to get back to this series of posts, but I was very busy teaching at Mathcamp, and now I’m trying to finish papers since I’m going on the job market. 
Here’s the rapid explanation. Planar Algebras come from a pivotal category together with a choice of (tensor generating) object. The single strand represents the tensor generator, while tensor powers of it are given by other strands. Other simple objects only appear here in the guise of projections (in other words, you have to take Karoubi envelope to recover the whole category). So explicitly there’s no associators in the theory, because the only objects are V^(x)k, and for tensor products of those the associator is trivial. However, suppose you want to understand tensor products of projections. Then you have some 3j (aka Clebsch-Gordan) elements of the planar algebra which give explicit maps between A (x) B -> C (where A, B, and C here are projections in the planar algebra). Using these you get 6j symbols from tetrahedra. As for references, it’s not all together at one place yet (Vaughan Jones is writing a book this year though), but good places to start looking are Scott Morrison’s thesis, or Kuperberg’s spiders paper. Another good introduction (which goes the other way, from the planar algebra to the tensor category) is Scott, Emily, and my recent preprint on D_2n. More later, when I’ve finished writing some papers! Ok thanks a lot. Just from perusing the preprint on D_2n I can see this framework certainly brings a whole lot of new ideas to the table and a new perspective on things.I think it will take me a long while to get it all straight in my head. It seems to me that the planar algebra paradigm -does- bring new insight into the role of the associators in a fusion category. Shooting from the hip, it looks as if the two pictures seem to excel in different areas: it’s easiest to see the fusion rules in the fusion category perspective, but other things (such as, I suspect, the associators) might be clearest in a planar algebra framework. 
An eternity of infinities: the power and beauty of mathematics | The Curious Wavefunction, Scientific American Blog Network

The biggest intellectual shock I ever received was in high school. Someone gifted me a copy of the physicist George Gamow’s classic book “One two three…infinity”. Gamow was not only a brilliant scientist but also one of the best science popularizers of the twentieth century. In his book I encountered the deepest and most utterly fascinating pure intellectual fact I have ever known: the fact that mathematics allows us to compare ‘different infinities’. This idea will forever strike awe and wonder in me and I think is the ultimate tribute to the singularly bizarre and completely counter-intuitive worlds that science and especially mathematics can uncover. Gamow starts by alerting us to the Hottentot tribe in Africa. Members of this tribe cannot formally count beyond three. How then do they compare commodities such as animals whose numbers are greater than three? By employing one of the most logical and primitive methods of counting: the method of counting by one-to-one correspondences or, put more simply, by pairing objects with each other. So if a Hottentot has ten animals and she wishes to compare these with animals from a rival tribe, she will pair off each animal with its counterpart. If animals are left over in her own collection, she wins. If they are left over in her rival’s collection, she has to admit the rival tribe’s superiority in sheep. What is remarkable is that this simplest of counting methods allowed the great German mathematician Georg Cantor to discover one of the most stunning and counter-intuitive facts ever divined by pure thinking. Consider the set of natural numbers 1, 2, 3… Now consider the set of even numbers 2, 4, 6… If asked which set is greater, commonsense would quickly point to the former.
After all the set of natural numbers contains both even and odd numbers and this would of course be greater than just the set of even numbers, wouldn’t it? But if modern science and mathematics have revealed one thing about the universe, it’s that the universe often makes commonsense stand on its head. And so it is the case here. Let’s use the Hottentot method. Line up the natural numbers and the even numbers next to each other and pair them up:

1 2 3 4 5 …
2 4 6 8 10 …

So 1 pairs up with 2, 2 pairs up with 4, 3 pairs up with 6 and so on. It’s now obvious that every natural number n will always pair up with an even number 2n. Thus the set of natural numbers is equal to the set of even numbers, a conclusion that seems to fly in the face of commonsense and shatters its visage. We can extend this conclusion even further. For instance consider the set of squares of natural numbers, a set that would seem even ‘smaller’ than the set of even numbers. By similar pairings we can show that every natural number n can be paired with its square n^2, again demonstrating the equality of the two sets. Now you can play around with this method and establish all kinds of equalities, for instance that of whole numbers (all positive and negative numbers) with squares. But what Cantor did with this technique was much deeper than amusing pairings. The set of natural numbers is infinite. The set of even numbers is also infinite. Yet they can be compared. Cantor showed that two infinities can actually be compared and can be shown to be equal to each other. Before Cantor infinity was just a placeholder for ‘unlimited’, a vague notion that exceeded man’s imagination to visualize. But Cantor showed that infinity can be mathematically precisely quantified, captured in simple notation and expressed more or less like a finite number. In fact he found a precise mapping technique with which a certain kind of infinity can be defined.
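The pairing argument above is mechanical enough to state as code. A small sketch (any finite prefix of the matching can be checked, though of course no program exhausts the infinite sets):

```python
# The Hottentot pairing, made explicit: n <-> 2n matches the naturals with
# the evens, and n <-> n^2 matches them with the squares.
to_even = lambda n: 2 * n
to_square = lambda n: n * n

for n in range(1, 10_001):
    assert to_even(n) % 2 == 0              # lands on an even number...
    assert to_even(n) // 2 == n             # ...and the pairing inverts exactly
    assert round(to_square(n) ** 0.5) == n  # same story for the squares

print("1<->2, 2<->4, 3<->6, ...   and   1<->1, 2<->4, 3<->9, ...")
```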
By Cantor’s definition, any infinite set of objects which has a one-to-one mapping or correspondence with the natural numbers is called a ‘countably’ infinite set of objects. The correspondence needs to be strictly one-to-one and it needs to be exhaustive, that is, for every object in the first set there must be a corresponding object in the second one. The set of natural numbers is thus a ruler with which to measure the ‘size’ of other infinite sets. This countable infinity was quantified by a measure called the ‘cardinality’ of the set. The cardinality of the set of natural numbers and all others which are equivalent to it through one-to-one mappings is called ‘aleph-naught’, denoted by the symbol ℵ₀. The set of natural numbers and the set of odd and even numbers constitute the ‘smallest’ infinity and they all have a cardinality of ℵ₀. Sets which seemed disparately different in size could all now be declared equivalent to each other and pared down to a single classification. This was a towering achievement. The perplexities of Cantor’s infinities led the great mathematician David Hilbert to propose an amusing situation called ‘Hilbert’s Hotel’. Let’s say you are on a long journey and, weary and hungry, you come to a fine-looking hotel. The hotel looks like any other but there’s a catch: much to your delight, it contains a countably infinite number of rooms. So now when the manager at the front desk says “Sorry, but we are full”, you have a response ready for him. You simply tell him to move the first guest into the second room, the second guest into the third room and so on, with the nth guest moving into the (n+1)th room. Easy! But now what if you are accompanied by your friends? In fact, what if you are so popular that you are accompanied by a countably infinite number of friends? No problem!
You simply ask the manager to move the first guest into the second room, the second guest into the fourth room, the third guest into the sixth room… and the nth guest into the 2nth room. Now all the odd-numbered rooms are empty, and since we already know that the set of odd numbers is countably infinite, these rooms will easily accommodate all your countably infinite guests, making you even more popular. Mathematics can bend the laws of the material world like nothing else. But the previous discussion leaves a nagging question. Since all our infinities are countably infinite, is there something like an ‘uncountably’ infinite set? In fact, what would such an infinity even look like? The ensuing discussion probably constitutes the gem in the crown of infinities and it struck infinite wonder in my heart when I read it. Let’s consider the set of real numbers, numbers defined with a decimal point as a.bcdefg… The real numbers consist of the rational and the irrational numbers. Is this set countably infinite? By Cantor’s definition, to demonstrate this we would have to prove that there is a one-to-one mapping between the set of real numbers and the set of natural numbers. Is this possible? Well, let’s say we have an endless list of real numbers, for instance 2.823, 7.298, 4.001 etc. Now pair up each one of these with the natural numbers 1, 2, 3…, in this case simply by counting them. For instance:

S1 = 2.823
S2 = 7.298
S3 = 4.001
S4 = …

Have we proved that the real numbers are countably infinite? Not really. This is because I can construct a new real number not on the list using the following prescription: construct a new real number such that it differs from the first real number in the first decimal place, the second real number in the second decimal place, the third real number in the third decimal place… and the nth real number in the nth decimal place.
So for the example of three numbers above the new number can be:

S0 = 3.942 (9 is different from 8 in S1, 4 is different from 9 in S2 and 2 is different from 1 in S3)

Thus, given an endless list of real numbers counted from 1, 2, 3… onwards, one can always construct a number which is not on the list since it will differ from the 1st number in the first decimal place, 2nd number in the second decimal place… and from the nth number in the nth decimal place. Cantor called this argument the ‘diagonal argument’ since it really constructs a new real number from a line that’s diagonally drawn across all the relevant numbers after the decimal points in each of the listed numbers. The image from the Wikipedia page makes the picture clearer: In this picture, the new number is constructed from the red numbers on the diagonal. It’s obvious that the new number Eu will be different from every single number E1…En on the list. The diagonal argument is an astonishingly simple and elegant technique that can be used to prove a deep truth. With this comparison Cantor achieved something awe-inspiring. He showed that one infinity can be greater than another, and in fact it can be infinitely greater than another. This really drives the nail in the coffin of commonsense, since a ‘comparison of two infinities’ appears absurd to the uninformed mind. But our intuitive ideas about sets break down in the face of infinity. A similar argument can demonstrate that while the rational numbers are countably infinite, the irrational numbers are uncountably so. This leads to another shattering comparison; it tells us that the tiny line segment between 0 and 1 on the number line containing real numbers (denoted by [0, 1]) is ‘larger’ than the entire set of natural numbers. A more spectacular case of David obliterating Goliath I have never seen.
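The diagonal prescription can be written out directly. A sketch, with one extra convention the post doesn't need to worry about: the new digit avoids 0 and 9, sidestepping the 0.1999… = 0.2000… ambiguity in decimal expansions:

```python
def diagonal(decimals):
    """Cantor's diagonal construction: given a list of decimal digit
    strings (the digits after the point), build an expansion that differs
    from the k-th entry in its k-th digit.  New digit = old digit + 1
    (mod 10), except we jump to 2 whenever that lands on 0 or 9, to avoid
    the dual-representation ambiguity."""
    out = []
    for k, entry in enumerate(decimals):
        digit = (int(entry[k]) + 1) % 10
        if digit in (0, 9):
            digit = 2
        out.append(str(digit))
    return "0." + "".join(out)

# The three numbers from the text: fractional digits of 2.823, 7.298, 4.001.
listed = ["823", "298", "001"]
new = diagonal(listed)
for k, entry in enumerate(listed):
    assert new[2 + k] != entry[k]  # differs from entry k in digit k
print(new)  # a fractional part guaranteed to be absent from the list
```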
The uncountably infinite set of reals comprises a separate cardinality from the cardinality of countably infinite objects like the naturals, which was denoted by ℵ₀. Thus one might logically expect the cardinality of the reals to be denoted by ℵ₁. But as usual reality thwarts logic. This cardinality is actually denoted by ‘c’ and not by the expected ℵ₁. Why this is so is beyond my capability to understand, but it is fascinating. While it can be proven that 2^ℵ₀ = c, the hypothesis that c = ℵ₁ is actually just a hypothesis, not a proven and obvious fact of mathematics. This hypothesis is called the ‘continuum hypothesis’ and happens to be one of the biggest unsolved problems in pure mathematics. The problem was in fact the first of the 23 famous problems for the new century proposed by David Hilbert in 1900 during the International Mathematical Congress in France (among others on the list were the notorious Riemann hypothesis and the fond belief that the axioms of arithmetic are consistent, later demolished by Kurt Gödel). The brilliant English mathematician G H Hardy put the continuum at the top of his list of things to do before he died (he did not succeed). A corollary of the hypothesis is that there are no sets with cardinality between ℵ₀ and c. Unfortunately the continuum hypothesis may be forever beyond our reach. The same Gödel and the mathematician Paul Cohen damned the hypothesis by proving that, assuming the consistency of the basic foundation of set theory, the continuum hypothesis is undecidable and therefore it can be neither proved nor disproved. This is assuming that there are no contradictions in the basic foundation of set theory, something that itself is ‘widely believed’ but not proven. Of course all this is meat and drink for mathematicians wandering around in the most abstract reaches of thought and it will undoubtedly keep them busy for years.
But it all starts with the Hottentots, Cantor and the most primitive methods of counting and comparison. I happened to chance upon Gamow’s little gem yesterday, and all this came back to me in a rush. The comparison of infinities is simple to understand and is a fantastic device for introducing children to the wonders of mathematics. It drives home the essential weirdness of the mathematical universe and raises penetrating questions not only about the nature of this universe but about the nature of the human mind that can comprehend it. One of the biggest questions concerns the nature of reality itself. Physics has also revealed counter-intuitive truths about the universe like the curvature of space-time, the duality of waves and particles and the spooky phenomenon of entanglement, but these truths undoubtedly have a real existence as observed through exhaustive experimentation. But what do the bizarre truths revealed by mathematics actually mean? Unlike the truths of physics they can’t exactly be touched and seen. Can some of these, such as the perceived differences between two kinds of infinities, simply be a function of human perception, or do these truths point to an objective reality ‘out there’? If they are only a function of human perception, what is it exactly in the structure of the brain that makes such wondrous creations possible? In the twenty-first century, when neuroscience promises to reveal more of the brain than was ever possible, the investigation of mathematical understanding could prove to be profoundly significant. Blake was probably not thinking about the continuum hypothesis when he wrote the following lines:

To see a world in a grain of sand,
And a heaven in a wild flower,
Hold infinity in the palm of your hand,
And eternity in an hour.

But mathematics would have validated his thoughts. It is through mathematics that we can hold not one but an infinity of infinities in the palm of our hand, for all of eternity.

1. Shecky R.
7:58 pm 01/22/2013
Nice summary of Cantorian thought… further mind-blowing is the “Cantor Set” or “Cantor’s Dust”, which simultaneously ‘seems’ to contain ‘nothing’ or an infinity of elements.

2. rloldershaw 9:42 pm 01/22/2013
“Blake was probably not thinking about the continuum hypothesis when he wrote the following lines: To see a world in a grain of sand, And a heaven in a wild flower, Hold infinity in the palm of your hand, And eternity in an hour.” No, probably he was thinking about nature. Which brings up an interesting application of Cantor’s method to the physical world. If the cosmos is a discrete self-similar (i.e., fractal) hierarchy, an important question would be whether or not the hierarchy was infinite in scale, or whether the hierarchy had cutoffs. It might seem like one could never scientifically test the finite vs infinite issue, but there is a way to do so in the specific case of exact self-similarity. In this special case, the fractal hierarchy must be infinite and this can be tested using a limited region of the hierarchy. Only for an infinite hierarchy can a specific higher-level system and its specific self-similar lower-scale analogue have exact self-similarity. The proof involves Cantor’s matching technique applied to the number of subsystem levels contained within the higher-level system and its lower-scale analogue. Only for an infinite hierarchy can analogue systems at different levels have the same number of levels of subsystems. Robert L. Oldershaw, Discrete Scale Relativity/Fractal Cosmology

3. KaiRoesner 6:50 am 01/23/2013
Let’s try out this famous Hilbert’s Hotel and ask the manager to move all his guests by one room. How is he going to do that? He can’t possibly walk down the corridor, knock on each door and ask the residents to move to the next room because that would take an infinite time and I would never be able to move in.
Ok, so maybe the manager has previously installed a PA system in his hotel by which he now orders all his guests at once to move to the next room. But wait, the signal that travels from his mike to the speakers in all the rooms will take longer and longer the further down the corridor it goes. It will again take an infinite time until it reaches the guests at the (non-existing) end of the corridor. Hm… last try: The manager knocks at the first door, asks the resident to (please) step out, go to the next room and ask the guest there to do the same. While the first guest does as she was told I move into the first room – mission accomplished! Uh… but now there is still always (i.e. for an infinite time) at least one person who does not have a room. So I can’t really think of a practical way to fit one additional guest into Hilbert’s Hotel without kicking one out. But I guess “practical” is not a requirement here… Link to this 4. marclevesque 11:46 am 01/23/2013 If I’m following: ((1,2,3…) + (1,2,3…) + (1,2,3…) …) > (1,2,3…) and assuming: (1,2,3…) = (2,4,6…) then I’m not sure what > means in the first case, the cardinality is greater in what way ? Link to this 5. marclevesque 11:46 am 01/23/2013 ((1,2,3…), (1,2,3…), (1,2,3…)…) > (1,2,3…) Link to this 6. Allen Hazen 5:31 pm 01/23/2013 “The problem was in fact the first of the 23 famous problems for the new century proposed by David Hilbert in 1900 during the International Mathematical Congress in France (among others on the list were the notorious Riemann hypothesis and the fond belief that the axioms of arithmetic are consistent, later demolished by Kurt Gödel)” Slightly misleading way of putting it. Gödel didn’t demolish the belief that the axioms of arithmetic are consistent in his famous 1931 paper (his later work contains two proofs that the axioms of (first order Peano) Arithmetic are consistent): he demolished Hilbert’s hope that this consistency itself could be proved by certain restricted means. 
Not to distract people from the main point though. Is Gamow’s book still in print, preferably as a cheap paperback? The leading mathematician of MY high school class had it, shared ideas with me: we both learned from it. It’s a book bright high school students should be able to come across! Link to this 7. curiouswavefunction 9:48 pm 01/23/2013 Thanks. Yes, Gamow’s book is still available as a cheap, nice Dover paperback. Link to this 8. dilefante 1:06 am 01/24/2013 It’s a pity that Gamow wrote his book long before Archimedes’ palimpsest was re-discovered. Because, somewhere between our Hottentot-like past and Cantor, there is an interesting tidbit from Siracuse. We didn’t know it in the ’90s, but now we know that Archimedes anticipated Cantor in at least two things: First, he dealt with “actual infinities” (instead of the “potential” infinity of unbounded things, usually supposed to be the ceiling over Greek thought and still the only notion of infinity allowed by many of Cantor’s contemporaries). Second, when he happened to extend a proportion by means of equating two infinite sets, he felt the need to first show… that those sets could be put in a one-to-one relationship! Alas, his intuition was way too ahead of his time, and two millennia had to pass before the ideas reappeared. The story of the re-discovery and the analysis of the palimpsest, around ~2000, is wonderfully told in “The Archimedes Codex”. Link to this 9. S. N. Tiwary 3:04 am 01/24/2013 Power and beauty of mathematics is well known to all. Mathematics is the queen of all subjects. God is mathematician, hence mathematics is beautiful and powerful. S. N. Tiwary Link to this 10. kbkop 11:31 pm 04/26/2013 While many have attempted to prove that real numbers are countable by putting forth “proofs” that Georg Cantor’s Diagonal proof is flawed, it also seems that nearly every attempt to do so has involved complicated explanations that tend to cloud the clarity of the assertion. 
After much analysis and contemplation about the problem, I will now make my attempt at showing clearly and concisely why Cantor’s proof is flawed, and why real numbers are indeed infinitely countable. Link to Google Doc with explanation: Link to this 11. kbkop 11:51 pm 04/26/2013 Actually, the entire notion that an infinity has any “size” at all or that one is smaller or larger than another is just erroneous. By its very definition, SIZE is a limit, while infinity has no limits. It is like talking about the density of air in a total vacuum…there is no such thing! To say there are “less” even or odd integers than the number of total integers is just as wrong. There is NO number of even integers and NO number of integers. When it comes to infinity, the concept of NO-THING and EVERY-THING is equivalent, and that is precisely what makes infinity…NO count, NO quantity. Link to this 12. kbkop 1:54 am 04/27/2013 And of course there is a one-to-one matching between the members of different infinities, and while that may imply they are countable, they are not quantifiable. And for fractions, we are actually talking about a single infinite sequence of finite sets, where each set is made up of a finite number of progressively smaller parts, as follows: 1/2, 1/3, 2/3, 1/4, 2/4, 3/4, 1/5, 2 /5, 3/5, 4/5, 1/6, 2/6, 3/6, 4/6, 5/6, etc., so while the size of each piece gets infinitely smaller, the number of pieces increases infinitely. So, in reality, the infinite sequence is a linear set of values representing how many times the whole is divided into equal parts. Then for each of those values, there is a finite number representing how many of those equal parts there are. Link to this Add a Comment You must sign in or register as a ScientificAmerican.com member to submit a comment.
{"url":"http://blogs.scientificamerican.com/the-curious-wavefunction/2013/01/22/an-eternity-of-infinities-the-power-and-beauty-of-mathematics/?tab=read-posts","timestamp":"2014-04-19T17:20:14Z","content_type":null,"content_length":"123150","record_id":"<urn:uuid:987cd445-3012-423e-a3ee-b881383da685>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Sinusoidal Characteristics June 9th 2011, 05:11 AM #1 Junior Member Jun 2011 Sinusoidal Characteristics I am having difficulty with this function: y = -2sin (2 ((t+1)/3) I know to find the period I need to divide 360 degrees by the absolute value of b, but I can't figure out how to factor the equation properly to find the correct value of b. I thought I could multiply both the 2, and the (t+1)/3 by 3 to eliminate the fraction, but that doesn't seem to be right. I think I'm probably missing something fairly obvious... the other questions have been so easy! I would multiply out the argument of the sin function. If you're trying to find the period, you can ignore the phase angle. I'm not sure what you mean by, "multiply out the argument of the sin function." I mean, 2 ((t+1)/3) = 2t/3 + 2/3. I don't mean using the sum of angles formula. Okay, I see what you're talking about. That works perfectly to find the period, great! I am able to find the period, amplitude, max value, min value, range, domain, and vertical displacement from that, but am not sure how to find the horizontal shift then... Would it be one unit to the right? (t+1)/3 = (t-(-1))/3 Does the 3 impact it at all? I'm a little confused on that... You're on the right track. Think of horizontal shifting this way: suppose I have a function, x^2. It's zero when x = 0, right? Now suppose I shift it horizontally. That always looks like replacing the x with an x-a or x+a. So which is which? Well, if I replace x with x-2 in our function, then I get (x-2)^2. When is that zero? When x = 2. Evidently, then, the x-2 shifting is to the right. If I were to do x+2, then I'd have (x+2)^2, which is zero when x = -2. So that one's shifted to the left. This is how I remember which one is which. Hope that works for you. In answer to your question, the 3 does not impact the shifting. Awesome, thanks so much for all of your help!!! You're very welcome! 
June 9th 2011, 05:14 AM #2 June 9th 2011, 05:16 AM #3 Junior Member Jun 2011 June 9th 2011, 05:17 AM #4 June 9th 2011, 05:34 AM #5 Junior Member Jun 2011 June 9th 2011, 05:36 AM #6 Junior Member Jun 2011 June 9th 2011, 05:42 AM #7 June 9th 2011, 05:49 AM #8 Junior Member Jun 2011 June 9th 2011, 05:50 AM #9
{"url":"http://mathhelpforum.com/trigonometry/182688-sinusoidal-characteristics.html","timestamp":"2014-04-18T23:24:21Z","content_type":null,"content_length":"54189","record_id":"<urn:uuid:10fa0d9b-2049-48d6-86d4-1855020e9f8d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific notation John Lee wrote (in part): >> Write in scientific notation: 255,000. >> (1) 255 x 10^3 >> (2) 25.5 x 10^4 >> (3) 2.55 x 10^5 >> (4) 255,000 Dave L. Renfro wrote (in part): > In my opinion, this is the kind of question that should > be avoided in tests. If it is important for the student > to know what "scientific notation" means, then it shouldn't > be tested in a multiple choice format. It should be in a > fill-in-the-blank format (which should be very easy to > grade). It's also difficult for me to determine what is > being tested, aside from simply knowing the meaning of > a certain word, which to me isn't something that I'd > consider all that important when making [...] Here's another thing that bothers me about this question. Notice how all the options give the same number, only written in different ways, the first two of which are just minor variations of what is desired. In fact, if the student knows nothing more than that you're supposed to have a mantissa between 1 and 9.999...9, no math insight or background is needed to select (3) as correct. Perhaps the other options should have been things like 2.55 x 10^4, 2.55 x 10^6, (2.55)^5, (255)^3, etc. Dave L. Renfro
{"url":"http://math-teach.2386.n7.nabble.com/Scientific-notation-td21685.html","timestamp":"2014-04-17T07:04:40Z","content_type":null,"content_length":"56201","record_id":"<urn:uuid:426c43e8-11a0-4b03-bd03-9a041d609a4f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
inverse function challenge January 28th 2010, 10:54 PM #1 Jan 2010 inverse function challenge Can someone show me the way? I need to find the inverse of f(x) = 9 + (X squared) + tan(Pi times X/2) Perhaps there are tools on this website for writing the equation. I hope you can understand. $f(x)= 9+ x^2+ tan(\frac{\pi}{2} x)$ Click on that to see the code. There is also a thread titled "Latex Help" under "Math Resources". What do you mean by "find the inverse"? That is not a one-to-one function and so has no inverse. Even if you were to restict the domain to, say 0 to $\pi/2$, I don't believe it has an inverse in terms of elementary functions. January 29th 2010, 02:45 AM #2 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/advanced-algebra/126074-inverse-function-challenge.html","timestamp":"2014-04-24T09:54:18Z","content_type":null,"content_length":"33849","record_id":"<urn:uuid:90b584ba-57ad-4ae8-bdbc-98418c6c1906>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantitative Estimation

Chapter 3: Predicting Sentencing for Low-Level Crimes: A Cognitive Modeling Approach

Laws and guidelines regulating legal decision making are often imposed without taking the cognitive processes of the legal decision maker into account. In the case of sentencing, this raises the question of to what extent the sentencing decisions of prosecutors and judges are consistent with legal policy. Especially in handling low-level crimes, legal personnel suffer from high case loads and time pressure, which can make it difficult to comply with the often complex rulings of the law. To understand sentencing decisions it is beneficial to consider the cognitive processes underlying the decision. An analysis of fining and incarceration decisions in cases of larceny, fraud, and forgery showed that prosecutors’ sentence recommendations were not consistent with legal policy. Instead, they were well described by a cognitive theory of quantitative estimation that assumes sentence recommendations rely on a categorization of cases based on their characteristics.

Predicting Sentencing for Low-Level Crimes: A Cognitive Modeling Approach

How are criminal sentences determined? Although legal systems differ from country to country, judges worldwide struggle with the problem of determining which factors should be considered and how they should be combined to form appropriate and just sentences. Even if the legal system provides guidelines to regulate the sentencing process, the question remains how well judges and other legal personnel follow the prescribed policies (Ruback & Wroblewski, 2001).
Research on sentencing has a long tradition of identifying deviations from legal policy: Extralegal factors such as race or gender have been found to influence sentencing, and in some cases legal factors are not properly taken into account (e.g., Davis, Severy, Kraus, & Whitaker, 1993; Ebbesen & Konečni, 1975; ForsterLee, ForsterLee, Horowitz, & King, 2006; Henning & Feder, 2005; Johnson, 2006; Ojmarrh, 2005). This indicates that the cognitive processes of legal professionals do not always lead to sentencing that is consistent with the sentencing policy specified by the law (Dhami & Ayton, 2001; Ebbesen & Konečni, 1975; Hertwig, 2006; Tata, 1997; Van Duye, 1987). The goal of this article is to investigate to what extent sentencing decisions deviate from legal regulations and how these deviations can be explained by cognitive models of the sentencing process. For this purpose we test whether prosecutors’ sentence recommendations can be better explained by a cognitive model or by adherence to legal policy. Additionally, we examine whether the same cognitive processes underlie both fining and incarceration decisions.

Heuristics in Legal Decision Making

The legal decision environment is highly complex and the workload of legal personnel is heavy; decisions need to be made under time pressure, and often little or no feedback regarding the quality of the decision is available (Gigerenzer, 2006). Even if specific rules exist to guide the decision process, they are often too complex to be executed in the allotted time (Ruback & Wroblewski, 2001). Not surprisingly, then, research on sentencing has found that often only a small part of the available information is used to determine a sentence (Ebbesen & Konečni, 1975, 1981). Heuristics are simple strategies that allow decisions to be made without much information or complex computations.
Although there is disagreement about the extent to which heuristics allow good decisions and how they should be formalized (Gigerenzer, 1996; Kahneman & Tversky, 1996), there is converging evidence that heuristics provide good accounts of people’s decision processes (e.g., Bröder & Schiffer, 2003; Payne, Bettman, & Johnson, 1993; Rieskamp & Otto, 2006). In particular, when making complex decisions under time pressure, reliance on heuristics increases (Payne, Bettman, & Johnson, 1988; Rieskamp & Hoffrage, in press), making the legal domain an area conducive to decision-making heuristics. In fact, reliance on heuristics has been shown in several areas of legal decision making, such as bail decisions (Dhami & Ayton, 2001; Dhami, 2003; Leiser & Pachman, 2007), tort law (Guthrie, Rachlinski, & Wistrich, 2001), and sentencing (Englich, Mussweiler, & Strack, 2006; for an overview see Colwell, 2005; Engel & Gigerenzer, 2006). Especially in the domain of low-level offenses, where the decision situation can be relatively transparent and the costs of wrong decisions low, reliance on heuristics might be a way to deal with the immense workload involved. Although regrettably widely ignored by research (for an exception, see Albrecht, 1980), the majority of the cases in courts are low-level crimes and petty offenses. For example, in Germany, about 80% of the cases are punished with a fine (Langer, 1994; Meier, 2001), an alternative to incarceration that can only be imposed in minor cases. Thus, particularly in cases sentenced with a fine, heuristics might be prevalent.

Sentencing Decisions by the Prosecution

As in most legal systems, in Germany the sentence is determined by the judge. However, the judge makes this decision after hearing sentencing recommendations from both the prosecution and the defense.
Research has shown that the sentencing recommendation of the prosecution is the single most important factor influencing the decision of the judge (Ebbesen & Konečni, 1975; Schünemann, 1988). For instance, Englich and Mussweiler (2001) found that, all things being equal, the recommendation of the prosecution significantly influenced a criminal’s sentence; similarly Dhami and Ayton (2001) showed that in bail decisions, British magistrates followed almost without exception the recommendation of the prosecution. Additionally the prosecution can directly impose fines by penalty order. If the defendant accepts the fine, the case never goes to trial. These findings indicate that to understand which factors influence a sentence’s magnitude, it is indispensable to first investigate the process by which the prosecution determines the sentence recommendation. How should the prosecution do this? German sentencing is regulated by the German penal code (Strafgesetzbuch, StGB; Tröndle & Fischer, 2007), more specifically by articles 21, 23, 46, 47, and 49 and by decisions of the German Federal Court of Justice. Both judge and prosecution are bound by the same legal regulations. The general goal is to achieve an appropriate sentence that is proportional to the guilt of the offender. For each offense there exists a sentencing range that establishes a minimum and a maximum sentence that can be imposed. Within these often rather broad sentencing ranges, the placement of the sentence depends on the seriousness of the case and is largely left to the discretion of the judge. The judge’s task, as well as the prosecution’s, is to evaluate the factors mitigating or aggravating the guilt of the offender and to determine the sentence accordingly. Which factors should be considered as mitigating or aggravating is specified in the penal code. 
Article 46 of the StGB alone lists over 20 factors relevant for the sentencing decision, although it cautions that the list is not exhaustive.^1 What the German penal code (§ 46) does not provide are explicit guidelines on how the factors should be combined. However, the German Federal Court of Justice recommends that mitigating and aggravating factors be balanced in an integrative evaluation of the overall picture (Schäfer, 2001). According to the predominant opinion in the legal literature, this is best accomplished with a three-step sentencing process: All relevant factors are evaluated according to the direction of their effect on the sentence (aggravating or mitigating), then weighted by their importance, and finally added up to form the sentence (Bruns, 1985, 1988; Foth, 1985; Schäfer, 2001; but see Mösl, 1981, 1983; and Theune, 1985a, 1985b). Thus, the legal prescription asks for a linear additive decision process.

Models of Sentence Magnitude

How can the underlying cognitive process of sentencing decisions be described? In many areas of psychology multiple linear regression models are applied to analyze decision policies (Doherty & Kurz, 1996; Brehmer, 1994; Cooksey, 1996). Likewise, in the legal domain these have been the predominant models used to analyze sentencing policies and to identify which factors influence sentence magnitude (Engen & Gainy, 2000; Johnson, 2006; Kautt, 2002; Kautt & Spohn, 2002). Regression models are especially attractive for modeling sentencing, as the three-step model is consistent with their linear additive approach (Brehmer, 1994; Hammond, 1996). More specifically, regression models assume that quantitative judgments, such as determining the magnitude of a sentence, can be modeled as a process of weighting and adding information (Doherty & Brehmer, 1997; Einhorn, Kleinmuntz, & Kleinmuntz, 1979; Juslin, Karlsson, & Olsson, in press).
Each factor is weighted according to its importance and the judgment is determined as the sum of the weighted factor values. The weights that best characterize the sentencing process are found by minimizing the squared deviation between the actual and the estimated sentence (cf. Cohen, Cohen, West, & Aiken, 2003; Cooksey, 1996):

ŷ_p = β_0 + Σ_j β_j c_jp,   (1)

where the estimate ŷ_p for case p is given by the sum of the products of the factor values c_jp of the factors j with their respective weights β_j, plus an intercept β_0. If prosecutors and judges in fact weigh mitigating and aggravating factors against each other and then add up the weighted factor values to arrive at a final sentence, sentencing should be well captured by multiple regression. In this case multiple regression allows us to identify the factors that influenced the sentencing decision. Furthermore, if the sentencing policy corresponds to the law, all legally relevant factors should make a significant contribution, whereas extralegal factors should not be considered. Thus, analyzing sentencing with a multiple linear regression approach allows us to compare the judges’ and prosecutors’ sentencing policies to the policy required by law.

The Mapping Model: A Cognitive Theory of Quantitative Estimation

Even though multiple regression can capture decision outcomes, its value as a model of human judgment processes is debatable. Researchers have doubted that people actually perform the relatively complex calculations required by multiple regression and therefore have argued that multiple regression does not provide a valid description of the cognitive process underlying a decision (Brehmer, 1994; Einhorn et al., 1979; Gigerenzer & Todd, 1999; Hoffman, 1960). In response to this criticism we have proposed an alternative, called the mapping model, that we consider to be a psychologically plausible alternative to multiple regression.
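To make the regression benchmark concrete, the linear additive policy of Equation 1 can be sketched in a few lines of code. All case data and factor values below are invented for illustration; they are not the factors or values from the trial records.

```python
import numpy as np

# Hypothetical training cases: each row is one case, each column a
# sentencing factor (e.g., number of charges, prior convictions,
# net worth of property violated). Values are illustrative only.
X = np.array([
    [1, 0,   50.0],
    [3, 2,  400.0],
    [5, 1,  150.0],
    [2, 4,  900.0],
    [8, 3, 1200.0],
], dtype=float)
# Observed sentences (e.g., number of daily payments) for each case.
y = np.array([10.0, 60.0, 40.0, 90.0, 150.0])

# Add an intercept column, then find the weights beta that minimize
# the squared deviation between actual and estimated sentences
# (Equation 1): y_hat_p = beta_0 + sum_j beta_j * c_jp.
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict(case):
    """Linear additive estimate for a new case (a vector of factor values)."""
    return beta[0] + np.dot(beta[1:], case)
```

A factor's estimated weight β_j then indicates how strongly that factor contributes to the recommended sentence, independently of the other factors.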
The mapping model provides a cognitive theory for quantitative judgments and has been successful in predicting people’s estimations (Helversen & Rieskamp, in press). Generally, the mapping model assumes that when people make a judgment about a case or object, they assign the object to a category and use a typical criterion value for this category as an estimate. Categories are formed on the basis of previously encountered objects, and category membership is defined by the objects’ characteristics or features. The typical criterion value of a category is represented by the median criterion value of all cases belonging to this category. For example, to estimate the selling price of a house, the mapping model assumes that one would consider the house’s features that speak in favor of a high price (e.g., great location, a deck, a swimming pool), categorize the house according to its average value on these features into a certain price class, and estimate a price that is typical for houses within this price class, that is, the median price for which houses in this category were sold. Helversen and Rieskamp (in press) showed that the mapping model, in comparison to multiple regression, was particularly suitable for predicting people’s estimations if the cases’ criterion values followed a skewed distribution, which is typical of sentencing decisions (Meier, 2001). Helversen and Rieskamp (in press) tested the mapping model under highly controlled experimental settings; yet these conditions are similar to the conditions of sentencing decisions, suggesting that the mapping model might be a good model for sentencing decisions. How is the mapping model applied to sentencing decisions? Commonly, each case is described by several characteristics or factors relevant for sentencing.
To apply the mapping model, cases are first categorized according to their mean value on these factors.^2 To allow comparisons of factors with different dispersions, all factors are normalized by applying range frequency theory (Parducci, 1974). Using range frequency theory for normalization instead of a purely statistical technique (i.e., z-transformation) has the advantage that it achieves a psychologically more plausible representation of how the magnitude of a factor value is subjectively perceived by an individual (for details see Appendix A). After normalizing all factors, the mean factor value is determined for each encountered case. This mean value represents the seriousness of the case. Next, the minimum and the maximum value of the cases’ seriousness are determined and the range is divided into seven equally sized categories; that is, category boundaries are chosen so that the distance between category boundaries is the same for all categories. Due to humans’ limited cognitive capacities (see also Miller, 1956), only a limited number of categories is assumed. Next, the typical sentence for each category is computed by taking the median sentence of all previously encountered cases that fall into the same category. The sentence for a new case is simply determined by establishing its category membership and then using the typical sentence of that category as a sentence for the new case. Figure 7 gives an overview of the processing steps assumed by the mapping model.

Figure 7: The processing steps of the mapping model. In the first step, the relevant cues are evaluated and rated according to their severity. In the second step the cues are integrated by establishing the average severity score. Then the case is categorized according to its average score, and the typical criterion value, that is, the sentence for this category, is retrieved. In the last step the retrieved criterion value is used as an estimate.

To give an example of how the mapping model can be applied to sentencing, we will describe how a typical case would be sentenced according to the mapping model. Imagine a case of shoplifting: The defendant has confessed to stealing minor goods in five cases. The net worth of the stolen goods amounts to $100 and the defendant has three prior convictions for theft. In the first step the prosecutor considers the cues, that is, the characteristics of the case relevant for sentencing, such as the number of charges, the amount of money stolen, and the number of prior convictions. Next, she rates the severity of each cue; for instance, the amount of money stolen was low, but three prior convictions are of medium severity, and so forth, thereby standardizing the cues. After that she forms an overall impression of the case, taking the average of the cues’ severity scores. Based on this average score she categorizes the case as a theft of medium seriousness and retrieves the typical sentence for this category. In the last step she determines the sentence recommendation based on the retrieved category value. When comparing the mapping model with the regression model, two important differences can be emphasized. First, unlike the regression model, the mapping model gives all factors the same weight for assigning a case to a category. Second, in contrast to the regression model, the influence of one single factor in the mapping model can interact with the other factors. The factors determine which category is used to make an estimate. Thus, how an estimate changes when one factor provides positive compared to negative evidence depends on the evidence of the other factors. Here the mapping model differs substantially from the regression model, where a factor’s impact on the estimate is independent of the other factors.
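The processing steps of the mapping model can likewise be sketched in code. As a simplification, plain min-max normalization stands in for the range frequency transformation of Appendix A, and the training cases are invented for illustration:

```python
import statistics

def fit_mapping_model(cases, sentences, n_categories=7):
    """Fit a simplified mapping model.

    `cases` is a list of equal-length tuples of factor values and
    `sentences` the corresponding sentences. Min-max normalization is
    used here instead of the range frequency transformation of the
    original model; this is a simplification.
    """
    n_factors = len(cases[0])
    lo = [min(c[j] for c in cases) for j in range(n_factors)]
    hi = [max(c[j] for c in cases) for j in range(n_factors)]

    def seriousness(case):
        # Average normalized cue value = overall seriousness of the case.
        vals = []
        for j in range(n_factors):
            span = hi[j] - lo[j]
            vals.append((case[j] - lo[j]) / span if span else 0.0)
        return sum(vals) / n_factors

    scores = [seriousness(c) for c in cases]
    s_min, s_max = min(scores), max(scores)
    width = (s_max - s_min) / n_categories or 1.0

    def category(score):
        # Equal-width categories over the observed seriousness range.
        k = int((score - s_min) / width)
        return min(max(k, 0), n_categories - 1)

    # Typical sentence per category = median of its training sentences.
    typical = {}
    for score, sent in zip(scores, sentences):
        typical.setdefault(category(score), []).append(sent)
    typical = {k: statistics.median(v) for k, v in typical.items()}

    def estimate(case):
        k = category(seriousness(case))
        if k in typical:
            return typical[k]
        # Fall back to the nearest non-empty category.
        nearest = min(typical, key=lambda c: abs(c - k))
        return typical[nearest]

    return estimate
```

Note how a new case simply inherits the median sentence of its category: a change in one cue only affects the estimate if it moves the case across a category boundary, which is the interaction property described above.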
Fines versus Incarceration

The second goal of this article was to investigate differences between fines and incarceration sentences. Low-level offenses can be sentenced by one or the other. Although much research has examined which factors influence sentencing length in incarceration sentences (ForsterLee et al., 2006; Johnson, 2006; Langer, 1994; Oswald, 1994; Schünemann, 1988), to our knowledge there is a lack of research on fining and the differences between incarceration and fining decisions (for an exception see Albrecht, 1980; Oswald, 1994). However, fining and incarceration are often viewed as serving different sentencing goals (Schäfer, 2001). This suggests that fining and incarceration decisions could be based on different factors and that the cognitive processes underlying the decisions could also differ. Fining decisions could be especially likely to induce heuristic decision making. As fines constitute the majority of the sentences (Meier, 2001), they represent the biggest proportion of the prosecution’s workload. More serious cases might be allotted more time and be processed more systematically, as they are less frequent, incur more public interest, and have a higher probability of appeal. Thus, cases sentenced by a fine could differ systematically from cases that are sentenced with incarceration. To investigate these questions, we conducted an analysis of trial records for three common offenses.

Study: Analysis of Trial Records

The first goal of the study was to model sentencing decisions for common minor offenses, investigating how well the sentencing procedure corresponds with the legal policy and whether sentencing decisions are best described by a cognitive theory of quantitative estimation. The study’s second goal was to examine whether fining differs systematically from incarceration decisions and which factors influence sentence magnitude in the two types of decisions. We approached these goals by conducting an analysis of trial records.
In comparison to an experimental approach, this type of analysis has the advantage that it is based on real cases and does not need to be limited to the small number of factors that can be manipulated in an experimental study. Furthermore, the complexity of the real cases as well as the time pressure of the daily case load could be decisive for the cognitive process underlying the sentencing decisions, favoring the analysis of real case data. We focused on three common offenses against property, namely, theft, fraud, and forgery. This allowed us to include different offenses while measuring the severity of the offense on a common scale (money) and keeping the sentencing range equal (0–5 years for a common case and 3–6 months to 10 years for an aggravated case). To investigate the sentencing process we collected trial records from a small Brandenburg court (the Amtsgericht Bad Freienwalde) for the years 2003 to 2005. All records with a main charge of theft, forgery, or fraud (§§ 242, 243, 244, 248, 263, and 267) were included in the analysis. Trial records included the indictment, the transcript of the trial, orders by the prosecution, and the verdict. Based on these documents we identified offense and offender characteristics relevant for sentencing, the sentencing range, and the recommendations of the prosecution and the defense. Categorization system. Offense and offender characteristics were classified by a categorization system that was based on the German penal code (§§ 46, 47, 52, 53, 242, 243, 244, 248, 263, and 267) in close cooperation with legal experts in the area of sentencing. Classification of a factor rested upon the indictment, the trial transcripts, and the verdict. Besides the legal factors, the categorization system also included extralegal factors that have been found to affect sentencing (e.g., Ebbesen & Konečni, 1975; ForsterLee et al., 2006). Table 12 provides an overview and a description of the factors.
Table 12: Factors of the categorization system, with descriptions and coded values.

Offender information
- Gender: male vs. female (0 vs. 1)
- Nationality: German vs. non-German (0 vs. 1)
- Age: 20–80 years
- Family status: married or single with kids vs. single and no kids (0 vs. 1)
- Occupational status: employed, apprenticed, or student vs. unemployed (0 vs. 1)
- Economic status: above poverty line vs. below poverty line (ca. €900 per month) (0 vs. 1)
- Diminished capacity: no diminished capacity vs. diminished capacity; diminished capacity was assumed if the defendant had a psychological or medical diagnosis of a mental or organic disorder (0 vs. 1)
- No. of prior convictions: 0–14
- Type of last sentence: fine, incarceration, or incarceration with probation (dummy coded)
- Probation status: offender was not on probation when the offense was committed vs. was on probation (0 vs. 1)

Offense characteristics
- Net worth of property violated: €0–80,000
- No. of charges: 1–112
- No. of offenders: 1–3
- Mitigating evidence I: coded as a summary factor; one point was added if there was external pressure to commit the crime (e.g., an emergency situation or blackmail), the crime was a failed attempt, the offender’s role was secondary, or the offender’s capacity was diminished due to alcohol (0–2)
- Mitigating evidence II: one point was added if the offender had no prior convictions or the net worth of property violated was below €30 (0–2)
- Remorse: defendant showed no remorse vs. showed remorse, offered reparation or amends (0 vs. 1)
- Confession: defendant did not confess vs. defendant confessed (0 vs. 1)
- Aggravating evidence: one point was added if any of the following conditions was fulfilled: a high number of offenses (> 5) over a long period of time (> 6 months); the offense was carefully planned; perseverance in the face of obstacles; incited others to commit the crime; used unnecessary violence (0–2)

Legal regulations
- Offense type: theft, fraud, or forgery (dummy coded)
- Summary penalty: a summary penalty was not given vs. was given (0 vs. 1)
- Penalty order: sentencing by trial vs. sentencing by penalty order (0 vs. 1)
- Sentencing range: max. sentence 5 years vs. max. sentence 10 years (0 vs. 1)

The categorization system included personal information on the offender, as well as legally relevant factors concerning the offender’s criminal and personal history. To capture the severity of the crime, several characteristics of the offense were coded, such as the number of charges and the net worth of property violated. The presence of mitigating and aggravating factors concerning the conduct of the crime was coded in two summary factors capturing the amount of mitigating and aggravating evidence. If the description of a case in the indictment and the trial protocols left doubt about the presence of a mitigating or aggravating factor, the verdict was used as a reference. Only if the behavior in question was mentioned in the rationale of the verdict was it considered as mitigating or aggravating evidence. Additionally, the presence of a confession and mitigating behavior after the crime, such as remorse, were coded as two separate factors. A further mitigating summary factor coded whether the net worth of property violated was low enough to count as a less severe case (§ 248) and whether the offender had no prior record; these are two characteristics specifically identified by the German penal code that mitigate the sentence regardless of the overall impact of property violated or of any prior record.
Additionally, we included three factors concerning legal regulations, such as the sentencing range applied. Finally, we did not include the recommendation of the defense in the analysis, because in most cases the defendant did not have a defense attorney present during the trial.

For most variables a nominal or ordinal level of measurement was assumed. Nominal variables were binary coded, indicating the presence or absence of a factor; ordinal variables were dichotomized by a median split. For the variables number of charges, number of offenders, number of prior convictions, amount of mitigating or aggravating evidence, and net worth of property, an interval scale was assumed. Two independent raters coded the cases. The raters' agreement was satisfactory on all subjectively rated factors (r = .77, SD = .12). Missing data were analyzed for non-randomness; missing values were substituted with the mean of the variable, because no effect on the dependent variable was found and the overall number of cases was rather small.

Dependent variables. Dependent variables were the type of sentence (fine or incarceration) and the number and magnitude of daily payments (for fines) or the length of a prison term in months (for incarceration), as recommended by the prosecution and in the verdict. According to the German legal system a fine is constructed as a number of daily payments of a certain magnitude. The number is determined in correspondence to the severity of the crime, whereas the magnitude depends on the income of the defendant. As the aim of this study was to compare sentencing for prison terms and fines, we focused on the number of daily payments as the dependent variable for fines, corresponding to the length of prison sentence. The number of daily payments can vary between 5 and 365; more severe offenses are sentenced by incarceration. The dependent variable for incarceration length was the number of months sentenced to prison, irrespective of whether the offender was let off with probation.
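The variable coding described above (binary coding of nominal factors, median splits for ordinal factors, and mean substitution of missing values) can be sketched in a few lines of Python. The helper names are ours; this is an illustration of the scheme, not the original analysis code:

```python
import statistics

def median_split(values):
    """Dichotomize an ordinal variable: 1 if above the median, else 0."""
    med = statistics.median(values)
    return [1 if v > med else 0 for v in values]

def mean_impute(values):
    """Substitute missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = statistics.mean(observed)
    return [mean if v is None else v for v in values]
```

Interval-scaled factors such as net worth of property or number of charges would be passed through unchanged.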
To identify the differences between fines and incarceration, we analyzed the sentences for fines and incarceration separately.

Description of the court, the offenses, and the offenders. The Amtsgericht Bad Freienwalde is a small court in the Brandenburg district of Märkisch-Oderland, close to the Polish border, under the jurisdiction of the Frankfurt (Oder) district attorney's office. The city of Bad Freienwalde has a population of 13,000 with an unemployment rate of 12%. Overall, 99 cases of theft, fraud, and forgery were tried in this court during 2003 and 2004. Of the 99 cases, 15 were excluded because the major charge was none of the offenses under consideration, juvenile law was applied, or the case did not lead to a conviction. Of the remaining 84 cases, 82% were tried by the same judge. The 84 cases were prosecuted by 45 different attorneys, with a maximum of 5 cases by the same attorney. In 49 cases the main charge was theft, in 20 it was fraud, and in 15, forgery. On average, property worth €2,497 was violated (SD = €8,826). The offenders were predominantly German males; 69 were men and 15 women. Eight offenders did not have German citizenship. The mean age of the offenders was 36 years, ranging from 20 to 80 years. About half of the offenders were sentenced to a fine (M = 48 days; SD = 27) and half to a prison term (M = 8 months; SD = 6).

Model selection. The main goal of our study was to identify the cognitive process underlying sentencing and to determine if a cognitive model of sentencing could predict the magnitude of a sentence. For this purpose we tested which theory describes the sentencing process better: legal policy as modeled by a multiple linear regression model (e.g., Cooksey, 1996) or the mapping model, a cognitive theory for quantitative estimation (Helversen & Rieskamp, in press).
Testing these two models on the data of real cases raised two crucial methodological problems. First, real cases involve an enormous number of factors that could potentially predict the sentence. In our cases we recorded 22 factors that could influence the sentencing decision. How can we find out which factors have a substantial effect? One common technique when using regression models for identifying important factors relies on significance tests. In these models the estimated impact of a factor can depend on the other factors included in the regression equation, so that often procedures are performed in which factors are stepwise included in or excluded from the regression equation (cf. Cohen et al., 2003). However, when considering a larger number of factors this procedure is unsatisfactory, because factors that were added to the equation at the beginning of a stepwise forward procedure might not have been added had other factors already been included. Therefore, different statistical procedures applied to the same original set of factors often lead to inconsistent results (i.e., different regression equations), which can lead to very different conclusions.

The second methodological problem we faced concerns the models' complexity, that is, their flexibility in describing different results. In particular, we were interested in testing the regression model against the mapping model; these models differ in their number of free parameters and therefore in their potential to describe different processes. Therefore, we sought a methodology that would take the models' complexity into account when testing them against each other.

To tackle these two methodological problems we followed a Bayesian approach, specifically the Bayesian model averaging (BMA) method (see Raftery, 1995, and also Raftery, Madigan, & Hoeting, 1997). This Bayesian method identifies the model or the models that are most probable given the data.
Furthermore, BMA provides reliable estimates of the predictors' influence on the dependent variable and it allows comparison of models of different complexities by taking the models' free parameters into account. BMA was proposed especially to examine the uncertainty of parameter estimates and for model selection. To identify the most probable models, the Bayesian method calculates the posterior probability of a model given the observed data. Pragmatically this is performed by determining the Bayesian information criterion (BIC), which approximates the so-called Bayes factor (Raftery, 1995; Schwarz, 1978). The method additionally allows one to specify the probability that a factor will have an impact on the dependent variable: Taking model uncertainty fully into account, the average amount of evidence speaking for an effect of a factor is determined by summing the posterior probabilities of all models that include this factor (for details see Appendix B).

The most reliable method for model selection, according to Raftery (1995), is to construct all possible models that can be built with the available factors and then select the models with the highest posterior probability given the data. However, including all candidate predictor variables would result in an enormous number of possible models, as 15 predictor variables already amount to 32,768 models. Thus we reduced the number of factors by first including all factors that substantially correlated with the dependent variable (i.e., showed a value of r > .3) and then additionally adding factors such as confession or remorse that were not necessarily correlated with sentence magnitude but are of special theoretical importance, because they frequently appear as mitigating reasons in the rationale of the verdict. For the fines we included 11 factors and for the incarceration decisions 9 factors (see Tables 13 and 14).
Next we calculated the BIC′ values for all models resulting from all possible combinations of the factors. This amounted to 2,048 models in the case of fining decisions and 512 models in the case of incarceration decisions for each model class, the mapping models and the regression models. We first ran the analysis separately for the two model classes, to investigate if the factors identified by the two types of models would differ. Then we included all models in the comparison to identify which class of models most probably underlies the decision behavior given the observed data.

For all of the models we calculated the BIC′ value based on the amount of variance explained (R²) as a measure of goodness of fit of the model and the number of free parameters (see Raftery, 1995). Details on the computation and the equations can be found in Appendix B. The BIC′ value gives the odds with which a specific model is preferred to a baseline. In the case of regression, usually a null model is chosen as a baseline model. The null model only includes an intercept (i.e., estimates the mean criterion value for all objects) and no predictor (i.e., no free parameter). It explains none of the variance in the data, and its BIC′ is zero (see Equation 2). The BIC′_k of a specific model M_k is given by

BIC′_k = n ln(1 − R²_k) + q_k ln(n),   (2)

where R²_k is the value of R² for model M_k, q_k is the number of free parameters of that model, and n is the number of data points. If the BIC′_k value is positive, the null model is preferred, while a negative BIC′_k value provides evidence for the model M_k under consideration. The lower the BIC′_k value, the more the model is supported by the data.

For the regression models a least squares regression was run with the factors as predictor variables and the sentence recommendation of the prosecution as the dependent variable. For the mapping models the category borders and the typical sentence for each category were estimated from the data.
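As a concrete illustration, the BIC′ values and the resulting BMA quantities can be computed as below, following Raftery's (1995) approximation of the Bayes factor by exp(−BIC′/2) under equal model priors. The function names are ours; this is a sketch, not the original analysis code:

```python
import math

def bic_prime(r2, q, n):
    """BIC' of a model relative to the null model (Raftery, 1995):
    n * ln(1 - R^2) + q * ln(n). Negative values favor the model."""
    return n * math.log(1.0 - r2) + q * math.log(n)

def posterior_probs(bic_values):
    """Posterior model probabilities under equal priors, using the
    exp(-BIC'/2) approximation to the Bayes factor."""
    weights = [math.exp(-b / 2.0) for b in bic_values]
    total = sum(weights)
    return [w / total for w in weights]

def factor_probability(posteriors, included):
    """Evidence that a factor matters: the summed posterior probability
    of all models that include the factor (Eq. B3 in Appendix B)."""
    return sum(p for p, inc in zip(posteriors, included) if inc)
```

For instance, if a model explaining 74% of the variance in n = 44 cases is counted as having one free parameter (an assumption on our part), bic_prime(0.74, 1, 44) comes out at roughly -55, and the null model always obtains a BIC′ of exactly zero.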
First the perceived factor score was calculated based on range frequency theory with one free parameter for all factors, capturing the relative importance of range and frequency information (for details see Appendix A). Then case seriousness was computed by averaging the factor scores over all factors, the minimum and maximum case seriousness were determined, and the range was divided into seven equally sized categories. For each category the typical sentence was calculated by taking the median of all cases that fell into this category. The typical sentence was then assigned to all cases falling into that category, and the amount of variance in the sentence recommendation of the prosecution explained (R²) was computed. Based on the BIC′ value we calculated the posterior probability of each model, assuming equal priors for all models. Additionally we computed the probability of each factor being included and an approximation of a Bayesian point estimator of beta weights and standard errors for each factor (see Appendix B).

Overall, the more parsimonious mapping models offered the more probable description of the data, but both model types identified the same factors as influencing sentencing. Although sentencing decisions for fines and prison terms were both based on the factors net worth of property and number of charges, the role of mitigating and aggravating evidence differed between the two sentence types. Fining decisions were more influenced by aggravating evidence and the number of prior convictions, while incarceration length was more affected by mitigating evidence (II). Neither for fines nor for incarceration decisions did extralegal factors such as sex, age, or nationality play a role. In the following we report the results of the analysis for fines and incarceration separately.

Magnitude of fines. Overall, the verdict could be almost perfectly predicted by the recommendation of the prosecution (r = .99), as illustrated in Figure 8.
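The mapping-model fitting procedure described above (averaging factor scores into a seriousness value, cutting the seriousness range into seven equal categories, and storing the median sentence per category) can be sketched as follows. This is a simplified illustration under our own naming: the range-frequency transformation of the raw factor values (Appendix A) is omitted, and the factor scores are assumed to be already scaled to a common range:

```python
import statistics

def fit_mapping_model(factor_scores, sentences, n_cats=7):
    """Fit a simplified mapping model and return a prediction function.

    Seriousness of a case is the mean of its (pre-scaled) factor scores;
    the seriousness range is cut into n_cats equal categories, and each
    category's typical sentence is the median sentence of its cases."""
    serious = [statistics.mean(fs) for fs in factor_scores]
    lo, hi = min(serious), max(serious)
    width = (hi - lo) / n_cats or 1.0  # guard against a zero-width range

    def category(s):
        return min(int((s - lo) / width), n_cats - 1)

    bins = {}
    for s, y in zip(serious, sentences):
        bins.setdefault(category(s), []).append(y)
    typical = {c: statistics.median(ys) for c, ys in bins.items()}

    def predict(fs):
        c = category(statistics.mean(fs))
        # fall back to the nearest occupied category if this bin is empty
        nearest = min(typical, key=lambda k: abs(k - c))
        return typical[nearest]

    return predict
```

A new case is then judged by locating its seriousness category and returning that category's typical sentence.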
Figure 8: Scatter plot of the sentence recommendation for fines by the prosecution and the corresponding verdict by the judge. The magnitude of the fines is given in number of days a payment has to be made.

Accordingly, we concentrated on the recommendation of the prosecution as the more interesting dependent variable. The recommended sentence, in turn, correlated significantly with a number of offense and offender characteristics (see Table 13). As expected, the presence of a confession and mitigating evidence II, coding low worth of property violated and no prior record, correlated negatively with the recommended sentence. The net worth of the property violated, the number of prior convictions, the number of charges, and the amount of aggravating evidence correlated positively with the magnitude of the sentence. All other factors did not correlate significantly with sentence length or showed no variation in the sample.

Table 13. Fines (no. of days)

│ Factor │ Pearson correlation (p value) │ Mapping model 1 │ Mapping model 2 │ Mapping model 3 │ Mapping model 4 │ Mapping model 5 │ Probability │ Beta │ SD │
│ Age │ .34 (.02) │ ○ │ ○ │ ○ │ ○ │ ○ │ .08 │ .03 │ .11 │
│ No. of prior convictions │ .32 (.03) │ ● │ ○ │ ● │ ○ │ ○ │ .56 │ .19 │ .10 │
│ No. of charges │ .36 (.02) │ ● │ ● │ ○ │ ● │ ○ │ .64 │ .16 │ .17 │
│ Net worth of property │ .46 (.001) │ ● │ ● │ ● │ ● │ ● │ .97 │ .36 │ .11 │
│ Confession │ -.50 (.001) │ ○ │ ○ │ ○ │ ○ │ ○ │ .15 │ -.08 │ .19 │
│ Penalty order │ .50 (.001) │ ● │ ● │ ● │ ● │ ● │ .91 │ .53 │ .14 │
│ Summary penalty │ .53 (.001) │ ○ │ ○ │ ● │ ○ │ ● │ .46 │ .24 │ .12 │
│ Aggravating evidence │ .39 (.01) │ ● │ ○ │ ● │ ○ │ ○ │ .59 │ .32 │ .13 │
│ Mitigating evidence II │ -.48 (.001) │ ○ │ ● │ ○ │ ○ │ ○ │ .25 │ .15 │ .12 │
│ Remorse │ -.20 (.20) │ ○ │ ○ │ ○ │ ● │ ○ │ .13 │ -.16 │ .11 │
│ Nationality │ .32 (.03) │ ○ │ ○ │ ○ │ ○ │ ○ │ .14 │ .08 │ .12 │
│ PMP │ │ .15 │ .13 │ .09 │ .07 │ .06 │ │ │ │
│ BIC′ │ │ -56 │ -55 │ -55 │ -54 │ -54 │ │ │ │
│ R² │ │ .74 │ .74 │ .73 │ .73 │ .73 │ │ │ │

Note: N = 44. Probability denotes the probability that the factor had an effect and is given by Equation B3 (Appendix B). BIC′ denotes the Bayesian information criterion; PMP denotes the posterior model probability. An open circle (○) denotes that a factor was not included in the model; a solid circle (●) denotes that a factor was included. For the analyses, the factors confession, remorse, and mitigating evidence II were recoded so that they correlated positively with sentence magnitude. The five best models all belonged to the class of mapping models.

Modeling—critical factors. Model analysis showed that a few factors are sufficient to describe the data. BMA for the two model classes gave a similar picture of which factors influence sentencing. Altogether 11 factors were considered (see Table 13), resulting in 2,048 possible models for each model class. Thus the prior probability of a model was about .0005. Of the 2,048 linear regression models under evaluation, 95% had a posterior probability below .002, with the two best models reaching a posterior probability of 5% and 6% and explaining 64% and 68% of the variance in the sentencing recommendations, respectively. There was strong evidence that the factors net worth of property, penalty order, and aggravating evidence affected sentence recommendation.
Additionally there was weak evidence for the factors summary penalty and number of prior convictions. The estimated beta weights can be found in Table 13. Applying the BMA method to the class of mapping models similarly led to discarding a large proportion of models: 96% had a posterior probability below .001. However, the two best models reached a posterior probability of 15% and 13%, respectively. They both explained a much higher amount of variance (74%) in the sentence recommendations than the best regression models. Similar to the regression models, there was strong evidence for the factors net worth of property and penalty order. The factors aggravating evidence, number of prior convictions, and number of charges were supported by some evidence. In contrast to the regression models, the factor summary penalty received less support.

In sum, the BMA analyses of the two model classes showed that the choice of model class had only a slight influence on which factors were identified as important. In both model classes, the most important factors were net worth of property, whether the sentence was recommended by a penalty order or after a trial, and the presence of aggravating evidence. Additionally, the number of prior convictions, the number of charges, and whether the sentence was a summary penalty played a role, while age, nationality, and a confession or other mitigating evidence did not influence the sentence recommendation. This is clearly inconsistent with the legal requirement that all legally relevant factors be taken into account. Particularly surprising is that confession and remorse were not considered, as they are usually mentioned as extenuating factors in the rationale for the verdict.

Model comparison. After examining which factors influenced the sentencing decision in cases punished with a fine, we next tested which type of model was better suited to explain the decision process underlying fining, mapping or regression.
For this comparison we included all models and calculated the posterior probabilities, assuming that all models have the same prior probability. This resulted in a comparison of 4,096 models with a prior probability of .0002 each. Over all models, 17 reached a posterior probability above .01, summing to a joint probability of .74, compared with a joint probability of .26 for the remaining 4,079 models. All of them belonged to the class of mapping models (see Table 13 for the five best models). Overall, the mapping models reached a much higher posterior probability: The joint posterior probability of all mapping models was .99999, compared to .00001 for the regression models. This is illustrated by Figure 9, showing the posterior probabilities of the best 1,500 models; the majority clearly belong to the mapping model class.

Figure 9: The posterior model probability of the best 1,500 of all 4,096 models to describe the fining process, differentiated by model class. Of the 1,500 best models, 99% belong to the class of mapping models and 1% to the class of regression models.

Incarceration length. Similar to the fines, the recommendation of the prosecution was the best predictor of sentence length (r = .95). Accordingly, we again focused on the sentence recommendation of the prosecution as the main dependent variable. Altogether we considered nine factors. Seven offense or offender characteristics correlated above .3 with the length of prison sentence. As expected, the factors net worth of property violated, summary penalty, aggravating evidence, number of charges, and number of offenders correlated positively with the sentence length, while the second mitigating factor (coding a low worth of property violated and no prior record) correlated negatively with recommended sentence length (see Table 14). The factor penalty order was not applicable, as a sentence by penalty order is not allowed for prison sentences.
Somewhat unexpectedly, the presence of a confession and special circumstances leading to diminished capacity correlated positively with sentence length. This effect, however, is probably due to the comparatively serious nature of these cases and does not reflect a negative evaluation of these factors for sentencing. Although remorse did not correlate with sentence length, we additionally included it in the analysis.

Table 14. Incarceration (no. of months)

│ Factor │ Pearson correlation (p value) │ Mapping model 1 │ Mapping model 2 │ Mapping model 3 │ Mapping model 4 │ Mapping model 5 │ Probability │ Beta │ SD │
│ No. of charges │ .40 (.01) │ ○ │ ○ │ ● │ ● │ ● │ .32 │ .31 │ .01 │
│ Diminished capacity │ .41 (.01) │ ○ │ ● │ ● │ ● │ ● │ .45 │ .41 │ .01 │
│ Net worth of property │ .62 (.001) │ ● │ ● │ ● │ ● │ ○ │ .91 │ .57 │ .01 │
│ Summary penalty │ .65 (.001) │ ● │ ○ │ ○ │ ○ │ ○ │ .61 │ .20 │ .02 │
│ Aggravating evidence │ .58 (.001) │ ● │ ○ │ ○ │ ○ │ ● │ .63 │ -.10 │ .02 │
│ Mitigating evidence II │ -.41 (.01) │ ● │ ● │ ○ │ ● │ ○ │ .78 │ .24 │ .01 │
│ No. of offenders │ .31 (.05) │ ○ │ ○ │ ○ │ ○ │ ○ │ .04 │ -.01 │ .01 │
│ Confession │ .29 (.07) │ ○ │ ○ │ ○ │ ○ │ ○ │ .03 │ -.09 │ .01 │
│ Remorse │ -.01 (.98) │ ● │ ○ │ ○ │ ○ │ ○ │ .58 │ .07 │ .01 │
│ PMP │ │ .53 │ .10 │ .10 │ .07 │ .06 │ │ │ │
│ BIC′ │ │ -49 │ -46 │ -46 │ -45 │ -45 │ │ │ │
│ R² │ │ .82 │ .76 │ .76 │ .78 │ .75 │ │ │ │

Note: N = 40. Probability denotes the probability that the factor has an effect and is given by Equation B3 (Appendix B). BIC′ denotes the Bayesian information criterion; PMP denotes the posterior model probability. A solid circle (●) denotes that a factor was included in the model; an open circle (○) denotes that a factor was not included. For the analyses, the factors mitigating evidence II and remorse were recoded so that they correlated positively with sentence magnitude.
The five best models all belonged to the class of mapping models.

Modeling—critical factors. Similarly to the analysis of the fining decisions, we used the BMA method to determine the factors with the highest probability of influencing sentence length. The factors included in the models were number of charges, net worth of property, diminished capacity, mitigating (II) and aggravating evidence, confession, summary penalty, number of offenders, and remorse, resulting in 512 models per model class with a prior probability of .002 each. Five regression models reached a posterior probability above .05, with the best model clearly superior to the others with a probability of .28, compared to the second best model with a probability of .11. The best model explained 75% of the variance in the recommended incarceration length and reached a BIC′ value of -41. There was strong evidence for the effect of the factors number of charges, net worth of property, and diminished capacity. Additionally, there was some support that mitigating evidence II influenced sentencing recommendations for prison terms. The corresponding beta weights can be found in Table 14.

For the mapping models, five models likewise reached a probability above .05. The best model reached a probability of .55 and explained 82% of the variance in sentence length, much more than the best regression model or the second best mapping model, which had a posterior probability of .10 and an R² of .76. However, the factors supported by the mapping models differed from the factors supported by the regression models. Similar to the regression models, net worth of property received strong support, and mitigating evidence II some support. However, there was hardly any evidence for number of charges, and diminished capacity was somewhat less important. Instead, there was additional evidence for the factors summary penalty, aggravating evidence, and remorse.
In sum, the analyses showed consistently that—despite the stipulations of the law—only a few factors were necessary to describe sentencing. However, which factors were considered important differed between the two model classes. Although both model classes supported the factors net worth of property violated, mitigating evidence II, and diminished capacity, applying the regression models provided evidence for the factor number of charges, whereas the mapping models indicated the factors summary penalty, aggravating evidence, and remorse.

Model comparison. To find out which class of models was better suited to explain incarceration decisions, we again entered all models in a joint comparison. The final analysis comparing 1,024 models from both model classes supported the mapping model as the superior type of model. The best five models belonged to this model class (see Table 14). The posterior probabilities of these models added up to a joint probability of .86, compared with a probability of .14 for the remaining 1,019 models. Again, the class of mapping models was more strongly supported than the regression models. The posterior probability of all mapping models added up to .96, compared to .04 for the regression models. This is illustrated by Figure 10, depicting the posterior probabilities of the best 100 models.

Figure 10: The posterior model probability of the best 100 models describing the incarceration decisions, differentiated by model class. Of the 100 best models, 65% belong to the class of mapping models and 35% to the class of regression models.

The joint model comparison also supported the evaluation of the factors' importance by the mapping model (see Table 14). There was strong evidence for the factors net worth of property and mitigating evidence (II), and some support for summary penalty, aggravating evidence, remorse, and diminished capacity.
There are two ways in which sentencing decisions can deviate from the law: First, the decision can be based on a different set of factors than required by the law; second, the way these factors lead to a sentence can be inconsistent with the prescribed legal policy. The present article examined both routes by testing two different models of decision making and by identifying the crucial factors influencing sentencing, following a Bayesian approach. The model comparison test allowed us to identify which type of model—one consistent with the legal theory or one derived from cognitive psychology—captured the sentencing decisions best. Furthermore, we were able to identify the factors that were crucial for each model class to predict the sentencing.

Our results show that the prosecutors neither considered all factors required by law nor exhibited decision processes consistent with the policy assumed by the legal literature. Instead, the decisions of the prosecutors were best described by a heuristic for quantitative estimation, the mapping model (Helversen & Rieskamp, in press). In the following we will first discuss the results on which factors predicted sentence recommendations and differences between fines and incarceration sentences. Then we will turn to the model comparison and the significance of the BMA method for sentencing research. Finally we will discuss limitations of the current study.

Predictors of Sentencing Decisions

Prosecutors clearly deviated from the law concerning the factors that had an impact on sentencing length. According to the law, all legally relevant factors in the analysis should have affected the sentence recommendation. However, in both types of sentencing decisions only a few factors were sufficient to predict the prosecutors' recommendations. It is particularly surprising that factors such as confession or remorse did not always lead to lower sentences, as they are usually stated as mitigating factors in the rationale for the verdict.
The results are, however, in line with psychological research on judgment and decision making, which has repeatedly shown that humans often lack insight into their judgment policies (Brehmer & Brehmer, 1988) and tend to base their decisions on only a few factors (Brehmer, 1994). Interestingly, the factors influencing fining and incarceration decisions varied substantially. For one, the magnitude of the fine was higher if the sentence was imposed via penalty order than if imposed by trial, whereas incarceration length was influenced by diminished capacity. However, as there were no cases of diminished capacity in the sample receiving fines, and sentencing by penalty order is not allowed for incarceration sentences, these differential effects are not very surprising. More interestingly, fines were influenced by prior record and aggravating evidence, but not by mitigating evidence. This suggests that the prosecution, deciding which factors were relevant, might have relied on an image of a "typical" case. Factors that indicated deviation from the norm were considered for the sentence, while factors that constituted the "normal" case were not (Mösl, 1983; Tata, 1997). Fines are usually imposed in less serious cases. Thus, in cases punished by a fine the prosecution might have already "used up" the influence of any mitigating information by sparing the offender an incarceration sentence, while in cases punished with incarceration the mitigating information was taken into account, reducing sentence length.

Model Comparison

In both types of sentencing decisions, our analyses clearly illustrated that the cognitively derived mapping model provided a much better explanation for the sentencing process than the regression model that is consistent with legal regulations. For the fining decisions, nearly every mapping model was more probable than any regression model. Even in the incarceration decisions the five best models belonged to the mapping model class.
These results are in line with those of Helversen and Rieskamp (in press), who demonstrated the success of the mapping model in comparison to the regression model in a laboratory estimation task. Because the regression model was outperformed by the mapping model, this result suggests that prosecutors do not weigh each factor individually and sum up the weighted evidence, as one would expect from standard legal procedure. Instead, the cognitive process underlying sentencing decisions was more in line with the mapping model. Therefore, when prosecutors make sentencing decisions they apparently use the evidence provided to group cases of similar seriousness together, where the seriousness of a case depends on its average value on the factors considered relevant. A typical sentence is then stored for each category and used to evaluate a present case. The finding that cognitive models are more suitable to predict legal decision making is consistent with previous findings, indicating that legal decision-making processes often do not concur with the procedures assumed by the law (e.g., Dhami & Ayton, 2001; Hertwig, 2006; Van Duyne, 1987). However, although our study illustrates that a cognitive model was more suitable to predict sentencing than a model consistent with standard legal procedure, we emphasize that following the mapping model to make sentencing decisions does not necessarily represent a case of biased decision making. On the contrary, Helversen and Rieskamp (in press) showed that in situations in which the criterion is nonlinearly distributed, the mapping model was more accurate in predicting the criterion than a regression model. Thus, in sentencing situations in which the distribution of the cases' seriousness is highly skewed, the mapping model might be, in fact, more suitable than a regression model for making sentencing decisions.
Particularly in low-level crimes, where legal decision makers operate under severe time constraints, making sentencing decisions according to the mapping model could be an adaptive response. Nevertheless, making a decision according to the mapping model compared to a weighted additive model will often lead to different sentences. This raises the question of which process sentencing should follow. It also resonates with a discussion in the German legal literature in the 1980s. Instigated by a decision of the German Federal Court of Justice, the relevance of "normal" and "average" cases as reference points for sentencing was discussed (see Bruns, 1988; Mösl, 1981, 1983; Theune, 1985a, 1985b). Likewise, in England similarity-based decision aids for sentencing have been under discussion (e.g., Tata, 1998). In principle, because the German penal code does not regulate how the relevant factors should be integrated, processes as assumed by the mapping model might be legally justifiable. Although this is ultimately a legal question, psychological insights into the cognitive processes underlying legal decisions could inform a legal discussion on sentencing laws and might provide valuable input for the development of institutions.

Bayesian Approach

The way we analyzed the data and tested the two competing models differs substantially from the standard approach taken in policy-capturing research (e.g., Cooksey, 1996). According to the standard approach, one single regression model is estimated by applying a specific statistical test procedure. This approach has the disadvantage that it can lead to rather different results and conclusions depending on the statistical procedure chosen. Moreover, the interpretation of the influence of single factors is rather complicated, because the influence depends on the other factors included in the equation.
In contrast, the Bayesian approach we followed led us to consider all possible models that could be constructed with the available predictors, and for each model the posterior probability was estimated. The two competing model classes were tested against each other by considering all models of each class and not simply one best model. This model comparison test provided very strong empirical support for the mapping model. Moreover, by considering which factors were included in models with large posterior probabilities, it was possible to draw more reliable conclusions about the factors that are important for sentencing decisions.

Limitations of the Study

Our study focused on one single German court. This naturally raises the question of how well the results generalize. Many studies have shown the importance of location and the legal culture of a jurisdictional district (e.g., Johnson, 2006; Kautt, 2002; Langer, 1994). In particular, which factors influence sentence magnitude could differ between districts, and thus our results concerning the importance of factors should be treated with caution. Furthermore, our results were based on a rather small sample, which could reduce the generalizability of the results even within the jurisdictional district. Nevertheless, for the restricted data set we could illustrate the benefits of a cognitively inspired approach to legal decision making. Future research is necessary to test whether these results can be replicated with larger samples for a wider range of jurisdictional districts. Although this needs to be tested, we do not have a reason to assume that prosecutors from Brandenburg differ in their cognitive processes from prosecutors in other parts of Germany. If anything, a higher case load and more time pressure should be expected. Even when generalizing outside of Germany, similar results might be anticipated, given that the general features of the task remain the same.
That is, as long as the prosecutor or the judge has to integrate several factors to determine a final sentence, the mapping model could offer a valid description of the process. However, legal systems where sentencing is strictly regulated by sentencing guidelines, as, for instance, in the United States, could provide exceptions. Thus, further studies investigating the generalizability of the utility of the mapping model to explain sentencing are needed. In a similar vein, it is important to note that this study focused on low-level offenses. It is an open question whether the same cognitive processes underlie the sentencing of more severe cases, such as capital crimes. It appears reasonable that for more severe cases more factors are taken into account for sentencing decisions, which therefore might be more in line with legal policy.

Conclusion and Outlook

This paper provides evidence that in sentencing, cognitive models are necessary to understand the decision process. Our results suggest that the sentence recommendations of prosecutors were not consistent with the requirements of the law; instead, sentence recommendations were well described by the mapping model, a cognitive theory for quantitative estimation (Helversen & Rieskamp, in press). This study joins a growing body of research questioning the ability of decision makers to comply with legal regulations and emphasizes the importance of understanding cognitive processes for the development of institutions.

Appendix A Range Frequency Theory

According to range frequency theory (Parducci, 1974), human judgments of magnitude and size are context dependent; that is, they depend on the range of the stimulus values as well as on the frequency with which a stimulus value appears. The judged magnitude J of a stimulus i is given by the weighted sum of the range value R and the frequency value F (cf. Parducci, 1974, p. 209):

(A1) J_i = w·R_i + (1 − w)·F_i, with 0 < w < 1.
The range value R_i represents the proportion of the current range below the current stimulus S_i:

(A2) R_i = (S_i − S_min)/(S_max − S_min),

where S_i denotes the current stimulus value and S_min and S_max are, respectively, the smallest and the largest stimulus in the set. The frequency value F_i represents the proportion of all current values below the current stimulus:

(A3) F_i = (r_i − 1)/(N − 1),

where F_i represents the frequency value of the stimulus i, r_i is the rank of stimulus i, and N the number of stimuli in the set.

Appendix B Bayesian Model Averaging

The Bayesian information criterion (BIC) gives the odds with which a specific model is preferred to a baseline model. To calculate a model's BIC′ value we compared it with the null model (a baseline model with no independent variables), following Raftery (1995, Equation 26, p. 135):

(B1) BIC′_k = n·log(1 − R²_k) + q_k·log(n),

where R²_k is the value of R² for model M_k, q_k is the number of free parameters for that model, and n is the number of data points. BIC′_k gives the BIC value for the null model compared to the model M_k. The BIC′ of the null model is zero. Accordingly, if BIC′_k is positive the null model is preferred to the model M_k. However, if BIC′_k is negative, model M_k is preferred to the null model, and the smaller the BIC′_k, the more M_k is supported by the data. The posterior probability of a model is defined as:

(B2) p(M_k | D) = exp(−BIC′_k/2) / Σ_{j∈K} exp(−BIC′_j/2)

(cf. Raftery, 1995, Equation 35, p. 145), where p gives the probability of model M_k given the data D in comparison with all models from the set K, assuming an equal prior probability of 1/K for all models. The posterior probability pr that a factor B has an effect (B ≠ 0) is given by the sum of the posterior probabilities of all models that include B, here referred to as model set A:

(B3) pr(B ≠ 0 | D) = Σ_{M_k∈A} p(M_k | D)

(cf. Raftery, 1995, Equation 36, p. 145). The beta weight and the standard error of the beta weights can be estimated by an approximation to a Bayesian point estimator and an analogue of the standard error.
Approximations are given by (cf. Raftery, 1995, Equations 38 and 39, p. 146):

(B4) E(β | D) ≈ Σ_{M_k∈A} p(M_k | D)·β̂_k,

where E denotes the expected value of the beta weight β, and β̂_k is the maximum likelihood estimator of β under model M_k. Respectively, the standard error can be approximated by:

(B5) SE(β | D)² ≈ Σ_{M_k∈A} p(M_k | D)·(se_k² + β̂_k²) − E(β | D)²,

where se_k is the standard error of β̂_k under model M_k (cf. Raftery, 1995, p. 146).

Bettina von Helversen and Jörg Rieskamp, Max Planck Institute for Human Development, Berlin, Germany. We would like to thank attorney M. Neff and the prosecution authority of Eberswalde for providing us access to the trial records. We are very grateful to Christoph Engel, Stefan Bechthold, Stefan Tontrupp, Andreas van den Eikel, and Tobias Lubitz for their help and advice in devising the categorization system. We gratefully acknowledge Patrizia Ianiro, Daria Antonenko, and Cornelia Büchling's commitment and helpful ideas in coding and analyzing the data. We would like to thank Anita Todd for editing a draft of this manuscript. This work has been supported by a doctoral fellowship of the International Max Planck Research School LIFE to the first author. Correspondence concerning this article should be addressed to Bettina von Helversen, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany. Phone: (+49 30) 82406 699. Fax: (+49 30) 82406 394. Email: vhelvers@mpib-berlin.mpg.de

1. Besides the factors stated in § 46, German law allows sentence adjustments to achieve general prevention as well as specific prevention objectives (Meier, 2001; Schäfer, 2001). Furthermore, the sentencing range can be lowered if mitigating reasons as specified in articles 21, 23, and 49 exist. As our sample did not include mitigated sentencing ranges according to these articles, we relied on the sentencing ranges as specified for common and aggravated cases of theft (§242 ff.), fraud (§263), and forgery (§267). 2.
To simplify the statistical analysis we inverted all factors that were negatively correlated with sentence magnitude, so that after inversion all factors were positively correlated with sentence magnitude. Please note that this is only a statistical simplification; alternatively, the difference between the mean score on aggravating factors and the mean score on mitigating factors could be used.

© The compilation and presentation of the content of this publication as well as its electronic processing are protected by copyright. Any use not expressly permitted by copyright law requires prior consent. This applies in particular to reproduction, adaptation, storage, and processing in electronic systems. DiML DTD Version 4.0, certified document server of Humboldt-Universität zu Berlin. HTML version created: 07.02.2008.
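As an illustration, the range-frequency computation of Appendix A (Equations A1–A3) can be sketched in a few lines of code. The stimulus values and the weight w = 0.5 below are hypothetical, chosen only to show the mechanics; the sketch assumes distinct stimulus values.

```python
def range_frequency_judgment(stimuli, s, w=0.5):
    """Judged magnitude J of stimulus s under range frequency theory.

    J = w*R + (1 - w)*F, where R is the range value (A2) and F the
    frequency value (A3); `stimuli` is the full context set containing s.
    Assumes distinct stimulus values (so the rank of s is unambiguous).
    """
    s_min, s_max = min(stimuli), max(stimuli)
    r_value = (s - s_min) / (s_max - s_min)        # (A2) position in the range
    rank = sorted(stimuli).index(s) + 1            # rank of s within the set
    f_value = (rank - 1) / (len(stimuli) - 1)      # (A3) proportion of values below s
    return w * r_value + (1 - w) * f_value         # (A1) weighted combination

# With uniformly spaced stimuli, range and frequency values coincide:
print(range_frequency_judgment([1, 2, 3, 4, 5], 3))   # 0.5
# With a skewed set they diverge, pulling the judgment toward the frequency value:
print(range_frequency_judgment([1, 2, 3, 10], 3))
```

With the skewed context [1, 2, 3, 10], the stimulus 3 sits low in the range (R = 2/9) but high in rank (F = 2/3), so the judged magnitude lands between the two, which is the context effect the theory is built to capture.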
Adding/Subtracting dB and dBm

A few things about working with dB and dBm.
• Adding dB values is the same as multiplying with regular numbers. So if you add 10dB to a decibel value it is the same as multiplying a regular number by 10.
• Subtracting dB values is the same as dividing with regular numbers.
• It is OK to add dB values to an initial dBm value. This is the same as starting with an input power level and adding amplification or subtracting attenuation from that power level. The final answer will be your output power level in dBm.

For example, take an input power level of 10dBm to which we add 20dB of amplification. The result is an output power of 30dBm. This is the same as starting with 10mW of input power and multiplying that by a factor of 100, giving an output power of 1,000 mW.
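The arithmetic above is easy to encode; here is a small sketch (the function names are mine, not from the article):

```python
import math

def dbm_to_mw(dbm):
    """Convert an absolute power level in dBm to milliwatts."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert a power in milliwatts to dBm (referenced to 1 mW)."""
    return 10 * math.log10(mw)

# Adding dB to a dBm level multiplies the underlying power:
input_dbm = 10                 # 10 dBm = 10 mW
output_dbm = input_dbm + 20    # add 20 dB of amplification
print(output_dbm, dbm_to_mw(output_dbm))  # 30 dBm = 1000 mW
```

The example reproduces the worked case: 10dBm + 20dB = 30dBm, and 30dBm converts back to 1,000 mW, i.e. the original 10mW multiplied by 100.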
Norwood, NJ Math Tutor Find a Norwood, NJ Math Tutor ...Throughout his career Ken has been particularly good at helping people understand basic principles in chemistry, physics and math and how they are and can be applied to everyday life. Teaching and communications skills were critical in reporting research results, selling products and technology ... 12 Subjects: including trigonometry, differential equations, precalculus, algebra 1 ...Many of my math courses are posted on Youtube by some colleges to help students learn individually. Also, because of my pedagogical tact some people asked me to help them learn and speak French which is my first language. I do it correctly such that learners can speak French perfectly after only three months. 11 Subjects: including SAT math, GMAT, probability, algebra 1 ...Even more exciting is watching that light-bulb go on when I am tutoring. I have experience tutoring students of all ages, (elementary through graduate school) in many subject areas - although my real passion is math. I have a BA in statistics from Harvard and will be starting nursing school shortly. 18 Subjects: including algebra 1, algebra 2, biology, chemistry ...Recently, most of my clients have been teachers. Many of them have struggled with a math course at some point, and my tutoring assistance helped them greatly towards earning their Master's Degree. I have shown proficiency in math since my youth. 22 Subjects: including linear algebra, nutrition, ACT Science, algebra 1 Thank you for taking the time to look into my profile. I am currently a third grade teacher. In the past four years, I have taught a plethora of subjects and levels including tenth grade math, and third, fourth, and fifth grade. 
14 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
Prof. Bryan Caplan
Econ 812
Week 8: Symmetric Information

I. Expected Utility Theory
A. How do people choose between gambles? In particular, what is the relationship between the value they put on having x with certainty versus having x with p<1?
B. Simplest theory: expected value maximization. People choose whatever option has the highest average monetary value.
1. Ex: You will be indifferent between ($1000 with p=.01 and $1 with p=.99) and $10.99 with p=1.
C. This is highly tractable, but also highly unsatisfactory. Would anyone here really prefer $1 billion with p=.001 to $1 million for sure?
D. This suggests a richer theory of choice under uncertainty, known as expected utility theory (aka von Neumann-Morgenstern expected utility theory). Intuition: instead of maximizing average wealth, let us suppose that people maximize expected utility.
E. Three-step procedure:
1. Assign numerical weights to various outcomes.
2. Linearly weight outcomes according to their probability.
3. Choose whatever gamble has the highest linearly weighted outcome.
F. Example: Suppose I have utility of wealth given by U=W^.5. I can either have a 50% chance of $10,000 and a 50% chance of $0, or $2000 with certainty. So my expected utility of the first gamble is .5*10,000^.5+.5*0^.5=50; my expected utility of the second gamble is 1*2000^.5=44.72. Given a choice, then, I would prefer the first option.
G. Note: Simple utility functions are invariant to any monotonic transformation. Expected utility functions are not. (Aside: they are invariant to any affine transformation.)
H. Some implications:
1. Compounding. Consumers are indifferent between a 50% chance of a 50% chance of x and a 25% chance of x.
2. Linearity in probabilities. If you value a 1% chance of something at $10, you value a 100% chance at exactly $1000.
3. This does NOT, however, mean that you value $1000 one hundred times as much as $10! It is only the probabilities that matter linearly.

II. Rational Expectations
A.
As explained in week 1, there are two different interpretations of probability: objective and subjective.
B. Subjective probability is much more generally applicable than objective probability.
C. Problem: Subjective probabilities have no necessary connection to reality! This hardly seems satisfactory. There is clearly some connection between the real world and what people believe about it.
D. The leading theoretical effort to formalize the link between subjective probabilities and the real world is known as "rational expectations" or RE.
E. Simple characterization: A person has RE if judgments are unbiased (mean error is zero) and mistakes are uncorrelated with "available" information.
F. Deeper characterization: A person has RE if his subjective probability distribution is identical to the objective probability distribution.
G. Standard modeling technique: everyone is unbiased; information or lack thereof just changes estimates' variance.
H. RE in no way rules out error; it does not assume that information or cognition is free.
I. Example #1: Attending graduate school. No one knows for sure how they will do. But RE says that on average you correctly estimate how well you will do in the program and what completion will do for you.
J. Example #2: Renting a movie. Until you see it, there is no way to know for sure if you will like it. But RE says that on average your prospective ratings equal your retrospective ratings. The same goes for your rankings conditional on e.g. movie genre, stars, directors, etc.
K. Example #3: Wittman on pork barrel spending.

III. Application: Testing for RE of Economic Beliefs
A. RE made its first big splash in macro. In the 1970's, there were many empirical tests performed on e.g. inflation forecasts to check for RE.
B. How would you go about this? Try regressing inflation forecasts on a constant and actual inflation: forecast = a + b*(actual inflation) + e. RE implies that a=0 and b=1.
C. One particularly interesting area to me: RE of beliefs about economics.
Most intro econ classes, it seems to me, try to correct students' pre-existing systematic misconceptions.
D. The Survey of Americans and Economists on the Economy asked economists and the general public identical questions about economics.
E. Natural test of RE: are the average beliefs of economists and the public identical? Run the regression belief = a + b*Econ + e, where Econ is a dummy variable =1 for economists and 0 otherwise. Does b=0?
F. Of course, this only tests for the public's RE if economists themselves have RE! Many critics of the economics profession claim that it is the economists who are biased, either because of self-interest or ideology.
G. These claims are however testable using the SAEE. Simply re-run the regression belief = a + b*Econ + e, controlling for income, job security, ideology, etc., and see if b falls to 0. (It doesn't.)

IV. Search Theory and Expectational Equilibria
A. The Arrow-Debreu interpretation of general equilibrium offers one way for economists to analyze economic uncertainty. But complete contingent claims markets do not seem very realistic.
B. Is there any other approach? Yes: there is an extremely general theory of economic action under uncertainty, known as "search theory."
C. Basic assumptions of search theory:
1. More time and effort spent "searching" increase your probability of successful discovery.
2. Searching ability differs between people.
3. RE. (This can however be relaxed.)
D. Main conclusion: People search so that the marginal cost of searching equals the expected marginal gain of searching.
1. Qualification: You need to adjust for a searcher's degree of risk-aversion.
E. The (endless) applications:
1. Doing R&D.
2. Hunting and fishing.
3. Prospecting for gold.
4. Looking for investment opportunities.
5. Searching for a job.
6. Dating.
7. Rational amnesia.
8. An economic theory of comedy.
F. What if people don't search much for a good price? Then sellers search for consumers.
1. A tale of Istanbul.
G. Who is overpaid/underpaid?
Look at who is investing more in search.
1. Head-hunters vs. pavement-hitters.
H. Main conclusion: If the economics of perfect information doesn't make sense, try search theory. It explains almost everything else.
I. Some economists, especially Austrians, resist search theory. Why? As far as I can tell, it just comes back to objections to probability theory, especially claims that probability theory cannot capture "radical uncertainty" or something along those lines.

V. Measures of Risk-Aversion
A. The difference between expected value maximization and expected utility maximization boils down to taste for risk.
B. Suppose you choose between $x with p=1 and $x/q with p=q. Simplest taxonomy:
1. If you are indifferent between the sure thing and the gamble, you are risk-neutral.
2. If you prefer the sure thing to the gamble, you are risk-averse.
3. If you prefer the gamble to the sure thing, you are risk-preferring.
C. Most economic models assume that actors are risk-averse (though firms are often modeled as risk-neutral).
D. Graphing: A risk-averse agent has a concave utility of wealth function. If you draw a line between any two points on the utility function, the utility function is always above that line. This indicates that a certain payoff of ax+(1-a)y is always preferred to the gamble (x with p=a, y with p=1-a).
E. Certainty equivalence: If you are indifferent between a gamble and x* with certainty, x* is that gamble's "certainty equivalent."
1. The risk premium, similarly, is the difference between a gamble's expected value and its certainty equivalent.
F. There are a number of different ways to quantify risk aversion. Probably the most common is with the coefficient of absolute risk aversion, which is equal to -u''/u'. The higher the coefficient, the more risk-averse you are.
G. Example: If u=w^.5, then u'=.5w^-.5 and u''=-.25w^-1.5, so the coefficient of absolute risk aversion is -u''/u' = .5/w. In contrast, if u=w, then u''=0 and the coefficient is 0, indicating that the latter function is risk neutral.
H.
Note that the coefficient of absolute risk aversion normally decreases with wealth. This captures the intuition that a millionaire worries less about betting $1 than someone on the edge of poverty.

VI. Demand for Insurance
A. A natural application of the preceding analysis is the demand for insurance.
B. Specifically, suppose a consumer with EU=w^.8 wants to insure his income, which is $1000 with probability .6 and $0 with probability .4. The insurance company offers i worth of insurance - which pays off if the client's uninsured income is $0 - at price .4xi. (If x=1, then the price is actuarially fair.)
C. Then the consumer has an EU problem to solve, maximizing wrt i: EU(i) = .6(1000 - .4xi)^.8 + .4(i - .4xi)^.8.
D. Differentiate and set equal to zero to solve: .6(.8)(-.4x)(1000 - .4xi)^-.2 + .4(.8)(1 - .4x)(i - .4xi)^-.2 = 0.
E. This simplifies to: .24x(1000 - .4xi)^-.2 = .4(1 - .4x)(i - .4xi)^-.2.
F. Taking the -.2 root of both sides, defining k = [.4(1 - .4x)/(.24x)]^-5, and solving: 1000 - .4xi = k(1 - .4x)i, so i* = 1000/[.4x + k(1 - .4x)].
G. Interesting implications: If the insurance contract is actuarially fair, x=1 and k=1, and i*=$1000; consumers will fully insure. If the contract is less than actuarially fair, optimal i*<$1000.
H. If the price of insurance is high enough, then even risk-averse agents want negative insurance.

VII. Efficiency Implications of Symmetric Imperfect Information
A. Many textbooks state that market outcomes are inefficient if there is "imperfect information." This is a gross over-statement. Market efficiency and imperfect information are often compatible.
B. This is particularly clear where there is symmetric imperfect information, where everyone is equally in the dark.
C. Suppose for example that I don't know how much I will enjoy my consumption bundle, so U(x,y)=x^a y^(1-a) + e, where e~N(0,s^2). My optimal decision is still to spend a*I on x and (1-a)*I on y.
D. Similarly, suppose I don't know my relative tastes for x and y, so U(x,y)=x^a y^(1-a), where a=.5 with p=.6, and a=.9 with p=.4. Then I simply maximize EU(x,y)=.6[x^.5 y^.5]+.4[x^.9 y^.1].
E. General point: Just because you are ignorant does not mean you are stupid.
If you are uncertain, you adopt more "general purpose" strategies that take account of all of the possible outcomes.
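The insurance problem in section VI can be checked numerically. The sketch below maximizes the expected utility EU(i) = .6(1000 - .4xi)^.8 + .4(i - .4xi)^.8 over the coverage level i by brute-force grid search (my own scaffolding, not from the notes), and confirms full insurance at the actuarially fair price x = 1:

```python
def expected_utility(i, x):
    """EU of buying i dollars of coverage at premium rate .4*x per dollar."""
    good = 1000 - 0.4 * x * i   # wealth if income is $1000 (probability .6)
    bad = i - 0.4 * x * i       # wealth if income is $0 (probability .4)
    if good < 0 or bad < 0:     # rule out negative wealth outside the model
        return float("-inf")
    return 0.6 * good ** 0.8 + 0.4 * bad ** 0.8

def optimal_coverage(x, step=0.1):
    """Grid-search the EU-maximizing coverage level i* on [0, 2000]."""
    grid = [k * step for k in range(int(2000 / step) + 1)]
    return max(grid, key=lambda i: expected_utility(i, x))

print(optimal_coverage(1.0))   # fair price: full insurance, i* ~ 1000
print(optimal_coverage(1.2))   # unfair price: partial insurance, i* < 1000
```

At x = 1 both wealth states equal $600 at the optimum, exactly the full-insurance result in item G; at x = 1.2 the optimizer buys substantially less coverage, matching item H's direction.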
*Baby filter* Our 11ish month old has had two super weird bedtimes ("happy happy party time! Hey, did you know I can make farting noises with my mouth? woohoo!") this week and I'm trying to figure out what might be going on, and more importantly what our options are for dealing with it. All the gory details inside. [more inside] posted by pennypiper on Feb 21, 2014 - 20 answers
MS Excel's regression tools provide 95% lower/upper confidence results but how does one properly interpret and then express those as a single ± (plus/minus) figure? [more inside] posted by kartguy on Jan 26, 2014 - 4 answers
I'm a 40 year old man, recently single and spending Christmas with my parents for the first time in two decades. They're excited, help us have a good one. [more inside] posted by Caskeum on Dec 19, 2013 - 17 answers
Stats filter. I am doing multivariate regression for the first time and I want to understand what I'm doing, having gone beyond my formal training. I have many possible ways I could formulate the regression (different variables to include) and I want to find a model that is both the best possible fit while using the fewest number of variables. How? [more inside] posted by PercussivePaul on Mar 18, 2012 - 12 answers
Statistics filter: If I assess the same paired variables for the same population at multiple points in time, can I integrate the relation into an overall correlation? [more inside] posted by lord_yo on Feb 1, 2012 - 3 answers
Excel statistics question (linear regression with weighted data points) [more inside] posted by dabug on Nov 30, 2011 - 9 answers
How do I calculate the confidence interval for the mean response of some general nonlinear fit? [more inside] posted by selenized on Nov 7, 2011 - 12 answers
How can I test if one predictor variable in a regression is significantly better than another predictor variable? i.e. If I regress X against A I get an r^2 of .99 and when I regress X against B I get .98.
I need a test to see whether this difference is statistically significant. [more inside] posted by vegetableagony on Sep 13, 2011 - 16 answers Parentfilter: How do you handle very specific potty training regression issue with a 2.5 year old girl? [more inside] posted by griffey on Aug 6, 2010 - 6 answers In an OLS regression the coefficient is b = cov(X,Y)/var(X). What is the plain english interpretation of this coefficient in terms of variations (ie. b is the variation in Y that can be explained by variation in X)? How does this relate to r^2? posted by bucksox on Apr 8, 2010 - 2 answers Stats filter: Is logistic regression what I need? If so, can I do this in regular SPSS (v17) or do I need the add-on? [more inside] posted by bingoes on Aug 3, 2009 - 9 answers Suggestions for dealing with an increase in potty accidents in an almost-four-year-old? [more inside] posted by leahwrenn on Jun 16, 2009 - 9 answers Can I get Google Spreadsheets to display a scatterplot of data with the regression line overlaid on the plot? How? [more inside] posted by RossWhite on Apr 20, 2009 - 1 answer Ok so I have this huge table of survey data - much of it numerical, much of it binary, some of it from selections from menus of text items (e.g. blue, green, orange etc). Where do I start to find the most noticeable relationships between variables? [more inside] posted by vizsla on May 7, 2008 - 13 answers Looking for free or low cost (under $50) multiple linear regression software which ideally works with Microsoft Excel (but not critical). Any recommendations? posted by vizsla on Apr 17, 2008 - 17 answers Statistics-filter: I need to establish to what extent student performance on a particular standardized test is predicted by each of the following: GPA, standardized test scores and a couple of other miscellaneous numerical factors. How do I go about this? 
[more inside] posted by perissodactyl on Mar 6, 2008 - 11 answers I've spent the past day or so trying to figure out how to calculate a hidden value in my data. It varies linearly with time, or closely enough that we can make that assumption within any given dataset. Where this goes beyond a simple linear regression is that each datapoint is known to be above or below the hidden value. The data is collected in a C# program and processed in Excel; an Excel function would be the ideal solution. [more inside] posted by bjrubble on Apr 3, 2006 - 14 answers I'm trying to run an ordinal regression in SPSS. Are my independent variables factors or covariates? [more inside] posted by duck on Jun 21, 2005 - 6 answers
Existence of Solutions of a Riccati Differential System from a General Cumulant Control Problem International Journal of Differential Equations Volume 2011 (2011), Article ID 319375, 13 pages Research Article Existence of Solutions of a Riccati Differential System from a General Cumulant Control Problem ^1Department of Electrical & Computer Engineering, Kettering University, Flint, MI 48504, USA ^2Department of Mathematics, Bradley University, Peoria, IL 61625, USA Received 31 May 2011; Accepted 15 November 2011 Academic Editor: A. M. El-Sayed Copyright © 2011 Stanley R. Liberty and Libin Mou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We study a system of infinitely many Riccati equations that arise from a cumulant control problem, which is a generalization of regulator problems, risk-sensitive controls, minimal cost variance controls, and -cumulant controls. We obtain estimates for the existence intervals of solutions of the system. In particular, new existence conditions are derived for solutions on the horizon of the cumulant control problem. 1. Introduction Consider a linear control system and a quadratic cost function: where , and are continuous matrix functions for , ( is the set of symmetric matrices.), is the state with known initial state , the control, and a standard Wiener process. Because is completely determined by the first equation in (1.1) in terms of , the cost function is only a function of . For , denote by the (general) expectation of the random variable . For , let be the th cumulant of the cost . Let be a sequence of nonnegative real numbers. Consider the following combination of : The cumulant control problem, considered in [1], is to find a control that minimizes the combined cumulant defined in (1.3). 
This problem leads to the following system of (infinitely many) equations of Riccati type: where denotes the derivative to , , are as in (1.1), and . and are the unknown matrix functions, which are required to be continuously differentiable for . For convenience, the time variable is often suppressed. System (1.4) will be combined with the following equation If is a given integer and for , then is the -cost cumulant investigated in [2, 3]. In particular, if , then and the cumulant problem is the classical regulator problem that minimizes . If , then the cumulant problem is the minimal cost variance control considered in [4, 5]. Interested readers are referred to [1–6] for the investigations, generalizations, and applications of cumulant controls. Another important cumulant control occurs when for . In this case, is precisely the cumulant generating function, and the cumulant problem is the risk-sensitive control; see, for example, [7]. In this case, (1.4) and (1.5) lead an equation for the matrix function: As shown in [1], the solution of (1.4) is related to by the equation: and the equations in (1.4) for can be obtained by differentiating (1.6) to at . For a feedback control with a given matrix function , it was shown in [8] and [1, Theorem2] that the th cumulant of has the following representation: where is the solution of (1.4). Consequently ( 1.3) can be written as In [1] the cumulant control problem was restated as minimizing in (1.9) with as a control, as a state, and (1.4) a state equation. Furthermore, the following result is proved in [1, Theorem3]. Theorem 1.1. If the control is the optimal feedback control of (1.9), then the solution of (1.4) and must satisfy (1.5). By Theorem 1.1, it is necessary to solve (1.4) and (1.5) in order to find a solution of a cumulant control problem. Because of the nonlinearity of (1.4) and (1.5) in , a global solution may not exist on the whole horizon of the cumulant problem. 
This can be illustrated by a scalar case of (1.6). Suppose , then (1.6) becomes and . The solution is tan, which is defined on with . So (1.6) has no solution unless . By the local existence theory of differential equations, the solutions and of (1.4) and (1.5) exist on a maximal subinterval . Our interest is to give an estimate for this interval. In particular, we will obtain conditions that guarantee . The idea of our approach is to show that the trace tr of satisfies a scalar differential inequality: with some functions on . A key step in the proof is Proposition 2.1 below. It follows that is bounded by the solution of the Riccati equation: where . Consequently, the existence interval of (1.12) gives an estimate for that of system (1.4) and (1.5); see Theorems 2.4 and 3.5 below. By a similar argument, we prove that the cumulant problem is well posed under appropriate conditions; see Theorems 2.3 and 3.4 below. In [9] the norm of a solution of a coupled matrix Riccati equation was shown to satisfy a differential inequality similar to (1.11). Consequently, specific sufficient conditions were derived for the existence of solutions of the Riccati equation in [9]. Estimates for the maximal existence interval of a classical Riccati equation had been obtained in [10] in terms of upper and lower solutions. For the coupled Riccati equation associated with the minimal cost variance control, some implicit sufficient conditions had been given in [11] for the existence of a solution. In this paper, we use the trace to bound the solution of system (1.4) and (1.5), which generally leads to a better estimate for the existence interval.

2. Comparison Results for Traces

We start with an assumption and some preparations. In this paper we assume that For the sequence in (1.3), we will assume that where Note that the assumption is not essential. The assumption that for and Proposition 2.1 below imply that the matrix in (1.4) is a positive semidefinite series.
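The scalar blow-up example from the introduction can be checked numerically. The concrete equation below, y' = 1 + y^2 with y(0) = 0, is an assumed reconstruction based only on the remark that the solution is the tangent function (tan t solves exactly this initial value problem and blows up as t approaches pi/2); it illustrates finite escape time and is not taken verbatim from the paper's stripped formulas.

```python
import math

def euler_riccati(t_end, dt=1e-5):
    """Forward-Euler integration of y' = 1 + y^2, y(0) = 0.

    The exact solution is y(t) = tan(t), which escapes to infinity as
    t -> pi/2, illustrating that a Riccati equation need not have a
    global solution on an arbitrarily long horizon.
    """
    y, t = 0.0, 0.0
    while t < t_end:
        y += dt * (1.0 + y * y)
        t += dt
    return y

# Away from the escape time, the numerical solution tracks tan(t) closely.
y_at_1 = euler_riccati(1.0)
# Near t = pi/2 ~ 1.5708 the solution grows without bound; tan(1.5) ~ 14.
y_at_1_5 = euler_riccati(1.5)
```

The step size must be small here because the right-hand side stiffens rapidly near the escape time; with a coarse step the numerical trajectory lags far behind the true blow-up.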
The requirement that imposes some growth condition for the sequence ; see the proofs of Theorems 2.3 and 2.4. Also note that if , then and for all . Theorem 3.4 below shows that the cumulant control problem is well posed for any sequence with a small . Some examples of are as follows: (i) ; (ii) ; (iii) . We need the following properties of . Proposition 2.1. Suppose is a solution of (1.4) on some interval with a given , then each for . Proof. The formula of the th cumulant in [8] implies that is nonnegative for all . It follows from the representation (1.9) that must be positive semidefinite. This argument continues to hold with replaced by any . Next we verify some properties related to the matrix trace that are needed for our analysis. For , denote by tr the trace of , and and the smallest and largest eigenvalues of , respectively. Proposition 2.2. (a) For all , , . (b) For all , , (c) If , then (d) If are all , then Proof. The properties in (a) are obvious by the definitions of trace and matrix multiplication. Some of the inequalities in (b)–(d) might be known, but the authors were not able to find proofs in the existing literature. So we include our proofs of (b)–(d) below for readers’ convenience. To prove (b), let be a unitary matrix such that where is diagonal with eigenvalues of . Then where is the entry of at . Let be the th row of , which is a unit vector. Then . Since we have This implies (2.4). To show (c), using the symmetry of , and , we get that tr. Inequality (2.6) follows from part (b) and the fact that since . To show (2.7), first note that trtr by (2.4); then it remains to show that . Choose a with such that . By the Schwarz inequality, Since , we have and . Therefore, . For (2.8), using the notation in the proof of (b), we first get Then (2.8) follows from the inequalities Now we estimate the existence intervals of solutions of (1.4) and (1.5). First, let be given and be the solution of system (1.4).
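Proposition 2.2 collects trace estimates of this kind. Since the proposition's symbols were lost in extraction, what follows is only a generic numerical sanity check, in the same spirit, of two standard facts for positive semidefinite matrices: 0 <= tr(AB) <= tr(A) tr(B). All helper functions are my own, written in pure Python.

```python
import random

def matmul(A, B):
    """Product of two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def random_psd(n, rng):
    """R^T R is symmetric positive semidefinite for any real R."""
    R = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return matmul(transpose(R), R)

rng = random.Random(42)
for _ in range(100):
    A, B = random_psd(4, rng), random_psd(4, rng)
    t_ab = trace(matmul(A, B))
    # For PSD matrices: tr(AB) >= 0 and tr(AB) <= tr(A) * tr(B).
    assert -1e-9 <= t_ab <= trace(A) * trace(B) + 1e-9
```

Estimates through the largest eigenvalue, as in the proposition, are sharper; the product of traces is merely the crudest bound that can be checked without an eigenvalue routine.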
We have the following result, which will be used in the proof of Theorem 3.5. Theorem 2.3. Suppose and in (1.4) is given. Let , , and be functions on satisfying (a) If is a solution of system (1.4), then satisfies the differential inequality (b) If the equation has a solution on , then system (1.4) has a solution on such that is convergent. Proof. (a) Denote . Multiplying the equation in (1.4) for by and summing over , we obtain Taking traces of both sides of (2.17) gives Note that and by Proposition 2.1, , for . Proposition 2.2(a), (b), and (c) imply that By (1.4) and definition (2.3) of , we have Substituting (2.19) and (2.20) into (2.18) and using the definition of in (2.14) we get (b) Suppose that (2.16) has a solution on . By local existence theory, system (1.4) has a solution on a maximal interval . By (a), satisfies inequality (2.15); that is, is a lower solution of (2.16). By a comparison theorem of lower-upper solutions, on . Since the series (1.10) is positive semidefinite, it follows that and are all bounded and (1.10) is convergent on . Since satisfies system (1.4), each is in fact continuously differentiable on . If , then the local existence theory implies that can be extended further to the left of , a contradiction to the maximality of . Therefore and (1.4) has a solution on , which can be extended to . Now consider system (1.4) and (1.5). We have Theorem 2.4. Denote . Let , and be functions on satisfying (a) Suppose and are solutions of (1.4) and (1.5) on some . Then on satisfies (b) Suppose the equation has a solution on , then system (1.4) and (1.5) have solutions and on . Proof. (a) Substituting into system (2.17) we get where . Taking traces of both sides of (2.25) gives As in the proof of the previous theorem, we have Using the fact that and (2.8), we have By combining (2.28), (2.27), and (2.26) we obtain (2.23). (b) Suppose that (2.24) has a solution on . By local existence theory, system (1.5) and (1.4) have solutions and on a maximal interval .
By part (a), tr satisfies inequality (2.23) on . It follows that tr on , which implies that and are continuously differentiable on . If , then local existence theory implies that and can be extended further to the left of , a contradiction to the maximality of . Therefore , and system (1.4) and (1.5) have solutions on .

3. Well-Posedness and Sufficient Existence Conditions

In this section we will derive specific conditions that ensure that the scalar equations (2.16) and (2.24) have solutions on . Consequently we will obtain sufficient conditions for the well-posedness of the cumulant control and the existence of solutions of (1.4) and (1.5). First we consider an autonomous scalar equation where is a polynomial with degree . Assume for some that has distinct zeros . Let and . Since is locally Lipschitz, the solution of (3.1) exists and is unique for every for in a maximal interval, say . If for some , then for all . If for some , then for . This implies that for , has the same sign as In particular, as decreases, is strictly increasing if and decreasing if . Denote . Then The following is a well-known fact in the stability theory of differential equations. where . Indeed, implies that for and for . In either case, by (3.2). Consider (1.12) as an example. We have the following. Proposition 3.1. Denote and if . Then (a) for all if and ; (b) if and ; (c) if or and . Proof. If , then , which has root . Since , by (3.3). This shows (a). Next we prove (b) and (c). First assume . If , then for all . So by (3.2). Next consider the case , in which . If , then , which implies that . If , then we have either when or when . In either case, we have by (3.2). This finishes the proof of (b) and (c) when . If , then consider , which satisfies and , and the conclusions follow from the special case just proved. Write (3.1) as and integrate it against from to , then we get Note that if is finite, then must be a zero of . It follows that and .
If is infinite, then must converge because has a degree . In summary, we have the following. Proposition 3.2. Suppose is a polynomial of degree . Then (a) is finite if and only if the solution of (3.1) exists on . (b) if and only if the solution of (3.1) exists on a finite maximal interval with length . Applying Proposition 3.2 to (1.12) we obtain the following. Proposition 3.3. (a) If either and , or and , then the solution of (1.12) exists on . (b) If either or and , then the solution of (1.12) exists on a finite interval with Proof. Part (a) directly follows from Proposition 3.1(a) and Proposition 3.2(a) and (b). For part (b), first assume that . Then and , where . So Next assume and . Then and , where . We have Finally, when and , we have and . Now we show that the cumulant control problem is well posed by Theorem 2.3 and Proposition 3.3. Theorem 3.4. For any number there is such that the series in (1.3) converges for each matrix and sequence with and . Proof. Suppose that is a matrix function with . Choose , , and as follows: Note that tr, which depends only on . In addition, as . It follows that when is sufficiently small, has two real roots with and as . In particular, since , we have and so . Proposition 3.3 implies that So as . In particular, (2.16) has a solution on when is sufficiently small. By Theorem 2.3(b), system (1.4) has a solution such that converges. Finally we apply Proposition 3.3 to (2.24) to give a sufficient existence condition for (1.4) and (1.5) and the cumulant control problem. Choose Theorem 3.5. System (1.4) and (1.5) have solutions and on if where is defined as in (3.2) with . In particular, system (1.4) and (1.5) have solutions and on if one of the following holds. (a) . (b) , , and , where . Proof. The general conclusion follows directly from Proposition 3.3 and Theorem 2.4. In case (a), has two roots . Since , . In case (b), has two solutions . So also holds. The conclusion follows from Propositions 3.1 and 3.2 and Theorem 2.4.
Note that in Theorem 3.5 condition (a) holds if has full rank (i.e., ) and is sufficiently small, while condition (b) holds if the system in (1.1) is stable (i.e., ) and the product tr is relatively small. The cumulant control problem has an optimal control under each of these conditions. As an existence theorem, Theorem 3.5 gives one of the very few existence results for a Riccati differential system of infinitely many equations. In terms of the cumulant controls that lead to system (1.4) and (1.5), Theorem 3.5 generalizes the corresponding results in [1, 5] for risk-sensitive controls (where ) and in [2, 4] for finite cumulant controls (where has only finitely many nonzero components). Numerical examples for risk-sensitive and finite cumulant controls satisfying the conditions in Theorem 3.5 may be found in [3–6].

4. Conclusions

In general it is very difficult to determine the existence interval of a differential Riccati equation (or system). By the approach in this paper, we can at least give an estimate for the existence interval of the Riccati system. Such an estimate leads to sufficient conditions for the existence of solutions to the Riccati system and the cumulant control problem.

Acknowledgments

The authors wish to express their sincere thanks to the reviewers for their valuable comments that helped improve this paper. The second author wishes to acknowledge the support of a Caterpillar Fellowship from Bradley University.

References

1. L. Mou, S. R. Liberty, K. D. Pham, and M. K. Sain, “Linear cumulant control and its relationship to risk-sensitive control,” in Proceedings of the 38th Allerton Conference on Communication, Control, and Computing, pp. 422–430, 2000.
2. K. D. Pham, S. R. Liberty, and M. K. Sain, “Linear optimal cost cumulant control: a k-cumulant problem class,” in Proceedings of the 36th Allerton Conference on Communication, Control, and Computing, pp. 460–469, 1998.
3. K. D. Pham, M. K. Sain, and S. R. Liberty, “Cost cumulant control: state-feedback, finite-horizon paradigm with application to seismic protection,” Journal of Optimization Theory and Applications, vol. 115, no. 3, pp. 685–710, 2002.
4. M. K. Sain, C. H. Won, and B. F. Spencer Jr., “Cumulant minimization and robust control,” in Stochastic Theory and Adaptive Control, T. E. Duncan and B. Pasik-Duncan, Eds., Lecture Notes in Control and Information Sciences, pp. 411–425, Springer, Berlin, Germany, 1992.
5. M. K. Sain, C. H. Won, B. F. Spencer Jr., and S. R. Liberty, “Cumulants and risk-sensitive control: a cost mean and variance theory with application to seismic protection of structures,” in Advances in Dynamic Games and Applications, J. A. Filar, V. Gaitsgory, and K. Mizukami, Eds., vol. 5 of Annals of the International Society of Dynamic Games, pp. 427–459, Birkhäuser, Boston, Mass, USA, 2000.
6. M. J. Zyskowski, M. K. Sain, and R. W. Diersing, “State-feedback, finite-horizon, cost density-shaping control for the linear quadratic Gaussian framework,” Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 251–274, 2011.
7. P. Whittle, Risk-Sensitive Optimal Control, John Wiley & Sons, New York, NY, USA, 1990.
8. S. R. Liberty and R. C. Hartwig, “On the essential quadratic nature of LQG control-performance measure cumulants,” Information and Control, vol. 32, no. 3, pp. 276–305, 1976.
9. G. P. Papavassilopoulos and J. B. Cruz, Jr., “On the existence of solutions to coupled matrix Riccati differential equations in linear quadratic Nash games,” IEEE Transactions on Automatic Control, vol. 24, no. 1, pp. 127–129, 1979.
10. S. R. Liberty and L. Mou, “Estimation of maximal existence intervals for solutions to a Riccati equation via an upper-lower solution method,” in Proceedings of the 39th Allerton Conference on Communication, Control, and Computing, pp. 281–282, 2001.
11. G. Freiling, S.-R. Lee, and G. Jank, “Coupled matrix Riccati equations in minimal cost variance control problems,” IEEE Transactions on Automatic Control, vol. 44, no. 3, pp. 556–560, 1999.
infinitesimal: in mathematics, a quantity less than any finite quantity yet not zero. Even though no such quantity can exist in the real number system, many early attempts to justify calculus were based on sometimes dubious reasoning about infinitesimals: derivatives were defined as ultimate ratios of infinitesimals, and integrals were calculated by summing rectangles of infinitesimal width. As a result, differential and integral calculus was originally referred to as the infinitesimal calculus. This terminology gradually disappeared as rigorous concepts of limit, continuity, and the real numbers were formulated.
How to Calculate Historical Variance & Return on a Stock

When picking investments, knowing the volatility of a stock can help you decide if it's right for your particular strategy. For example, if you're very risk-averse, stocks with low volatility might fit better in your portfolio. If you don't mind taking on extra risk in hopes of extra rewards, a higher volatility might be acceptable. The variance measures the differences between the annual returns of the stock: the higher the variance, the more volatile the stock. In order to calculate the variance, you first have to figure out the annual returns for each year, and then the overall average return.

Step 1: Subtract the price at the start of the year from the price at the end of the year to find the raw increase in stock price. For example, if the stock started at $26 and ended the year at $29, the stock increased by $3.

Step 2: Divide the increase or decrease by the price at the start of the year. In this example, divide the $3 increase by the $26 starting price to find that the stock increased by 0.1154, or about 11.54 percent.

Step 3: Repeat Steps 1 and 2 to calculate the return on the stock for each year. For example, if you wanted to know the variance over the past three years, you would calculate the returns for each of those years.

Step 4: Calculate the average return on the stock by adding the annual returns and dividing the result by the number of years. In this example, if the stock increased by 11.54 percent in the first year, increased by 5.46 percent in the second year, and lost 2 percent in the third year, add 11.54 plus 5.46 minus 2 to get 15 percent. Then, divide 15 percent by 3 to get an average return of 5 percent, or 0.05.

Step 5: Calculate the difference between the average return and each annual return. In this example, the difference between 0.1154 and 0.05 is 0.0654; the difference between 0.0546 and 0.05 is 0.0046; and the difference between minus 0.02 and 0.05 is minus 0.07.

Step 6: Square each of the differences. In this example, square 0.0654 to get 0.00427716, square 0.0046 to get 0.00002116 and square minus 0.07 to get 0.0049.

Step 7: Add each of the results. In this example, add 0.00427716 plus 0.00002116 plus 0.0049 to get 0.00919832.

Step 8: Divide the sum by the number of years minus 1. In this example, divide 0.00919832 by 2 to find the variance is 0.00459916, or about 0.46 percent.
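The whole procedure can be reproduced with Python's standard statistics module; the numbers below assume the worked example's three annual returns (11.54 percent, 5.46 percent and minus 2 percent):

```python
import statistics

def annual_return(start_price, end_price):
    """Raw price change divided by the starting price (Steps 1 and 2)."""
    return (end_price - start_price) / start_price

# Steps 1-2: a stock going from $26 to $29 returns about 11.54 percent.
r1 = annual_return(26, 29)

# Steps 4-8: sample variance of the three annual returns.
returns = [0.1154, 0.0546, -0.02]
avg = statistics.mean(returns)       # 0.05
# statistics.variance is the sample variance: it divides the sum of
# squared deviations by (n - 1), exactly as in Step 8.
var = statistics.variance(returns)
```

Note that `statistics.variance` gives the sample variance (division by n - 1, matching Step 8), while `statistics.pvariance` would divide by n.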
Question: Standardized Test Practice: Nine less than half n is equal to one plus the product of -1/8 and n. Find the value of n. A. 24. Show work please! Can somebody help because I have no idea how to do it.

Reply: I can not figure this out.

Reply: n - 9 = 1 + (-1/8)n ?? I know you are supposed to take the information and turn it into one equation so you can solve it, but I got a little confused at the end.

Reply: ok so from the information we have, 9 less than (1/2)n is the same as reducing (1/2)n by 9, which is the same as (1/2)n - 9. Now we're told that this is equal to 1 plus the product of -1/8 and n. The product of those means (-1/8) × n, which is the same as -n/8. So back to the question, n/2 - 9 is equal to 1 - n/8:

n/2 - 9 = 1 - n/8

Can you solve this equation on your own?
Math 6th grade: Measures

Measures: Lengths
- Choose the right measure (km, m, etc)
- How much is it (1)?

Measures: Weights
- How much is it (2)?

Measures: Perimeter and surface
- What is the perimeter (1)?
- What is the area (1)?
- What is the perimeter (2)?
- What is the area (2)?
- Convert the measures and calculate the volume in liters
- What is the area (3)?

Calculations with measures
- How many miles have been travelled?
- What is the difference in meters?
Our ability to estimate simple statistics: a neglected research area November 15, 2012 By junkcharts (This article was originally published at Numbers Rule Your World, and syndicated at StatsBlogs.) I recently came across a series of papers by Irwin Levin (link), about how well people estimate statistical averages from a given set of numbers. In contrast to the findings of Tversky and Kahneman, Gigerenzer, etc. on probability, it seems like we are able to guess average values pretty well, even in the presence of outliers. It must be said the sample size used in Levin's experiments was tiny (12 students in one case but working with something like 75 sets of numbers). That said, the experimental setup was remarkable. Take this paper as an example. The numbers were either shown in sequence or at the same time. Levin created three types of tasks: a descriptive task in which the goal was to get the average of the numbers presented, including the outliers; an inference task in which the goal was to guess the average of the population of numbers from which the sample was drawn, in which case we expect the subjects to discount the outliers; and a discounting task, in which subjects were presented with data including outliers, but were asked to ignore them. The reason for this post is that Levin's work was done in the 1970s (Levin himself retired this year according to his webpage). There doesn't appear to be much interest in this subject since then. It seems like researchers may find the estimation of summary statistics like means, medians, etc. not interesting enough. All the new research that I know of concerns judging probability distributions, margins of error, variability, etc. However, I'm more interested in point estimates, and I feel that the early research left the question still unsettled. I haven't found any research on how good we are at guessing the median of a set of numbers, or the mode, or trimmed means, or moving averages. 
If we see repeated numbers, are we likely to use the average, the median, the mode, or some other statistic to summarize that information? Given what we now know about irrationality and biases in judging probabilities, are we able to replicate Levin's finding? Or will we find that his result would not hold up under a better-designed experiment?

Please comment on the article here: Numbers Rule Your World
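As a toy illustration of the summary statistics mentioned in the post (my own example, not Levin's experimental material): with a single outlier in the data, the mean, the median, and a trimmed mean tell quite different stories, which is exactly why the descriptive versus discounting tasks are interesting.

```python
import statistics

data = [3, 5, 4, 6, 5, 4, 100]   # one large outlier

def trimmed_mean(values, k=1):
    """Mean after dropping the k smallest and k largest values."""
    s = sorted(values)
    return statistics.mean(s[k:len(s) - k])

mean = statistics.mean(data)       # pulled far upward by the outlier
median = statistics.median(data)   # robust: the middle value, 5
tmean = trimmed_mean(data)         # outlier discounted: 4.8
```

A subject doing the "descriptive" task should report something like `mean`; one doing the "discounting" task should land closer to `tmean` or `median`.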
Average employee number

August 5th 2010, 08:38 AM #1

This is easy I'm sure, but I can't work it out - have tried several ways, but here's the one that is easiest to explain: Let's say that my company starts the year with 0 employees. At the end it has 202. I recruited 346 new people during the year. I need to work out the percentage that leave so I can predict for next year. So, that means I lost 144 (346-202). The average employee number over the year is 101 ([202-0]/2). So leave percentage is 144/101 = 143% but that's ridiculous...

Reply: $\frac{144}{346}\cdot 100\% = 41.62\%$ so u have the percent of people that leave your company

Reply: thanks! So now... what about the next year? I start this year with some employees... How do they fit into the formula? eg. I start with 202 employees, recruit 549, end up with 306. What's the leave % then?

Reply: hmmmm don't really get what u need if u know that would be 306... just put again ... $\frac{306}{548}\cdot 100\%$

Reply: thanks again... sorry to not explain well - I'm basically trying to work out a formula that would work out Leave % - sometimes we might not recruit anyone, so the above formula of End/Recruits wouldn't work as Recruits=0. So for example, say we start with 1836, end with 1610, no recruits (so we lost 226). So leave % is clearly 226/1836 in this case (12%).
So the number at the start is important, but we haven't taken it into account in the previous bits - I need a formula that works in all cases; it looks like it needs Start, End and Recruits in it but not sure how... I'm kind of assuming it needs average employee number in it somewhere.

Reply: if u need the average number of people that worked in the company over a year (or any interval), with the number of people at the end B and the number at the start A, the average, let's say X, is

$X=\frac{A+B}{2}$

but if u need to know how much of ur people stay (in percentage), u need to know how many did u get to work (W) and how many did stay (S):

$Y=\frac{S}{W}\cdot 100\%$

where Y is the percentage of those that stay with the company. Or, to see how much did leave:

$Z(\%)=\frac{H-S}{H}\cdot 100\%$

H = how many did u have
S = how many did stay
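Putting the thread's formulas together as a quick sketch (the function and variable names are my own; it treats everyone on the payroll during the period, start + recruits, as the responder's H, and those still employed at the end as S):

```python
def leave_percentage(start, recruits, end):
    """Percentage of everyone employed during the period who left.

    H = start + recruits  (everyone on payroll at some point)
    S = end               (those still there at the end)
    Z = (H - S) / H * 100
    """
    on_payroll = start + recruits
    left = on_payroll - end
    return left / on_payroll * 100

# Year 1 from the thread: start 0, recruit 346, end 202 -> 144 leavers.
year1 = leave_percentage(0, 346, 202)    # ~41.62%

# Year 2: start 202, recruit 549, end 306 -> 445 leavers out of 751.
year2 = leave_percentage(202, 549, 306)  # ~59.25%
```

This handles the no-recruits case too: with start 1836, recruits 0, end 1610 it gives 226/1836, the 12 percent the poster worked out by hand.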
The Stock Market and Economic Growth in Nigeria: An Empirical Investigation

The link between the capital market and economic growth has been investigated extensively by researchers. Using Granger causality tests to ascertain the causal relationship between financial development and economic growth, Spears (1991) provides evidence of one-way causality from financial markets to economic growth. On the stock market-growth nexus, Atje and Jovanovic (1993) find significant correlation between economic growth and the value of stock market trading divided by GDP for 40 countries over the period 1980-88. Similarly, Levine and Zervos (1996) use data on 41 countries over the period 1976-1993 and, after controlling for other factors that may affect economic growth, conclude that stock market development remains positively and significantly correlated with long-run economic growth. Arestis and Demetriades (1997) show that although the development of the stock market is correlated with economic growth in Germany, stock market volatility has a negative effect on German output. On the other hand, Arestis and Demetriades (1997) reveal that in the United States, causality runs from output to the financial system and not vice versa. Levine and Zervos (1998) employ cross-country data involving 47 countries from 1976-1993 and find that stock market liquidity is positively and significantly correlated with current and future rates of economic growth, even after controlling for economic and political factors. They also discover that measures of both stock market liquidity and banking development significantly predict future rates of growth. Filer et al. (1999) also investigate the causality between financial development and economic growth. Using Granger causality tests, they find that lagged growth rates are, in general, significant predictors of current growth rates. This effect is quite strong for high- and middle-income countries and relatively weak in lower-income countries, suggesting that macroeconomic conditions are less stable for the less developed countries in their sample. Turning to the financial variables, as expected, they find a positive link between market capitalization (normalized for the level of GDP) and future economic growth. The pattern is striking with respect to turnover velocity, which is often argued to be a better indicator of the effect of stock markets on growth. Results suggest that a higher turnover velocity Granger-causes growth, but only for the high- and low-income country groups. Furthermore, the location of the effect differs between the high- and low-income countries. For high-income countries, the link between turnover velocity and growth is entirely within countries, while for lower-income countries the linkage is quite strong and is found between countries.
This effect is quite strong for high and middle-income countries and relatively weak in lower income countries, suggesting that macroeconomic conditions are less stable for the less developed countries in our sample. Turning to it financial variable as expected, they find a positive link between market capitalization (normalized for the level of GDP) and future economic growth. The pattern is striking with respect to turnover velocity which is often argued to be better indicator of the effect of stock markets on growth. Results suggest that a higher turnover velocity Granger causes growth but only for high and low income countries group. Furthermore, the location of the effect differs between the high and low-income countries. For high-income countries, the link between turnover velocity and growth is entirely within countries while for lower income countries; the linkage is quite strong and is found between Muslumov and Gursoy (2000) examine causality relationships between stock markets and economic growth based on the time series data compiled from 20 countries for the years 1981 through 1994. Using Sims approach where Granger causality relationship is expressed in two pairs of regression equations, it is revealed a two-way causation exits between stock market development and economic growth. Country analyses on the other hand, could not lead to precise conclusions but suggested a somewhat stronger link between stock market development and economic growth in developing countries. Baier et al. (2004) examine the connection between the creation of stock exchanges and economic growth with a new set of data on economic growth that spans a large time period than generally available. They hypothesize that a stock exchange can affect the growth rate of input by increasing the growth rate of either aggregate input per worker or total factor productivity per worker. 
Their empirical findings indicate that economic growth increases relative to the rest of the world after a stock exchange opens. Their evidence also suggests that increased growth of productivity is the primary way that a stock exchange increases the growth rate of output, rather than the growth rate of physical capital. It is also found that financial deepening is rapid before the creation of a stock exchange and slower subsequently. While the present study acknowledges the extensive debate on the subject, the inconclusive evidence gives room for further research. This study is therefore a contribution to the existing literature on the link between the stock market and economic growth, focusing on Nigeria. Mohtadi and Agarwal (2004) examine the link between stock market development and economic growth in developing countries. Using a panel data approach that covers 21 emerging markets over 21 years (1977-1997), they find that the turnover ratio is an important and statistically significant determinant of investments by firms, and that these investments, in turn, are a significant determinant of aggregate growth. Foreign direct investment is also found to have a strong positive influence on aggregate growth. The results also show that the initial level of GDP has a strong negative influence on the growth rate, which is also consistent with the empirical literature. The results also indicate that, despite lagged growth being strongly significant, fitted investment is significant for the first set of regressions. The results indicate that both the turnover ratio and market capitalization are important variables as determinants of economic growth. The Nigerian Capital Market (NCM) first came into existence in 1960 with the establishment of the Lagos Stock Exchange but became operational in 1961.
The mission statement of the Federal government for the establishment of the capital market is to promote the Nigerian capital market to respond to the socio-economic development needs of the nation. The functions of the Nigerian capital market include:

• Provide an additional channel for engaging and mobilizing domestic savings for productive investment
• Foster the growth of the domestic financial services sector and the various forms of institutional savings such as life insurance and pensions
• Facilitate the transfer of enterprises from the public sector to the private sector
• Provide access to finance for new and smaller companies and encourage institutional development by facilitating the setting up of Nigeria's domestic funds, foreign funds and venture capital funds
• Above all, stimulate industrial growth as well as sustainable economic growth and development of the Nigerian economy

However, very little has been achieved by the Nigerian capital market towards the actualization of these objectives. Although the NCM has witnessed phenomenal growth over the years, most especially with the re-emergence of civilian rule in the country and its attendant financial reforms, the growth of the real sector and other key sectors of the economy has not been substantial (Ologunde et al., 2006). Pertinent questions then include:

• Is there a feedback between stock market development and economic growth, such that economic growth drives stock market development and stock market development influences economic growth?
• What are the factors affecting the performance of the Nigerian stock market?
• How can we promote sustainable development in the Nigerian stock market such that it becomes growth-oriented?

Answering these questions forms the basis for this research.

Data: This study utilizes mainly secondary data sources to gather pertinent information for the successful completion of the research.
The secondary data sources include: annual reports of the Nigerian Stock Exchange (NSE) and the Securities and Exchange Commission (SEC); Statements of Accounts and Annual Reports of the Central Bank of Nigeria (various issues) and various publications of the Nigerian Bureau of Statistics (formerly the Federal Office of Statistics (FOS)). The data for the empirical analysis cover the period 1970-2004.

The model: Based on the review of related previous empirical analyses, and particularly following the specification of Filer et al. (1999), a simple bilateral causal model is specified:

Y[t] = a[0] + ∑φ[i] X[t-i] + ∑β[j] Y[t-j] + u[t]   (1)
X[t] = b[0] + ∑θ[i] X[t-i] + ∑γ[j] Y[t-j] + v[t]   (2)

Y[t] represents the level of economic growth at time t (i.e., the current level of economic growth), while X[t] denotes the vector of stock market development indicators at time t (the current level of stock market development). In Eq. 1, the current level of economic growth depends on past values of itself (i.e., Y[t-j]) as well as those of the stock market (i.e., X[t-i]). In the same vein, Eq. 2 postulates that the stock market depends on its past values as well as those of economic growth. We have unidirectional causality from X to Y if the estimated coefficients on the lagged X in Eq. 1 are statistically different from zero (i.e., ∑φ[i] ≠ 0) and the set of coefficients on the lagged Y in Eq. 2 is not statistically different from zero (i.e., ∑γ[j] = 0). Conversely, we have unidirectional causality from Y to X if the estimated coefficients on the lagged X in Eq. 1 are not statistically different from zero (i.e., ∑φ[i] = 0) and the set of coefficients on the lagged Y in Eq. 2 is statistically different from zero (i.e., ∑γ[j] ≠ 0). Bilateral causality exists when the sets of coefficients on X and Y are statistically significantly different from zero in both equations. There is no causality when the sets of coefficients on X and Y are not statistically significantly different from zero in either equation (Gujarati, 2005).
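The pairwise test implied by Eq. 1 and 2 amounts to comparing a restricted regression (own lags only) with an unrestricted one (own lags plus lags of the other variable) via an F statistic. The sketch below is illustrative only: plain NumPy OLS on hypothetical data, not the study's estimation code.

```python
import numpy as np

def granger_f(y, x, p=2):
    """F statistic for H0: lagged x adds no explanatory power for y
    (the null 'X does not Granger cause Y' in the Eq. 1 regression)."""
    T = len(y)
    rows = T - p
    Y = y[p:]
    lag = lambda v, i: v[p - i:T - i]              # v_{t-i} for t = p..T-1
    lags_y = np.column_stack([lag(y, i) for i in range(1, p + 1)])
    lags_x = np.column_stack([lag(x, i) for i in range(1, p + 1)])
    const = np.ones((rows, 1))
    X_unres = np.hstack([const, lags_y, lags_x])   # own lags + lags of x
    X_res = np.hstack([const, lags_y])             # own lags only
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0]) ** 2)
    rss_u, rss_r = rss(X_unres), rss(X_res)
    df = rows - X_unres.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)
```

In practice the statistic is compared against the tabulated F critical value for the chosen lag length and sample size.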
In line with Levine and Zervos (1995) and Demirguc-Kunt and Levine (1996), individual indicators of stock market size and liquidity are used in this study, based on the postulation that stock market size and liquidity affect economic growth. The size of the stock market is expected to be positively correlated with the ability to mobilize capital and diversify risk. The size of the stock market is therefore measured using the ratio of market capitalization to GDP, where market capitalization equals the total value of all listed shares. Two measures of the liquidity of the stock market are used, namely the total value traded ratio and the turnover ratio. The former (the ratio of total value traded on the Nigerian stock exchange to GDP) measures the value of equity transactions relative to the size of the economy, while the latter (the value of total shares traded on the Nigerian stock exchange divided by market capitalization) measures the value of equity transactions relative to the size of the equity market. The liquidity indicators measure the degree of trading, compared with the size of the economy and the market, rather than the ease with which agents can buy and sell securities at posted prices. The Stock Market Development Index (SMD1) is obtained by averaging the individual indicators of size and liquidity (i.e., the market capitalization ratio, the total value traded ratio and the turnover ratio) over the relevant period. For economic growth, the value of real GDP growth over the relevant period is employed. The individual indicators of stock market development are therefore represented by MCR (Market Capitalization Ratio), TVTR (Total Value Traded Ratio) and TR (Turnover Ratio). The null hypothesis is that X does not Granger cause Y in the first regression and that Y does not Granger cause X in the second regression.
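The three indicators and the SMD1 average are simple ratios, so their construction can be shown in a few lines. The figures below are hypothetical, purely for illustration, not Nigerian data.

```python
def stock_market_indicators(market_cap, value_traded, gdp):
    """Compute MCR, TVTR, TR and the SMD1 average as defined in the text.
    All inputs are in the same currency units for the same period."""
    mcr = market_cap / gdp        # market capitalization ratio (size)
    tvtr = value_traded / gdp     # total value traded ratio (liquidity vs economy)
    tr = value_traded / market_cap  # turnover ratio (liquidity vs market)
    smd1 = (mcr + tvtr + tr) / 3.0  # simple average of the three indicators
    return mcr, tvtr, tr, smd1

# Hypothetical example: market cap 200, value traded 50, GDP 1000
mcr, tvtr, tr, smd1 = stock_market_indicators(200.0, 50.0, 1000.0)
```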
Estimation technique: This study employs the Granger causality test to ascertain whether a unidirectional or feedback causality exists between indicators of stock market development and economic growth. As a precondition for the Granger causality test, a unit root test is carried out to ascertain the stationarity of the series considered in the model. In addition to the Granger causality test, we also quantify the impact of the stock market on economic growth in Nigeria using the least squares technique, based on the results obtained from the unit root test. The researchers present and interpret results obtained from the estimation process: the unit root test, the Granger causality test and the ordinary least squares analysis.

Unit root test: As a preliminary step to testing for Granger causality, we compute Augmented Dickey-Fuller (ADF) unit root test statistics for the series used. The results are shown in Table 1 and indicate that all the series are stationary at levels; the null hypothesis of non-stationarity is therefore rejected. Given the finding that the series considered appear to be stationary at a conventional level of statistical significance, we are justified in testing for Granger causality between stock market development and economic growth.

Granger causality test: As earlier stated, the preoccupation of this research is to examine the causal linkage between stock market development and economic growth using Nigerian data, hence the consideration of the Granger Causality (GC) test as the most suitable for the analysis. The indicators of stock market development used for the GC test include the market capitalization ratio (X1), the total value traded ratio (X2) and the turnover ratio (X3), while the growth rate of gross domestic product (Y) was used as a proxy for economic growth. The results of the GC test are shown in Table 2.
The result obtained from the Granger causality test carried out on market capitalization and economic growth, as shown in Table 2, suggests that the direction of causality is from market capitalization to economic growth, since the estimated F value is statistically significant at the 5% level. On the other hand, there is no reverse causation from economic growth to market capitalization, since the F value is statistically insignificant. The result obtained from the Granger causality test carried out on the total value traded ratio and economic growth, as shown in Table 3, suggests that there is no causal linkage between the total value traded ratio and economic growth, since the estimated F values are statistically insignificant. The result obtained from the Granger causality test carried out on the turnover ratio and economic growth, as shown in Table 4, suggests that there is bidirectional causality between the turnover ratio and economic growth, since the estimated F values are statistically significant at the 5% level.

Table 1: Results of unit root test at level. *Statistically significant at the 0.05 level. Y: Economic growth; X1: Market capitalization ratio; X2: Total value traded ratio; X3: Turnover ratio

Table 2: Market capitalization and economic growth. *Statistically significant at the 0.05 level; Critical value (5%) = 3.32

Long run regression results: The long run regression results are obtained using the OLS technique, since all the series are stationary at level and, in fact, at the same order of integration. The results of the OLS estimation reveal that the model explains approximately 66% of the total adjusted variation in the level of economic growth in Nigeria between 1970 and 2004. This implies that the independent variables included in the model, namely the market capitalization ratio, the total value traded ratio and the turnover ratio, account for 66% of the total adjusted variation in the level of economic growth in Nigeria (Table 5).
Also suggestive from this result is the fact that stock market development accounts for the said percentage of the variation in the level of growth in the country. Corroborating the result of the coefficient of determination is the F value, which also suggests that the model has a high goodness of fit. In terms of the magnitude and statistical significance of each of the explanatory variables, the market capitalization ratio gave the most satisfactory result. It has a statistically significant positive effect (t* = 3.40) on economic growth, significant at the 99% level, and contributes approximately 2.5% to the level of economic growth in Nigeria. The turnover ratio also has a statistically significant positive effect (t* = 2.90) on economic growth and contributes about 1.3% to growth. The total value traded ratio, although depicting a positive relationship, is not statistically significant at any of the conventional levels of significance (i.e., 1, 5 and 10%) (Table 6). The estimation of the log function of the model did not give a satisfactory result: apart from evidence of a serial correlation problem, the economic criterion of a positive relationship between the stock market and economic growth was not fully satisfied. To further ascertain whether the deregulation policy has favoured the capital market, particularly in terms of its contribution to growth, we divided the study period into two sub-periods (1970-1985 and 1986-2004) and carried out OLS estimation on each sub-period.
Table 3: Total value traded ratio and economic growth. *Statistically significant at the 0.05 level; Critical value (5%) = 3.32

Table 4: Turnover ratio and economic growth. *Statistically significant at the 0.05 level; Critical value (5%) = 3.32

Table 5: GDP (dependent variable). **Significant at the 99% level

Table 6: LogGDP (dependent variable). *Significant at the 95% level; **Significant at the 99% level

The results obtained are shown in Table 7, the estimation result in respect of the pre-deregulation period, 1970-1985. All the explanatory variables have the expected positive sign. However, in terms of statistical significance, only the market capitalization ratio was found to be statistically significant, at the 95% level (t* = 2.311416). In terms of the degree of impact, a 1% change in the market capitalization ratio, the total value traded ratio and the turnover ratio increases the growth level by approximately 10, 3 and 0.2%, respectively. Other statistics such as the adjusted R^2, the F-statistic and the Durbin-Watson statistic further confirm the statistical reliability of the estimation. Even when the post-deregulation period is considered, as shown in Table 8, there seems to be no significant difference in the results compared with what we obtained for the pre-deregulation period. Although all the explanatory variables have the expected positive sign, only the market capitalization ratio was found to be statistically significant (t* = 3.3627). Overall, the result of the GC test is an indication that stock market development drives economic growth. The empirical results generated from our estimation are consistent with some previous empirical studies, among which are Mohtadi and Agarwal (2004), Filer et al. (1999) and Baier et al. (2004). For example, Mohtadi and Agarwal (2004) examine the link between stock market development and economic growth in developing countries using a panel data approach that covers 21 emerging markets over 21 years (1977-1997).
In their two-stage model, both the market capitalization ratio and the turnover ratio have a statistically significant impact on growth, while the value of shares traded is negative and only marginally significant.

Table 7: GDP, pre-SAP 1970-1985 (dependent variable). *Significant at the 95% level

Table 8: Post-SAP 1986-2004 (dependent variable: GDP). **Significant at the 99% level

The consistency in the empirical evidence between stock market development and economic growth points to the importance of the capital market in lubricating the productive sector for improved growth and development. This study has examined critically and empirically the causal linkage between stock market development and economic growth in Nigeria between 1970 and 2004. The empirical evidence obtained from the estimation process suggests that stock market development causes growth. The finding is consistent with previous studies, among which are Mohtadi and Agarwal (2004), Filer et al. (1999), Baier et al. (2004) and Oke (2005). The Nigerian capital market has witnessed tremendous changes over the years, particularly in terms of the number of operators and public involvement. Dealers/stockbrokers, registered companies and stocks and shares have increased tremendously. These have posed various challenges, among which are:

• Low level of public awareness of the market
• Unhealthy and sharp practices by the operators
• Policy inconsistencies

The capital market in Nigeria is especially important now that the country is just recovering from its deteriorating state occasioned by recurring military intervention in government.
The government, through the regulatory agencies and the capital market actors/operators, should pursue adequate sensitization of the public on the enormous potential derivable from investing in the capital market. Also, effective supervisory and regulatory measures should be put in place to prevent and discourage unhealthy, unethical and sharp practices by actors/operators in the capital market. This implies that all aspects of the financial services sector must begin to reflect transparency and honesty in all their dealings.
Electronic calculator

Calculating Machines --- Abacus --- Napier's bones --- Slide Rule --- Logarithms --- Calculator

Nowadays, nearly anyone who wants a machine to do a calculation uses a calculator. These were originally called electronic calculators, to distinguish them from mechanical calculators, or pocket calculators, to distinguish them from computers!

How to use a calculator

Do you make mistakes while using a calculator? Try the following:

• Check each number after you've entered it.
• If you get confused in the middle of a calculation, use a piece of paper to hide the part of the calculation that you haven't got to yet. If you get confused in the middle of a long number, use paper, or a finger, or a pen, to hide, or point to, the digits of the number.
• Check each result by estimating it. Take the first digit of each number and add enough zeroes to make it roughly the right size. Then it is easy enough to estimate the result. (456 x 64) is roughly (400 x 60) or 24000. (3671 + 9642) is roughly (3000 + 9000) or 12000. For a more precise answer, try rounding up and down alternately. (456 x 64) is roughly (500 x 60) or 30000 (the correct answer is 29184). (3671 + 9642) is roughly (4000 + 9000) or 13000 (the correct answer is 13313). If you get something like -0.0345, then you know you've got it wrong!
• Even if you do not feel like estimating, you can judge if the result looks reasonable. If you add, multiply or divide positive numbers, you don't expect a negative result. If you multiply large numbers, you get an enormous number. If you add two positive numbers, you get a number bigger than the biggest of them, but less than twice as big. If you look at the result and think "That can't be right!", then you have probably made a mistake!
• Try doing the calculation more than once. If you do the same calculation twice the same way, you might make the same mistake, so try doing it a different way the second time.
If you are adding up a string of numbers, first add them from top to bottom, write down the result, then add the numbers from bottom to top, and see if you get the same result! Some calculations are harder to reverse: (3 - 2) is not the same as (2 - 3), but it is the same as (-2 + 3).
• When doing more than one calculation, use the 'C' key before starting the new calculation, to make sure that no number gets left over from the previous one.

Arithmetic calculator

You would think that all calculators would work in the same way for simple calculations. In fact, they have their own peculiarities. If you calculate 1 + 2 x 3 on a simple calculator, it will produce the answer 9: it adds 2 to 1, then multiplies the whole thing by 3. But this isn't the way that mathematicians do calculations. They work out all multiplications and divisions first, then do the additions and subtractions after. So 1 + 2 x 3 gives the same answer as 2 x 3 + 1, which is 7. You will find that the calculator below does this. It has other helpful features. You can add or subtract a percentage (e.g. 150 + 17.5%). You can reverse the sign on the result or take the reciprocal ('one over ...'), which is often useful when doing calculations. You can display the result as a whole number, or to 2 decimal places (both rounded), and switch between these if you want. Finally, the whole entered calculation is displayed at the bottom, so you can check if you entered something wrong. You can even change this calculation, and click on 'Redo calculation' to re-calculate it. In the bottom display, the reciprocal is shown as 'n' and the divide by '/', as the other symbols are not in the normal character set.

Calculators on computers

Most computers have a calculator as part of their operating system. To find the PC calculator, click on Start, then Programs, then Accessories, then Calculator. Click on View to choose Standard or Scientific.
Try 1 + 2 x 3 on each! Calculators have a special notation if the numbers get too large or too small: they display significant figures and powers of ten. The calculators on this page don't, as they are designed for teaching people how to use a calculator, or for simple use. Use your computer's calculator or your own for more complicated arithmetic.
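Two of the checks described above, estimating by keeping only the first digit, and the precedence difference between a simple calculator and the mathematician's rules, can be sketched in a few lines of Python (illustrative code, not part of the page):

```python
from math import floor, log10

def first_digit_estimate(n):
    """Keep the first digit and pad with zeroes, as the checking tip suggests."""
    p = 10 ** floor(log10(abs(n)))  # place value of the leading digit
    return (n // p) * p

def left_to_right(tokens):
    """Evaluate the way a simple four-function calculator does:
    no operator precedence, each operator applies to the running total."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    total = tokens[0]
    for op, val in zip(tokens[1::2], tokens[2::2]):
        total = ops[op](total, val)
    return total

print(first_digit_estimate(456) * first_digit_estimate(64))  # 400 x 60 = 24000
print(left_to_right([1, '+', 2, '*', 3]))  # simple calculator: 9
print(1 + 2 * 3)                           # precedence rules: 7
```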
joule (J)

The derived unit of energy in the SI system of units, named after the English physicist James Joule. One joule is the energy used (or the work done) by a force of one newton in moving its point of application one meter in the direction of the force; it is also the energy dissipated by 1 watt in one second, and equals 10^7 erg in CGS units. In older units, 1 Btu = 1,055 joules and 1 joule per second = 0.737 ft-lb/s. One joule is also the work done when a current of one ampere is passed through a resistance of one ohm for one second.
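The equivalences in the entry can be checked numerically; a small sketch, using the constants exactly as the entry states them:

```python
# Relationships stated in the entry (values as given in the text)
ERGS_PER_JOULE = 1e7
JOULES_PER_BTU = 1055.0
FT_LB_PER_S_PER_WATT = 0.737

def ohmic_energy_joules(current_amps, resistance_ohms, seconds):
    # Electrical work dissipated in a resistor: E = I^2 * R * t,
    # so one ampere through one ohm for one second gives one joule.
    return current_amps ** 2 * resistance_ohms * seconds

print(ohmic_energy_joules(1.0, 1.0, 1.0))   # 1.0 joule
print(2.0 * JOULES_PER_BTU)                 # joules in two Btu
```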
Bisimulation from Open Maps

Results 1 - 10 of 80

, 2000
Cited by 298 (31 self)
In the semantics of programming, finite data types such as finite lists have traditionally been modelled by initial algebras. Later, final coalgebras were used in order to deal with infinite data types. Coalgebras, which are the dual of algebras, turned out to be suited, moreover, as models for certain types of automata and, more generally, for (transition and dynamical) systems. An important property of initial algebras is that they satisfy the familiar principle of induction. Such a principle was missing for coalgebras until the work of Aczel (Non-Well-Founded Sets, CSLI Lecture Notes, Vol. 14, Center for the Study of Language and Information, Stanford, 1988) on a theory of non-wellfounded sets, in which he introduced a proof principle nowadays called coinduction. It was formulated in terms of bisimulation, a notion originally stemming from the world of concurrent programming languages. Using the notion of coalgebra homomorphism, the definition of bisimulation on coalgebras can be shown to be formally dual to that of congruence on algebras. Thus, the three basic notions of universal algebra (algebra, homomorphism of algebras, and congruence) turn out to correspond to coalgebra, homomorphism of coalgebras, and bisimulation, respectively...

Cited by 263 (33 self)
The theme of this paper is profunctors, and their centrality and ubiquity in understanding concurrent computation. Profunctors (a.k.a. distributors, or bimodules) are a generalisation of relations to categories. Here they are first presented and motivated via spans of event structures, and the semantics of nondeterministic dataflow. Profunctors are shown to play a key role in relating models for concurrency and to support an interpretation as higher-order processes (where input and output may be processes). Two recent directions of research are described. One is concerned with a language and computational interpretation for profunctors. This addresses the duality between input and output in profunctors. The other is to investigate general spans of event structures (the spans can be viewed as special profunctors) to give causal semantics to higher-order processes. For this it is useful to generalise event structures to allow events which "persist."

- In Proc. of CONCUR 2000, 2000. LNCS 1877, 2000
Cited by 116 (14 self)
The dynamics of reactive systems, e.g. CCS, has often been defined using a labelled transition system (LTS). More recently it has become natural in defining dynamics to use reaction rules, i.e. unlabelled transition rules, together with a structural congruence. But LTSs lead more naturally to behavioural equivalences. So one would like to derive from reaction rules a suitable LTS. This paper shows how to derive an LTS for a wide range of reactive systems. A label for an agent a is defined to be any context F which intuitively is just large enough so that the agent Fa ("a in context F") is able to perform a reaction. The key contribution of this paper is a precise definition of "just large enough", in terms of the categorical notion of relative pushout (RPO), which ensures that bisimilarity is a congruence when sufficient RPOs exist. Two examples, a simplified form of action calculi and term-rewriting, are given, for which it is shown that sufficient RPOs indeed exist...

, 1998
Cited by 75 (15 self)
The notion of bisimulation as proposed by Larsen and Skou for discrete probabilistic transition systems is shown to coincide with a coalgebraic definition in the sense of Aczel and Mendler in terms of a set functor. This coalgebraic formulation makes it possible to generalize the concepts to a continuous setting involving Borel probability measures. Under reasonable conditions, generalized probabilistic bisimilarity can be characterized categorically. Application of the final coalgebra paradigm then yields an internally fully abstract semantical domain with respect to probabilistic bisimulation. Keywords: bisimulation, probabilistic transition system, coalgebra, ultrametric space, Borel measure, final coalgebra. For discrete probabilistic transition systems the notion of probabilistic bisimilarity of Larsen and Skou [LS91] is regarded as the basic process equivalence. The definition was given for reactive systems...

, 1999
Cited by 45 (19 self)
In this dissertation we investigate presheaf models for concurrent computation. Our aim is to provide a systematic treatment of bisimulation for a wide range of concurrent process calculi. Bisimilarity is defined abstractly in terms of open maps as in the work of Joyal, Nielsen and Winskel. Their work inspired this thesis by suggesting that presheaf categories could provide abstract models for concurrency with a built-in notion of bisimulation...

- Acta Informatica, 1998
Cited by 36 (1 self)
This paper combines and extends the material of [GG-a/c/d/e], except for the part in [GG-c] on refinement of transitions in Petri nets and the discussion of TCSP-like parallel composition in [GG-e]. An informal presentation of some basic ingredients of this paper appeared as [GG-b]. Among others, the treatment of action refinement in stable and non-stable event structures is new. The research reported here was supported by Esprit project 432 (METEOR), Esprit Basic Research Action 3148 (DEMON), Sonderforschungsbereich 342 of the TU München, ONR grant N00014-92-J-1974 and the Human Capital and Mobility Cooperation Network EXPRESS (Expressiveness of Languages for Concurrency).

, 1996
Cited by 33 (18 self)
This paper investigates presheaf models for process calculi with value passing. Denotational semantics in presheaf models are shown to correspond to operational semantics in that bisimulation obtained from open maps is proved to coincide with bisimulation as defined traditionally from the operational semantics. Both "early" and "late" semantics are considered, though the more interesting "late" semantics is emphasised. A presheaf model and denotational semantics is proposed for a language allowing process passing, though there remains the problem of relating the notion of bisimulation obtained from open maps to a more traditional definition from the operational semantics.

, 2001
Cited by 29 (8 self)
In this paper we present history-dependent automata (HD-automata in brief). They are an extension of ordinary automata that overcomes their limitations in dealing with history-dependent formalisms. In a history-dependent formalism the actions that a system can perform carry information generated in the past history of the system. The most interesting example is the π-calculus: channel names can be created by some actions and they can then be referenced by successive actions. Other examples are CCS with localities and the history-preserving semantics of Petri nets...

- In CONCUR'98, volume 1466 of LNCS, 1998
Cited by 28 (13 self)
We recast dataflow in a modern categorical light using profunctors as a generalisation of relations. The well known causal anomalies associated with relational semantics of indeterminate dataflow are avoided, but still we preserve much of the intuitions of a relational model. The development fits with the view of categories of models for concurrency and the general treatment of bisimulation they provide. In particular it fits with the recent categorical formulation of feedback using traced monoidal categories. The payoffs are: (1) explicit relations to existing models and semantics, especially the usual axioms of monotone IO automata are read off from the definition of profunctors, (2) a new definition of bisimulation for dataflow, the proof of the congruence of which benefits from the preservation properties associated with open maps and (3) a treatment of higher-order dataflow as a biproduct, essentially by following the geometry of interaction programme.
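The notion of strong bisimilarity that recurs throughout these abstracts can be made concrete on a finite labelled transition system with a naive partition-refinement sketch (illustrative code, not taken from any of the cited papers):

```python
def bisimulation_classes(states, trans):
    """Partition-refinement sketch of strong bisimilarity on a finite
    labelled transition system. `trans` is a collection of
    (source, label, target) triples. Returns a dict mapping each
    state to its equivalence-class index."""
    part = {s: 0 for s in states}  # start with everything in one block
    while True:
        # a state's signature: which blocks it can reach, under which labels
        sig = {s: frozenset((a, part[t]) for (p, a, t) in trans if p == s)
               for s in states}
        # regroup: states sharing a signature share a block in the refinement
        block_ids = {}
        new_part = {}
        for s in states:
            if sig[s] not in block_ids:
                block_ids[sig[s]] = len(block_ids)
            new_part[s] = block_ids[sig[s]]
        if new_part == part:  # fixed point reached: partition is stable
            return part
        part = new_part
```

Two states are bisimilar exactly when they end up in the same class; each round can only split blocks, never merge them, so the loop terminates.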
Initial impressions of RangeLab
December 30, 2011 By Derek-Jones

I was rummaging around in the source of R looking for trouble, as one does, when I came across what I believed to be a less than optimally accurate floating-point algorithm (function R_pos_di in src/main/arithmetic.c). Analyzing the accuracy of floating-point code is notoriously difficult, and those having the required skills tend to concentrate their efforts on what are considered to be important questions. I recently discovered RangeLab, a tool that seemed to be offering painless floating-point code accuracy analysis; here was an opportunity to try it out.

Installation went as smoothly as newly released personal tools usually do (i.e., some minor manual editing of Makefiles and a couple of tweaks to the source to comment out function calls causing link errors {mpfr_random and mpfr_random2}).

RangeLab works by analyzing the flow of values through a program to produce the set of output values and the error bounds on those values. Input values can be specified as a range, e.g., f = [1.0, 10.0] says f can contain any value between 1.0 and 10.0.

My first RangeLab program mimicked the behavior of the existing code in R_pos_di:

    f = [1.0, 10.0];
    res = 1.0;
    if n < 0,
       n = -n;
       f = 1 / f;
    end
    while n ~= 0,
       if (n / 2)*2 ~= n,
          res = res * f;
       end
       n = n / 2;
       if n ~= 0,
          f = f*f;
       end
    end

and told me that the possible range of values of res was:

    ans =
    float64: [1.000000000000001E-10,1.000000000000000E0]
    error: [-2.109423746787798E-15,2.109423746787799E-15]

Changing the code to perform the divide last, rather than first, when the exponent is negative:

    f = [1.0, 10.0];
    res = 1.0;
    is_neg = 0;
    if n < 0,
       n = -n;
       is_neg = 1;
    end
    while n ~= 0,
       if (n / 2)*2 ~= n,
          res = res * f;
       end
       n = n / 2;
       if n ~= 0,
          f = f*f;
       end
    end
    if is_neg == 1,
       res = 1 / res;
    end

and the error in res is now:

    ans =
    float64: [1.000000000000000E-10,1.000000000000000E0]
    error: [-1.110223024625156E-16,1.110223024625157E-16]

Yea! My hunch was correct: moving the divide from first to last reduces the error in the result. I have reported this code as a bug in R and wait to see what the R team think.

Was the analysis really that painless? The RangeLab language is somewhat quirky for no obvious reason (e.g., why use ~= when everybody uses != these days, and if conditionals must be followed by a comma, why not use the colon like Python does?). It would be really useful to be able to cut and paste C/C++/etc. and only have to make minor changes.

I get the impression that all the effort went into getting the analysis correct and that the user interface was a very distant second. This is the right approach to take on a research project. But for some software to make the leap from interesting research idea to useful tool, it is important to pay some attention to the user interface. The current release does not deserve to be called 1.0, and unless you have an urgent need I would suggest waiting until the usability has been improved (e.g., until error messages give some hint about what is wrong and a rough indication of which line the problem occurs on).

RangeLab has shown that there is a simpler method of performing useful floating-point error analysis. With some usability improvements, RangeLab would be an essential tool for any developer writing code involving floating-point types.

(Originally posted on The Shape of Code.)
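The ordering effect RangeLab flagged can be reproduced outside the tool. Below is a sketch in Python (the function names and the error-measuring harness are mine, not from R or RangeLab) comparing the divide-first ordering used by R_pos_di against the divide-last variant, using exact rational arithmetic as the reference value.

```python
from fractions import Fraction

def pow_div_first(x, n):
    # Mirrors R_pos_di's ordering: for a negative exponent the base is
    # inverted first, so the rounding error of 1/x is amplified through
    # the whole square-and-multiply chain.
    res = 1.0
    if n < 0:
        n = -n
        x = 1.0 / x
    while n != 0:
        if n % 2 == 1:
            res *= x
        n //= 2
        if n != 0:
            x *= x
    return res

def pow_div_last(x, n):
    # The ordering suggested in the post: compute x**|n| first, then
    # take the reciprocal once at the end (a single extra rounding).
    neg = n < 0
    n = abs(n)
    res = 1.0
    while n != 0:
        if n % 2 == 1:
            res *= x
        n //= 2
        if n != 0:
            x *= x
    return 1.0 / res if neg else res

def rel_err(approx, x, n):
    # Exact reference via rational arithmetic; floats convert exactly.
    exact = Fraction(x) ** n
    return abs((Fraction(approx) - exact) / exact)

# Worst-case relative error over a sweep of exactly representable bases.
bases = [1.0 + i / 64.0 for i in range(1, 512)]
worst_first = max(rel_err(pow_div_first(b, -10), b, -10) for b in bases)
worst_last = max(rel_err(pow_div_last(b, -10), b, -10) for b in bases)
```

On a typical IEEE-754 double build, worst_last comes out no larger than worst_first, in the same direction as the RangeLab bounds quoted above (roughly 2.1E-15 versus 1.1E-16 for the range [1.0, 10.0]).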
Results 1 - 10 of 186

(1996; cited by 264, 14 self)
We first exhibit in the commutative case the simple algebraic relations between the algebra of functions on a manifold and its infinitesimal length element ds. Its unitary representations correspond to Riemannian metrics and Spin structure, while ds is the Dirac propagator, ds = D^{-1}, where D is the Dirac operator. We extend these simple relations to the noncommutative case using Tomita's involution J. We then write a spectral action, the trace of a function of the length element in Planck units, which when applied to the noncommutative geometry of the Standard Model will be shown (in a joint work with Ali Chamseddine) to give the SM Lagrangian coupled to gravity. The internal fluctuations of the noncommutative geometry are trivial in the commutative case but yield the full bosonic sector of SM with all correct quantum numbers in the slightly noncommutative case. The group of local gauge transformations appears spontaneously as a normal subgroup of the diffeomorphism group.

(Comm. Math. Phys.; cited by 127, 19 self)
We give new examples of noncommutative manifolds that are less standard than the NC-torus or Moyal deformations of R^n. They arise naturally from basic considerations of noncommutative differential topology and have non-trivial global features. The new examples include the instanton algebra and the NC-4-spheres S^4_θ. We construct the noncommutative algebras A = C^∞(S^4_θ) of functions on NC-spheres as solutions to the vanishing, ch_j(e) = 0, j < 2, of the Chern character in the cyclic homology of A of an idempotent e ∈ M_4(A), e^2 = e, e = e*. We describe the universal noncommutative space obtained from this equation as a noncommutative Grassmannian, as well as the corresponding notion of admissible morphisms. This space Gr contains the suspension of a NC-3-sphere intimately related to quantum group deformations SU_q(2) of SU(2), but for unusual values (complex values of modulus one) of the parameter q of q-analogues, q = exp(2πiθ). We then construct the noncommutative geometry of S^4_θ as given by a spectral triple (A, H, D) and check all axioms of noncommutative manifolds. In a previous paper it was shown that for any Riemannian metric g_µν on S^4 whose volume form √g d^4x is the same as the one for the round metric, the corresponding Dirac operator gives a solution to a certain quartic equation.

(2001; cited by 89, 12 self)
We exhibit large classes of examples of noncommutative finite-dimensional manifolds which are (non-formal) deformations of classical manifolds. The main result of this paper is a complete description of noncommutative three-dimensional spherical manifolds, a noncommutative version of the sphere S^3 defined by basic K-theoretic equations. We find a 3-parameter family of deformations of the standard 3-sphere S^3 and a corresponding 3-parameter deformation of the 4-dimensional Euclidean space R^4. For generic values of the deformation parameters we show that the obtained algebras of polynomials on the deformed R^4_u are isomorphic to the algebras introduced by Sklyanin in connection with the Yang-Baxter equation. Special values of the deformation parameters do not give rise to Sklyanin algebras, and we extract a subclass, the θ-deformations, which we generalize in any dimension and various contexts, and study in some detail.

(Acta Numerica, 2006)
[abstract not available]

(J. Math. Physics, 2000; cited by 44, 2 self)
We give a survey of selected topics in noncommutative geometry, with some emphasis on those directly related to physics, including our recent work with Dirk Kreimer on renormalization and the Riemann-Hilbert problem. We discuss at length two issues. The first is the relevance of the paradigm of geometric space, based on spectral considerations, which is central in the theory. As a simple illustration of the spectral formulation of geometry in the ordinary commutative case, we give a polynomial equation for geometries on the four-dimensional sphere with fixed volume. The equation involves an idempotent e, playing the role of the instanton, and the Dirac operator D. It expresses the gamma-five matrix as the pairing between the operator-theoretic Chern characters of e and D. It is of degree five in the idempotent and four in the Dirac operator, which only appears through its commutant with the idempotent. It determines both the sphere and all its metrics with fixed volume form. We also show, using the noncommutative analogue of the Polyakov action, how to obtain the noncommutative metric (in spectral form) on the noncommutative tori from the formal naive metric. We conclude on some questions related to string theory.

(Commun. Math. Phys., 1999; cited by 37, 18 self)
The Dirac q-monopole connection is used to compute projector matrices of quantum Hopf line bundles for arbitrary winding number. The Chern-Connes pairing of cyclic cohomology and K-theory is computed for the winding number −1. The non-triviality of this pairing is used to conclude that the quantum principal Hopf fibration is non-cleft. Among general results, we provide a left-right symmetric characterization of the canonical strong connections on quantum principal homogeneous spaces with an injective antipode. We also provide, for arbitrary strong connections on algebraic quantum principal bundles (Hopf-Galois extensions), their associated covariant derivatives on projective modules.

(J. Reine Angew. Math., 1998; cited by 26, 0 self)
Contents: 3. The NC-affine space and Feynman-Maslov operator calculus. 4. Detailed study of algebraic NC-manifolds. 5. Examples of NC-manifolds. The term "noncommutative geometry" has come to signify a vast framework of ideas.

(JHEP 0106)
[abstract not available]
Jefferson Park, Chicago, IL 60642
Former teacher tutors math, physics & engineering
...As a result I have extensive experience in physics, statics, dynamics, heat & mass transfer, thermodynamics and controls, as well as the mathematics that support these topics (linear algebra, calculus I, II and III, differential equations). During my masters degree...
Offering 10+ subjects including calculus
Sequence Labelling SVMs Trained in One Pass
Antoine Bordes, Nicolas Usunier and Léon Bottou
ECML PKDD 2008, Part I, LNAI volume 5211, 2008. ISSN 1611-3349

This paper proposes an online solver of the dual formulation of support vector machines for structured output spaces. We apply it to sequence labelling using the exact and greedy inference schemes. In both cases, the per-sequence training time is the same as a perceptron based on the same inference procedure, up to a small multiplicative constant. Comparing the two inference schemes, the greedy version is much faster. It is also amenable to higher-order Markov assumptions and performs similarly on test. In comparison to existing algorithms, both versions match the accuracies of batch solvers that use exact inference after a single pass over the training examples.
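For readers unfamiliar with the trade-off described above, here is a minimal sketch of greedy inference for sequence labelling: each tag is fixed left to right given only the previously chosen tag, instead of searching all tag sequences with Viterbi. The function names and the toy scoring model are illustrative assumptions, not the paper's learned model.

```python
def greedy_decode(tokens, labels, score):
    # Greedy inference: commit to the best tag for each token given the
    # already-fixed previous tag. Per-token cost is O(|labels|) instead
    # of Viterbi's O(|labels|^2), at the price of no global optimality.
    prev = "<s>"  # sentence-start marker
    out = []
    for tok in tokens:
        best = max(labels, key=lambda y: score(tok, prev, y))
        out.append(best)
        prev = best
    return out

# Toy first-order scorer (purely illustrative): vowels prefer tag "V",
# everything else prefers "C", with a small bonus for repeating a tag.
def toy_score(tok, prev, y):
    base = 1.0 if (tok in "aeiou") == (y == "V") else 0.0
    return base + (0.1 if y == prev else 0.0)

tags = greedy_decode(list("aba"), ["V", "C"], toy_score)
```

Because the decision at each position depends only on a bounded window of previous tags, extending the scorer to condition on the last k tags (a higher-order Markov assumption, as the abstract notes) changes nothing in the decoding loop.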