Strict applications of deformation theory in which to dip one's toe
I hesitate to ask a question like this, but I really have tried finding answers to this question on my own and seemed to come up short. I readily admit this is due to my ignorance of algebraic
geometry and not knowing where to look... Then I figured, that's what this site is for!
Here's the short of it:
What are some examples of strict applications of deformation theory? That is, what are examples of problems that can be stated without mentioning deformation theory or moduli spaces and one of
whose solutions uses deformation theory? Please state the problem precisely in your answer, and provide a reference if at all possible :)
Here's the long of it:
I really want to swim in the Kool-Aid fountain of deformation theory and taste of its sweet, sweet purple love, but I'm having trouble. When I wanted to learn about K-theory, I learned about it
through the solution to the Hopf invariant one problem, the solution to the vector fields on spheres problem, and through the Adams conjecture. When I wanted to learn some equivariant stuff, it was
nice to have the solution to the Kervaire invariant one problem as a guiding force. I have trouble learning things in a bubble; I need at least a slight push.
Now, I know that deformation theory is useful for building moduli spaces, but the trouble is that, aside from the ones that appear in homotopy theory, I haven't fully submerged in this sea of
goodness either. The exception would be any example of a strict application that used deformation theory to construct some moduli space and then used this space to prove some tasty fact.
To give you all an idea, here are the only examples I have found (from asking around) that fit my criteria:
1. Shafarevich-Parshin. Let $B$ be a smooth, proper curve over a field and fix an integer $g \ge 2$. Then there are only finitely many non-isotrivial (i.e. general points in base have non-isomorphic
fibers) families of curves $X \rightarrow B$ which are smooth and proper and have fibers of genus $g$.
2. Given $g\ge 0$, then every curve of genus $g$ has a non-constant map to $\mathbb{P}^1$ of degree at most $d$ whenever $2d - 2 \ge g$.
3. There are finitely many curves of a given genus over a finite field.
4. The solution to the Taniyama-Shimura conjecture uses deformations of Galois reps.
1, 2, and 3 are stolen from Osserman's really great note: https://www.math.ucdavis.edu/~osserman/classes/256A/notes/deform.pdf
I really like the theme of 'show there are finitely many gadgets by parameterizing these gadgets by a moduli space with some sort of finite type assumption, then showing no point admits nontrivial
deformations.' Any examples of this sort would be doubly appreciated. (I guess Kovács and Lieblich have an annals paper where they do something along these lines for the higher-dimensional version of
the Shafarevich conjecture, but since they end up counting deformation types of things instead of things, it doesn't quite fit the criteria in my question... but it's still neat!)
Galois representations are definitely a huge thing, and I'd be grateful for any application of their deformation theory that's more elementary than, say... the Taniyama-Shimura conjecture.
So yeah, that's it. Proselytize, laud, wax poetic- make Pat Benatar proud.
The work of Robinson and Angeltveit on the multiplication of Morava $K$-theory is done via a form of obstruction theory. Also, the Goerss-Hopkins-Miller theorem is done this way as well. I believe
Lurie's proof is along these lines too. Or at least, that is how I would phrase the results. Baker-Richter have papers on Gamma-cohomology which is intimately related, as is the work of
Basterra-Mandell on TAQ. I recommend a perusal of Mathscinet as you may stumble on other gems while you are digging (I always do whenever I look at any of the papers by any of those authors). –
Sean Tilson Jun 20 '13 at 6:23
Silly of me to forget to mention Goerss-Hopkins obstruction theory!! And of course I'm omitting the whole derived story- that's what I really want to get at, but I was hoping to get some of the
more classical or algebraic story for the purposes of this question. In any event some of the other references I hadn't heard of- looks like lots of fun! Thanks Sean :) – Dylan Wilson Jun 20 '13
at 6:34
What do you mean by point (3)? – Dan Petersen Jun 20 '13 at 11:09
I kind of despise these types of questions. Isn't there some kind of tag, like "soft question", to flag this kind of question? – Jason Starr Jun 20 '13 at 11:52
@Jon: I know absolutely zero things about that, so in particular I dunno if it fits my criteria either :) – Dylan Wilson Jun 20 '13 at 16:08
2 Answers
One of my favourite examples is the following theorem, due to S. Mori:
Theorem A. Let $X$ be a smooth complex projective variety such that $-K_X$ is ample. Then $X$ contains a rational curve. In fact, through any point $x \in X$ there is a rational curve
$D$ such that $$ 0 < -(D \cdot K_X )\leq \dim X+1.$$
In other words, smooth Fano varieties over $\mathbb{C}$ are uniruled.
The proof of this beautiful result uses deformation theory in a very striking way. The idea is the following. One first takes any map $f \colon C \to X$, where $C$ is a smooth curve with a
marked point $0$ such that $f(0)=x$.
Now by deformation theory of maps one knows that, if one requires that the image of $0 \in C$ is fixed, the morphism $f$ has a deformation space of dimension at least $$h^0(C, f^*T_X)-h^1(C,
f^*T_X) - \dim X = -((f_*C) \cdot K_X)-g(C) \cdot \dim X.$$
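For completeness, this dimension count comes from Riemann-Roch applied to the rank-$\dim X$ bundle $f^*T_X$ on $C$:
$$h^0(C, f^*T_X)-h^1(C, f^*T_X) = \deg f^*T_X + \operatorname{rk}(f^*T_X)\,(1-g(C)) = -((f_*C) \cdot K_X) + \dim X \,(1-g(C)),$$
and requiring the image of the marked point $0$ to stay at $x$ imposes $\dim X$ conditions; subtracting them gives exactly the bound $-((f_*C) \cdot K_X)-g(C) \cdot \dim X$.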
So, whenever the quantity $-((f_*C) \cdot K_X)-g(C) \cdot \dim X$ is positive, there must be a non-trivial family of deformations of the map $f \colon C \to X$ keeping the image of $0$
fixed. Then, by another result of Mori known as bend and break, one is able to show that at some point the image curve splits in several components and that one of them is necessarily a
rational curve passing through $x$.
Instead, when $-((f_*C) \cdot K_X)-g(C) \cdot \dim X$ is not positive we are in trouble. But here comes another brilliant idea of Mori: let's pass to positive characteristic! In fact, in
positive characteristic we may compose $f \colon C \to X$ with (some power of) the Frobenius endomorphism $F_p \colon C \to C$. This increases the quantity $-((f_*C) \cdot K_X)$ without
changing $g(C)$ and allows us to obtain a deformation space which has again strictly positive dimension. So, using the argument above (deformation theory of maps + bend and break), for any
prime integer $p$ we are able to find a rational curve through $x_p \in X_p$, where $X_p$ is the reduction of $X$ modulo $p$ (for the sake of simplicity I'm assuming that $X$ is defined over
the integers).
Finally, a straightforward argument using elimination theory shows that if $X_p$ admits a rational curve through $x_p$ for every prime $p$, then $X$ admits a rational curve through $x$, too.
It is worth remarking that no proof of Theorem $A$ avoiding the characteristic $p$ reduction is currently known.
This kind of argument was first used by Mori in order to prove the following theorem, which settles a conjecture due to Hartshorne:
Theorem B. If $X$ is a smooth complex projective variety of dimension $n$ with ample tangent bundle, then $X \cong \mathbb{P}^n.$
See [S. Mori, Projective manifolds with ample tangent bundle, Ann. of Math. 110 (1979)].
More details about Theorem $A$ (as well as its complete proof) can be found in the books [Debarre, Higher-dimensional algebraic geometry] and [Kollár-Mori, Birational geometry of algebraic varieties].
This is very pretty! And exactly the sort of example I was looking for. Thanks! – Dylan Wilson Jun 20 '13 at 15:02
you are welcome – Francesco Polizzi Jun 20 '13 at 15:13
I guess one could mention some other results in a similar vein: rational connectedness of Fano varieties (Kollár-Miyaoka-Mori), existence of sections for rationally connected
fibrations over curves (Graber-Harris-Starr). – Artie Prendergast-Smith Jun 20 '13 at 16:32
Here are few well-known examples which are not of algebro-geometric nature, where a problem was solved via a reduction to a deformation problem/moduli space problem:
1. Donaldson's work on intersection forms of smooth simply-connected 4-manifolds (definite forms must be diagonalizable); the moduli space in this case is the space of instantons
(self-dual connections).
2. Thurston's work on hyperbolization of Haken 3-manifolds. The moduli space in question was the character variety, i.e., moduli space of $SL(2,C)$-representations of the fundamental
group. The problem of hyperbolization was reduced by Thurston to a certain fixed-point problem (actually, two slightly different problems depending on existence of fibration over the
circle) for a weakly contracting map and solved this way.
3. Margulis' arithmeticity theorem: Every irreducible lattice in a higher rank semisimple Lie group $G$ is arithmetic. The very first step of the proof (actually, due to Selberg) is to look
at the character variety, which is defined over $Z$ and observe that isolated points are fixed by a finite index subgroup of the absolute Galois group. This implies that the lattice is
conjugate to a subgroup of $G(F)$, where $F$ is a number field. The moduli space in this case is again a character variety.
4. Any of hundreds (if not thousands) of papers on application of gauge theory to low-dimensional ($\le 4$) topology, or even higher-dimensional topology as in Ciprian Manolescu's recent
disproof of the triangulation conjecture.
Copyright © University of Cambridge. All rights reserved.
'Slippy Numbers' printed from http://nrich.maths.org/
The number 10112359550561797752808988764044943820224719 is called a 'slippy number' because, when the last digit 9 is moved to the front, the new number produced is the slippy number multiplied by 9.
Find slippy numbers ending in 4 (a small one) and in 2 and 3 (larger ones).
Explain why the slippy number ending in 9 has a unique sequence of digits; can there be more than one slippy number ending in 9?
You might like to write a short program to find other slippy numbers.
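Taking up that suggestion, here is one way such a program might look (a sketch; the formula comes from a short derivation, not from the puzzle page itself). If moving the last digit d of an n-digit number N to the front gives d*N, then d*N = d*10^(n-1) + (N - d)/10, i.e. N = d*(10^n - 1)/(10*d - 1), where n is the smallest exponent with 10^n ≡ 1 (mod 10*d - 1):

```python
def smallest_slippy(d):
    # Smallest number ending in digit d that is multiplied by d when
    # its last digit is moved to the front:
    #     N = d*(10**n - 1) // (10*d - 1),
    # where n is the multiplicative order of 10 modulo 10*d - 1.
    m = 10 * d - 1
    n, power = 1, 10 % m
    while power != 1:
        power = (power * 10) % m
        n += 1
    return d * (10**n - 1) // m

for d in (2, 3, 4, 9):
    n = smallest_slippy(d)
    moved = int(str(n)[-1] + str(n)[:-1])  # move last digit to the front
    assert moved == d * n                  # the slippy property holds
    print(d, n)
```

smallest_slippy(4) recovers the "small one", 102564, and smallest_slippy(9) reproduces the 44-digit number quoted in the puzzle.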
Math Forum Discussions
Topic: ' . . big picture view of the universe, built of space-time, '
Replies: 1 Last Post: Sep 2, 2012 9:52 AM
' . . big picture view of the universe, built of space-time, '
Posted: Sep 1, 2012 7:27 AM
Have Three Little Photons Broken Theoretical Physics?
My comment.
' A world without masses, without electrons, without an
electromagnetic field is an empty world. Such an empty
world is flat. But if masses appear, if charged particles
appear, if an electromagnetic field appears then our world
becomes curved. Its geometry is Riemannian, that is,
non-Euclidian. '
/ Book ' Albert Einstein ' The page 116 . by Leopold Infeld. /
Universe as a whole without masses, without electrons,
without an electromagnetic field is an empty world.
Such an empty world is flat ( infinite flat ).
But if masses appear, if charged particles appear,
if an electromagnetic field appears ( in local places )
then our world becomes curved ( in local places stars
and planets were created ).
From an outside point of view it is local Riemannian geometry, non-Euclidian geometry.
Date Subject Author
9/1/12 ' . . big picture view of the universe, built of space-time, ' socratus@bezeqint.net
9/2/12 Re: ' . . big picture view of the universe, built of space-time, ' socratus@bezeqint.net
limit of fraction with summation
August 25th 2012, 06:37 AM
limit of fraction with summation
it has been some time since I last saw calculus and I'm stuck on one exercise. The point is just to find the limit of
$\frac{ln(x)}{\sum_{k=1}^n \frac{1}{k}}$
What I would do is just rewrite the summation as a definite integral and then differentiate the fraction via l'Hospital's rule. Is that an OK way to go?
August 25th 2012, 06:42 AM
Re: limit of fraction with summation
But the summation is in terms of k, and the numerator is in terms of x!
August 25th 2012, 06:48 AM
Re: limit of fraction with summation
This question does not make sense. Where is the limit? $x\to?$ or $n\to\infty~?$
What is the exact wording of the question?
August 25th 2012, 06:51 AM
Re: limit of fraction with summation
oh .. sorry my bad, there is supposed to be n instead of x and the limit is in +inf
August 25th 2012, 07:08 AM
Prove It
Re: limit of fraction with summation
It is well known that the Harmonic series is divergent, so the denominator tends to $\infty$, and the numerator also tends to $\infty$. Since this goes to $\frac{\infty}{\infty}$, you should be able to apply L'Hospital's Rule.
August 25th 2012, 07:13 AM
Re: limit of fraction with summation
August 25th 2012, 09:15 AM
Re: limit of fraction with summation
There is another way to do it but it requires a piece of knowledge. The Euler-Mascheroni constant is defined as
$\gamma = \lim_{n\to \infty} (H_n - \ln n)$
where $H_n$ is the $n$-th Harmonic number. Now we have
$\lim_{n\to \infty} \frac{\ln n}{H_n} =\lim_{n\to \infty} \frac{\ln n - H_n + H_n}{H_n} = \lim_{n\to \infty} \frac{\frac{\ln n - H_n}{H_n} + 1}{1}$
What's the next step?
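A quick numerical check of both facts (the ratio $\ln n / H_n$ tending to 1, and $H_n - \ln n$ tending to $\gamma \approx 0.5772$):

```python
import math

def harmonic(n):
    # n-th Harmonic number H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10**2, 10**4, 10**6):
    H = harmonic(n)
    print(n, math.log(n) / H, H - math.log(n))
```

The ratio column creeps toward 1 (very slowly, since $H_n - \ln n$ stays near $\gamma$ while both terms grow), and the difference column stabilises near 0.5772.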
basis for the eigenspace corresponding
Do you understand that many people will not just open "Word" files from people they don't know, for fear of viruses? Also, since the federal government prevented Microsoft from giving "Word" away with their operating systems, many people do not have it. If you want people to take the trouble to answer your questions, you should at least take the trouble yourself to type them in.
More importantly, you just post the problems with no attempt yourself to do them. I, and many people here, will be happy to help you, but not do the problem for you. And to help, we have to know what you can do, what you know, and where you have difficulty.
The first problem gives you a 3 by 3 matrix and an eigenvalue, and asks for the eigenspace for that eigenvalue. Do you know what saying that "3" is an eigenvalue means? What equations does that result in? This problem reduces to solving an underdetermined system of 3 equations. Have you got the equations yet?
The second problem requires you to do the same thing for both eigenvalues of a 2 by 2 matrix. Have you found the eigenvalues?
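Since the original matrices aren't shown in the thread, here is what the computation looks like for a made-up 3 by 3 matrix with eigenvalue 3 (everything about the matrix below is hypothetical):

```python
# Hypothetical example -- the thread's actual matrices are not shown.
# Say the 3 by 3 matrix is A below and the given eigenvalue is 3.
A = [[4, 1, -1],
     [2, 5, -2],
     [1, 1,  2]]
lam = 3

# "3 is an eigenvalue" means (A - 3I)v = 0 has nonzero solutions v.
M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
# Here every row of M is a multiple of [1, 1, -1], so the underdetermined
# system collapses to the single equation v1 + v2 - v3 = 0, and the
# eigenspace is 2-dimensional with basis (1, 0, 1) and (0, 1, 1).

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

for v in ([1, 0, 1], [0, 1, 1]):
    assert matvec(A, v) == [lam * x for x in v]   # A v = 3 v, as claimed
```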
Posted by Setareh on Tuesday, November 20, 2007 at 9:52pm.
Craftsmen install 500 sq ft of ceramic tile and 200 sq ft of vinyl tile in one day. An apprentice installs 100 sq ft of ceramic tile and 200 sq ft of vinyl tile in one day. The firm has a job that
requires 2400 sq ft if ceramic tile and 1600 sq ft of vinyl tile. Tessa pays craftsmen $200 per day and apprentices $120 per day.
We have to figure out the choices that will optimize the profit and minimize her cost.
thank you for the help!!!
• Math(Algebra2) - bobpursley, Tuesday, November 20, 2007 at 10:13pm
Cost of ceramic tile laying:
Craftsman: 500/200 = 2.5 sq ft per dollar
Apprentice: 100/120 ≈ 0.83 sq ft per dollar
Cost of vinyl:
Craftsman: 200/200 = 1 sq ft per dollar
Apprentice: 200/120 ≈ 1.67 sq ft per dollar
So looking at productivity, Craftsman can lay ceramic tile well, and apprentice can do the leftover vinyl
I wonder if you really mean AND in the data, or meant OR
If you really meant AND, then start here:
1) Hire 5 craftsman: They lay 2400 ceramic tile AND 1000 sq ft of Vinyl.
Hire three apprentices, they lay 600 square feet of vinyl
Cost of this option: 5*200 + 3*120=1360
Compare this option to ...
2) Hire 4 craftsmen to do 2000 sq ft of ceramic tile, and 800 sq ft of vinyl
and hire 4 apprentice to do 400 sqft of ceramic, and 800 sqft of vinyl
Cost of this option is 4*200 + 4*120= 1280
compare this option to
3) Hire 3 craftsmen to do 1500 sqft ceramic and 600 vinyl, and
Hire 10 apprentices to do 900 sqft of ceramic tile and 1000 sqft of vinyl
Cost= 3*200 + 10*120= 1800 dollars.
It pays to use the most productive folks, option 2
If you meant lays 500 sqft ceramic OR 200 sqft vinyl, that is a different solution.
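The hand comparison above (under the AND reading) can also be set up as a small optimization and checked by brute force; "craftsman-days" and "apprentice-days" below are my own variable names, not from the problem statement:

```python
# Decision variables (my names): c = craftsman-days, a = apprentice-days.
# Under the AND reading: each craftsman-day installs 500 sq ft of ceramic
# and 200 sq ft of vinyl; each apprentice-day installs 100 sq ft of
# ceramic and 200 sq ft of vinyl.  Job needs 2400 ceramic, 1600 vinyl.
best = None
for c in range(30):
    for a in range(30):
        if 500 * c + 100 * a >= 2400 and 200 * c + 200 * a >= 1600:
            cost = 200 * c + 120 * a
            if best is None or cost < best[0]:
                best = (cost, c, a)

print(best)  # (1280, 4, 4): option 2 above, 4 craftsman-days + 4 apprentice-days
```

The search confirms option 2: four craftsman-days plus four apprentice-days, at a minimum cost of $1280.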
Related Questions
Math - Craftsmen install 500 sq ft of ceramic tile and 100 sq ft of vinyl tile ...
Math - Craftsmen install 500 sq ft of ceramic tile and 100 sq ft of vinyl tile ...
Math - Craftsmen install 500 sq ft of ceramic tile and 100 sq ft of vinyl tile ...
Algebra 2 - A construction firm employs two levels of title installers: ...
MATH Word Problem - A pedestal for a statue is in the shape of a hexagon formed ...
Math - Calculate the area of the base of a fountain if the distance from the ...
algebra 2 - Minnie and Steve have arectangular swimming pool that is 12 ft. wide...
algebra1 - Betty wishes to bulid a rectangular dog runalong the side of her ...
algebra - If you need 3 pounds of fertilizer for 100 square feet of soil, how ...
Algebra - Sam is installing ceramic tiles in his bathroom. The area of each tile...
2012 Ribenboim Prize in Number Theory
This year's recipient of the Ribenboim prize in number theory is Dragos Ghioca from the University of British Columbia.
Dragos Ghioca is one of the most energetic, productive, and influential researchers of his generation in the field of arithmetic algebraic geometry. His research is at the interplay of Number Theory,
Algebraic Geometry and Discrete Dynamical Systems. Within the seven years since his PhD from Berkeley Dragos has established himself as one of the leading experts in two very important areas of
current research: the theory of Drinfeld modules and the field of arithmetic dynamics.
In a series of papers, Dragos proves Drinfeld module analogues of classical theorems from Diophantine geometry. Such proofs are generally not direct translations of the classical proofs; they require
significant new ideas, not to mention a deep understanding of the theory. Among Dragos’s results are Drinfeld module analogues of the Mordell–Lang theorem, Lehmer’s conjecture, the Mordell–Weil
theorem, equidistribution results for torsion points, and estimates for integral points.
Recently Dragos has turned to the comparatively new field of arithmetic dynamics. The starting point is an algebraic variety $X$ and a (non-linear) self-map $\phi : X \rightarrow X.$ Classical
(discrete) dynamics is the study of orbits $O_\phi(x) = \{x, \phi(x), \phi^2(x), . . .\}$ of points $x \in X$ under iteration of $\phi$. Arithmetic dynamics is the study of arithmetic properties of
orbits. The subject is driven by a number of deep conjectures, many of them dynamical analogues of classical theorems and conjectures in arithmetic geometry, such as the celebrated Mordell-Lang
conjecture. Here Dragos and his co-workers found fundamental results, for instance, a surprising counterexample to a conjecture of Zhang. This led to reformulations and also to some positive results,
i.e. proofs of certain other natural analogues which survived the counterexample.
The prize presentation and lecture will happen during CNTA XII in Lethbridge, on Wednesday June 20 at 9:00 AM in PE250.
Oaks, PA Algebra 2 Tutor
Find an Oaks, PA Algebra 2 Tutor
...I look forward to tutoring your chemistry student! I obtained a history minor while at the University of Delaware. I also took the AP exam in European history in high school and scored a 4 on
the exam.
14 Subjects: including algebra 2, chemistry, physics, geometry
...SAT/ACT Math: just as important as knowing the content, is having a strategy to complete these sections. When to do a problem, when not to. When to guess, when not to.
35 Subjects: including algebra 2, chemistry, English, reading
...I am graduated from Drexel university last year majoring in Mechanical Engineering and minored in Business Administration. I am currently employed with a company as design engineer but want to
fill my free time with something productive and at the same time earn a second income to pay off my hea...
8 Subjects: including algebra 2, algebra 1, precalculus, trigonometry
...I have all the necessary study materials, including half a dozen actual ACT tests, so you will not need to purchase extra books. The ACT science section covers basic chemistry, physics, and
biology. Although technically, you don't actually need to know much about these subjects to be successful on the test.
34 Subjects: including algebra 2, English, writing, physics
...Her goal is to practice pediatric medicine in inner city poverty stricken communities. Latoya has been heavily involved in advocating education and excellence in young people through various
avenues. These avenues include FOCUS--Facilitating Opportunity and Climate for Under-represented Student...
13 Subjects: including algebra 2, chemistry, geometry, biology
Related Oaks, PA Tutors
Oaks, PA Accounting Tutors
Oaks, PA ACT Tutors
Oaks, PA Algebra Tutors
Oaks, PA Algebra 2 Tutors
Oaks, PA Calculus Tutors
Oaks, PA Geometry Tutors
Oaks, PA Math Tutors
Oaks, PA Prealgebra Tutors
Oaks, PA Precalculus Tutors
Oaks, PA SAT Tutors
Oaks, PA SAT Math Tutors
Oaks, PA Science Tutors
Oaks, PA Statistics Tutors
Oaks, PA Trigonometry Tutors
differential equations homework urgent!
December 23rd 2008, 01:27 AM #1
Dec 2008
differential equations homework urgent!
Solve the homogenous differential equation (x^2 – xy + y^2)dx + x^2dy = 0 by completing the steps below
a. define a new variable in terms of the existing variables. State the resulting differential equation
b. using part a, substitute into the given differential equation. Then simplify. Show your work.
c. Integrate the resulting equation. Show your work
d. Using the definition from part a, rewrite your answer in terms of the original variables. Show your work.
December 23rd 2008, 03:33 PM #2
Divide top & bottom by $x^2$ and then put $y=xz.$ (This seems to be pure homework, I hope you can do something on your own.)
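Carrying the hint through, the steps (a)-(d) look like this:
$$\frac{dy}{dx} = -\frac{x^2 - xy + y^2}{x^2} = -\left(1 - \frac{y}{x} + \frac{y^2}{x^2}\right).$$
Substituting $y = xz$, so that $\frac{dy}{dx} = z + x\frac{dz}{dx}$, gives
$$z + x\frac{dz}{dx} = -1 + z - z^2 \quad\Longrightarrow\quad x\frac{dz}{dx} = -(1+z^2),$$
which separates; integrating,
$$\int \frac{dz}{1+z^2} = -\int \frac{dx}{x} \quad\Longrightarrow\quad \arctan z = -\ln|x| + C,$$
and substituting back $z = y/x$ yields the implicit solution $\arctan(y/x) + \ln|x| = C$.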
Fort Myer, VA Precalculus Tutor
Find a Fort Myer, VA Precalculus Tutor
...There, I tutored calculus to several students. Post-undergraduate, I began as a volunteer group tutor before also becoming a private one-on-one tutor as well. I enjoy every minute of it, and
it's been one of the most rewarding experiences of my life so far, one that has inspired me to become a secondary math teacher.
15 Subjects: including precalculus, chemistry, calculus, geometry
...They need to be challenged with rich problems, while being provided with the tools to tackle those problems with creativity and confidence. With several years of experience teaching math and
tutoring, I know how to help students build both their conceptual understanding of mathematics and their ...
16 Subjects: including precalculus, English, writing, calculus
...For students who don’t play math games and compete with siblings over who can compute the tax and tip the fastest at dinner, I am here to help with just about any math subject. I offer
tutoring for any high school math subject up to and including AP Calculus AB and BC. I also help students improve their scores for the quantitative portions of the SAT and ACT.
11 Subjects: including precalculus, calculus, geometry, algebra 1
...I teach by example and practice.I was an elementary school principal for about 10 years. I am able to tutor young children in most areas of mathematics. I consider myself to be patient.
12 Subjects: including precalculus, calculus, geometry, algebra 1
...I take time to explain, use graphics, examples and make you work during the lesson. Also, you will have to do some homework and we will review it together. This tutoring will help you to
understand and be fluent with all pre-algebra topics.
6 Subjects: including precalculus, Spanish, calculus, prealgebra
Related Fort Myer, VA Tutors
Fort Myer, VA Accounting Tutors
Fort Myer, VA ACT Tutors
Fort Myer, VA Algebra Tutors
Fort Myer, VA Algebra 2 Tutors
Fort Myer, VA Calculus Tutors
Fort Myer, VA Geometry Tutors
Fort Myer, VA Math Tutors
Fort Myer, VA Prealgebra Tutors
Fort Myer, VA Precalculus Tutors
Fort Myer, VA SAT Tutors
Fort Myer, VA SAT Math Tutors
Fort Myer, VA Science Tutors
Fort Myer, VA Statistics Tutors
Fort Myer, VA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Arlington, VA precalculus Tutors
Brentwood, MD precalculus Tutors
Chevy Chase Village, MD precalculus Tutors
Chevy Chs Vlg, MD precalculus Tutors
Colmar Manor, MD precalculus Tutors
Cottage City, MD precalculus Tutors
Crystal City, VA precalculus Tutors
Dunn Loring precalculus Tutors
Fairmount Heights, MD precalculus Tutors
Forest Heights, MD precalculus Tutors
Glen Echo precalculus Tutors
Martins Add, MD precalculus Tutors
Martins Additions, MD precalculus Tutors
Rosslyn, VA precalculus Tutors
Somerset, MD precalculus Tutors | {"url":"http://www.purplemath.com/fort_myer_va_precalculus_tutors.php","timestamp":"2014-04-19T02:18:58Z","content_type":null,"content_length":"24430","record_id":"<urn:uuid:76337943-a32b-4ce5-9a6e-0e48949ce113>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00555-ip-10-147-4-33.ec2.internal.warc.gz"} |
Institute for Mathematics and its Applications (IMA)
- Hot Topics Workshops and Special Events:
Analysis and Computation of Coherent Structures
Grégory Faye University of Minnesota, Twin Cities
David J.B. Lloyd University of Surrey
Theoretical and computational aspects of applied mathematical research on coherent structures are relevant to subjects as diverse as ecology, chemistry, material sciences, sociology, pattern forming
systems, and neurosciences, where remarkable agreement between theory and experiments can be claimed in many of these fields. The aim of this one-day workshop is to present an overview of some of the
techniques, such as spatial dynamics, normal forms, geometric singular perturbation theory and numerical continuation, developed in the study of coherent structures in reaction-diffusion equations,
neural field equations, and the prototypal Swift-Hohenberg equation. The program will consist of tutorial presentations and research talks on subjects ranging from mathematical and computational
analysis to concrete applications. | {"url":"http://www.ima.umn.edu/2012-2013/SW5.28.13/index.html","timestamp":"2014-04-18T08:04:39Z","content_type":null,"content_length":"20870","record_id":"<urn:uuid:8306f047-0bd6-44ca-84ad-049d9969ead9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Light Bulb Resistance
Added by Craig Napier on Dec 15, 2008
This activity uses circuit models to show that a bulb's resistance will increase with increasing current, and explains that this is due to the bulb filament warming up. Pupils should be able to
analyse their results by scaling and plotting a line graph, and describe a graph with a dependent variable that has a decreasing rate of increase.
This item teaches:
• about the relationship between the resistance of a bulb and the current flowing through it
• The pupils should know that a bulb's resistance increases with increasing current.
• The pupils should know that the resistance increase is due to the bulb filament warming up.
• The pupils should be able to scale and plot a line graph from a table.
• The pupils should be able to describe a graph with a decreasing rate of increase of the dependent variable.
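The qualitative behaviour the activity describes can be illustrated with a toy model (every parameter value below is made up for illustration; none is taken from the activity). Resistance is assumed to grow linearly with filament temperature, the steady-state temperature rise is taken proportional to the dissipated power, and the operating point is found by fixed-point iteration:

```python
# Toy model -- all numbers here are illustrative, not from the activity.
# Filament resistance grows linearly with temperature,
#     R(T) = R0 * (1 + alpha * (T - T0)),
# and the steady-state temperature rise is assumed proportional to the
# dissipated electrical power, T = T0 + k * I**2 * R.
R0, T0, alpha, k = 10.0, 293.0, 4.5e-3, 50.0

def operating_point(V, iters=200):
    R = R0
    for _ in range(iters):            # fixed-point iteration
        I = V / R
        T = T0 + k * I * I * R        # temperature from power
        R = R0 * (1 + alpha * (T - T0))
    return I, R

for V in (1.0, 3.0, 6.0):
    I, R = operating_point(V)
    print(f"V = {V:.1f} V   I = {I:.3f} A   R = {R:.2f} ohm")
```

The converged resistance climbs as the drive voltage (and hence current) rises, which is the increasing-resistance behaviour pupils should see in their graph.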
Reed Solomon decoder
naliali wrote:
> I suppose to implement a Reed Solomon decoder for Inmarsat video
> receiver, but I know very little about its specification.
> unfortunately I couldn't find any useful information on the net
> about FEC used in Inmarsat.
> I know the following information about this RS :
> - it is over GF(32) by primitive polynomial p(x) = x^5+x^2+1 = 37
> - Data length is 15 and parity length is 16, so having RS(31,15, 37)
> but the major problem is that I don't know its generator polynomial,
> g(x). Using the default Matlab RS encoder, I found that Matlab uses g(x)
> = (x+a^1)(x+a^2)...(x+a^16) as the generator polynomial for rs(31,15), but
> I'm not sure it's the same as the g(x) used in the Inmarsat standard.
This has nothing to do with the C language, and is off-topic here
on c.l.c. Try comp.programming. F'ups set.
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
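On the technical question itself: the Matlab-default generator polynomial can be reproduced directly from the stated field parameters. A minimal Python sketch follows — note that the narrow-sense choice g(x) = (x+a^1)...(x+a^16) is the poster's guess at Matlab's default, not a confirmed Inmarsat parameter:

```python
PRIM = 0b100101  # x^5 + x^2 + 1 = 37, the stated primitive polynomial of GF(32)

def gf_mul(a, b):
    """Carry-less multiply of two GF(32) elements, reduced modulo PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100000:  # degree reached 5: reduce
            a ^= PRIM
    return r

# Powers of the primitive element alpha = x (encoded as the integer 2).
alpha_pow = [1]
for _ in range(30):
    alpha_pow.append(gf_mul(alpha_pow[-1], 2))

def poly_mul(p, q):
    """Multiply polynomials with GF(32) coefficients, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

# g(x) = product of (x + alpha^i) for i = 1..16 -- the assumed Matlab
# default, which may or may not match the Inmarsat standard.
g = [1]
for i in range(1, 17):
    g = poly_mul(g, [alpha_pow[i], 1])
```

The resulting g has degree 16, matching the 16 parity symbols of an RS(31,15) code.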
Realizations in Biostatistics
Biostatistics, clinical trial design, critical thinking about drugs and healthcare, skepticism, the scientific process.
Monday, July 7, 2008
The mysterious
has posted a (slightly enhanced, but presumably not in a data-changing fashion) graph summarizing a subgroup of an analysis on ENHANCE, and it showed that adding ezetimibe to a statin regimen didn't
do any good. At all. The one apparently statistically significant good result for the 2nd quartile
looks spurious to me and really makes no sense except as a Type I error. Conversely, the statistically significantly bad result for the 3rd quartile looks like another Type I error. Overall, this looks like nothing or, if anything, a modestly negative effect.
Sunday, July 6, 2008
So, continuing in a series on blinding (i.e. the hiding of treatment group assignments from study personnel until all the data have been collected and cleaned and analysis programs written), I will
talk about possible ways to do so-called "functional unblinding" -- that is, effectively getting an answer on a drug's treatment effect before treatment codes are released for analysis. Here I will
assume that treatment codes are actually remaining sealed (so we aren't talking about "cheating"). This kind of cheating is a serious breach of ethics and merits its own discussion, but I'm saving
that for another time.
Also, this post was inspired by the ENHANCE trial and the fallout from it, but I'm not going to make any further comment about it except to say that there are a lot of other features to that
situation that make it, at best, appear suspicious. (And to the wrong people.)
So, on to the question, "Is it possible to determine a treatment effect without unblinding the trial?" My answer: a very risky yes, and in some circumstances. I think it is going to be very difficult
to show that no treatment effect exists, while a huge treatment effect will be clear. Since I'll be throwing several statistical tools at the problem, this post will only be the first in a series.
The first method is graphical and is called kernel density estimation. This method has the nice feature that it can quickly be done in R (and, I think, most other statistical packages) and shows nice graphs. Here I simulated 3 possible drug trials. In the first one, the treatment had no effect whatsoever. In the second one, the drug had
what I would consider a moderate effect (equal to the standard deviation of the outcome). In the third one, the drug had a huge effect (and probably what would not commonly be seen in drug trials
today--3 times the standard deviation of the outcome). I ran the default kernel density estimate in R (using the density() function with defaults), and came up with the image accompanying the post. The top graph looks like a normal distribution graph, as one would expect. The middle graph also looks like a normal distribution, but it is more spread out than the top one. The third one clearly shows two groups.
Identifying huge effects seems to be pretty easy, at least by this method. Identifying moderate effects is a whole lot harder, and distinguishing them from no effect is a bit risky.
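To make the experiment reproducible, here is a Python rendering of the same simulation. The post used R's density(); the sample size and the hand-rolled Gaussian KDE with Silverman's rule-of-thumb bandwidth below are my assumptions:

```python
import math
import random

random.seed(0)
N = 500  # subjects per arm (assumed; the post doesn't state its sample size)

def kde(data, xs):
    """Gaussian kernel density estimate, Silverman rule-of-thumb bandwidth."""
    m = len(data)
    mu = sum(data) / m
    sd = math.sqrt(sum((v - mu) ** 2 for v in data) / m)
    bw = 1.06 * sd * m ** -0.2
    norm = m * bw * math.sqrt(2.0 * math.pi)
    return [sum(math.exp(-0.5 * ((x - v) / bw) ** 2) for v in data) / norm
            for x in xs]

xs = [i * 0.025 - 4.0 for i in range(441)]      # grid from -4 to 7
trials = {}
for effect in (0.0, 1.0, 3.0):                  # effect in outcome-SD units
    pooled = ([random.gauss(0.0, 1.0) for _ in range(N)] +
              [random.gauss(effect, 1.0) for _ in range(N)])
    mu = sum(pooled) / len(pooled)
    sd = math.sqrt(sum((v - mu) ** 2 for v in pooled) / len(pooled))
    trials[effect] = (kde(pooled, xs), sd)
```

The pooled spread grows with the effect size, and only the 3-SD trial develops a visible dip between two modes — matching the three graphs described above.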
However, this isn't the only method of analyzing this problem, and so I will talk about some other methods next time.
Wednesday, July 2, 2008
So, I've been meaning to discuss this for some time, and will do so, but I will note that Sen. Grassley thinks blinding doesn't matter on the ENHANCE trial--that simulations could have been run to assess statistical significance on the basis of blinded data.
Of course, this is disturbing on several levels. I'm going to argue that kind of analysis is possible but risky. At the same time, this will make blinding arguments much weaker. As it stands now,
anyone who lays eyes on an unsealed randomization schedule, the results of an unblinded analysis, or any summary that might involve unblinded analysis is considered unblinded and therefore should not
make decisions that influence further conduct of the study. The worst case scenario of this new argument is that anybody with blinded data and the potential knowledge of how to assess statistical
significance based on blinded data will be considered unblinded.
Now, we're getting into murky territory. | {"url":"http://realizationsinbiostatistics.blogspot.com/2008_07_01_archive.html","timestamp":"2014-04-17T09:52:13Z","content_type":null,"content_length":"161875","record_id":"<urn:uuid:43c41f6c-ed5c-47fe-a069-cc4827008418>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
- ACM Transactions on Graphics, 1999
Cited by 53 (5 self)
this paper, we present a system that exploits object-space, rayspace, image-space and temporal coherence to accelerate ray tracing. Our system uses per-surface interpolants to approximate radiance,
while conservatively bounding error. The techniques we introduce in this paper should enhance both interactive and batch ray tracers.
- Reliable Computing, 1998
Cited by 34 (2 self)
. The expansion of complicated functions of many variables in Taylor polynomials is an important problem for many applications, and in practice can be performed rather conveniently (even to high
orders) using polynomial algebras. An important application of these methods is the field of beam physics, where often expansions in about six variables to orders between five and ten are used.
However, often it is necessary to also know bounds for the remainder term of the Taylor formula if the arguments lie within certain intervals. In principle such bounds can be obtained by interval
bounding of the (n+1)-st derivative, which in turn can be obtained with polynomial algebra; but in practice the method is rather inefficient and susceptible to blow-up because of the need of repeated
interval evaluations of the derivative. Here we present a new method that allows the computation of sharp remainder intervals in parallel with the accumulation derivatives up to order n. The method
is useful for a...
- Computer Graphics Forum, 1996
Cited by 30 (15 self)
. We discuss adaptive enumeration and rendering methods for implicit surfaces, using octrees computed with affine arithmetic, a new tool for range analysis. Affine arithmetic is similar to standard
interval arithmetic, but takes into account correlations between operands and sub-formulas, generally providing much tighter bounds for the computed quantities. The resulting octrees are accordingly
much smaller, and the rendering faster. We also describe applications of affine arithmetic to intersection and ray tracing of implicit surfaces. keywords: cellular models, interval analysis,
rendering, implicit surfaces. 1 Introduction Implicit surfaces have recently become popular in computer graphics and solid modeling. In order to exploit existing hardware and algorithms, it is often
necessary to approximate such surfaces by models with simpler geometry, such as polygonal meshes or voxel arrays. Let S be a surface defined implicitly by the equation h(x; y; z) = 0. A simple and
general techn...
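A toy illustration of the key claim in these abstracts: affine arithmetic tracks correlations between operands via shared noise symbols, so a quantity like x - x collapses to zero instead of the plain interval-arithmetic answer. This is a minimal sketch of my own — only subtraction is implemented:

```python
class Affine:
    """Affine form x0 + sum(x_i * eps_i), with each eps_i in [-1, 1]."""
    def __init__(self, x0, terms=None):
        self.x0 = x0
        self.terms = dict(terms or {})

    def __sub__(self, other):
        # Shared noise symbols cancel term by term -- the source of tightness.
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) - v
        return Affine(self.x0 - other.x0, terms)

    def interval(self):
        rad = sum(abs(v) for v in self.terms.values())
        return (self.x0 - rad, self.x0 + rad)

x = Affine(5.0, {1: 1.0})   # x in [4, 6], carried by noise symbol eps_1
naive = (4 - 6, 6 - 4)      # plain interval arithmetic: [-2, 2]
tight = (x - x).interval()  # affine arithmetic: exactly (0.0, 0.0)
```

Standard interval arithmetic forgets that both operands are the same quantity; the affine form remembers, which is why the octrees and quadtrees in the papers above come out so much smaller.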
, 2001
Cited by 19 (0 self)
We provide an overview of Granular Computing- a rapidly growing area of information processing aimed at the construction of intelligent systems. We highlight the main features of Granular Computing,
elaborate on the underlying formalisms of information granulation and discuss ways of their development. We also discuss the concept of granular modeling and present the issues of communication
between formal frameworks of Granular Computing. © 2007 World Academic Press, UK. All rights reserved.
- In Graphics Interface, 1996
Cited by 18 (7 self)
We describe a variant of a domain decomposition method proposed by Gleicher and Kass for intersecting and trimming parametric surfaces. Instead of using interval arithmetic to guide the
decomposition, the variant described here uses affine arithmetic, a tool recently proposed for range analysis. Affine arithmetic is similar to standard interval arithmetic, but takes into account
correlations between operands and sub-formulas, generally providing much tighter bounds for the computed quantities. As a consequence, the quadtree domain decompositions are much smaller and the
intersection algorithm runs faster. keywords: surface intersection, trimming surfaces, range analysis, interval analysis, CAGD.
, 1998
Cited by 14 (12 self)
Mean value analysis (MVA) is a well-known solution technique for separable closed queueing networks used in performance modeling of computer and communication systems. In many cases, like for
sensitivity analysis or with inaccurate model input parameters, intervals are more appropriate as model inputs than single values. This paper presents a version of the MVA algorithm for separable
closed queueing networks with one customer class consisting of load-independent queueing centers as well as delay devices, which accepts both single values and intervals as input parameters in
arbitrary combination. Monotonicity of the model outputs with respect to all input parameters is proved and these monotonicity properties are used to construct a low cost intervalversion of the MVA
algorithm providing exact output intervals as results. Thus, dependency problems commonly arising with the interval evaluation of arithmetic expressions are avoided without significant increase in
computation costs. Addit...
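For orientation, the point-valued algorithm being intervalized here is classic exact MVA for a single customer class. The sketch below is my own minimal rendering (load-independent queueing centers plus an optional delay/think time), not the paper's interval version; because the outputs are monotone in the inputs, as the abstract notes, an interval variant can simply evaluate it at the interval endpoints:

```python
def mva(demands, n_customers, think=0.0):
    """Exact single-class MVA; demands[k] = visit count * mean service time."""
    q = [0.0] * len(demands)      # mean queue lengths with 0 customers
    x = 0.0
    for n in range(1, n_customers + 1):
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # residence times
        x = n / (think + sum(r))                           # system throughput
        q = [x * rk for rk in r]                           # Little's law
    return x, q

x, q = mva([1.0, 0.5], 4)  # two centers, four customers, no think time
```

With a bottleneck service demand of 1.0, throughput approaches but never exceeds 1 job per unit time as customers are added.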
- Reliable Computing, 2001
Cited by 14 (6 self)
Abstract. Recently, an alternative interval approximation F(X) for enclosing a factorable function f(x) in a given box X has been suggested. The enclosure is in the form of an affine interval function F(X) = a_1 X_1 + ... + a_n X_n + B, where only the additive term B is an interval, the coefficients a_i being real numbers. The approximation is applicable to continuously differentiable, continuous and even discontinuous functions. In this paper, a new algorithm for determining the coefficients a_i and the interval B of F(X) is proposed. It is based on the introduction of a specific generalized representation of intervals which permits the computation of the enclosure considered to be fully automated.
- In Logic Programming: Proceedings of the 1994 International Symposium, 1994
Cited by 13 (3 self)
Existing interval constraint logic programming languages, such as BNR Prolog, work under the framework of interval narrowing and are deficient in solving linear systems, which constitute an important
class of problems in engineering and other applications. In this paper, an interval linear equality solver, which is based on generalized interval arithmetic and Gaussian elimination, is proposed. We
show how the solver can be adapted to incremental execution and incorporated into a constraint logic programming language already equipped with a non-linear solver based on interval narrowing. The
two solvers interact and cooperate during computation, resulting in a practical interval constraint arithmetic language CIAL. A prototype of CIAL, based on CLP(R), is constructed and compared
favourably against several major constraint logic programming languages.
- In Proceedings of the Twelfth International Conference on Logic Programming, 1994
Cited by 12 (1 self)
We propose the use of the preconditioned interval Gauss-Seidel method as the backbone of an efficient linear equality solver in a CLP(Interval) language. The method, as originally designed, works
only on linear systems with square coefficient matrices. Even imposing such a restriction, a naive incorporation of the traditional preconditioning algorithm in a CLP language incurs a high
worst-case time complexity of O(n^4), where n is the number of variables in the linear system. In this paper, we generalize the algorithm for general linear systems with m constraints and n
variables, and give a novel incremental adaptation of preconditioning of O(n 2 (n + m)) complexity. The efficiency of the incremental preconditioned interval Gauss-Seidel method is demonstrated using
large-scale linear systems. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1041448","timestamp":"2014-04-21T08:29:56Z","content_type":null,"content_length":"37200","record_id":"<urn:uuid:91d75e38-6e10-41ce-81a7-92794b1945e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
Partial differential equations with minimal smoothness and applications
B. Dahlberg
In recent years there has been a great deal of activity in both the theoretical and applied aspects of partial differential equations, with emphasis on realistic engineering applications, which
usually involve lack of smoothness. On March 21-25, 1990, the University of Chicago hosted a workshop that brought together approximately forty-five experts in theoretical and applied aspects of these
subjects. The workshop was a vehicle for summarizing the current status of research in these areas, and for defining new directions for future progress - this volume contains articles from
participants of the workshop.
Weakly elliptic systems with obstacle constraints 1
Some remarks on Widder's theorem and uniqueness 15
On null sets of p-harmonic measures 33
Objective NHL
The other day, I wrote about how, in any particular season, the sum of a team's shooting and save percentage is correlated with how much time that team spent playing with the lead, and how this relationship is, in turn, related to shot differential.
The purpose of this post is to elaborate upon that.
Firstly, the issue of causation. While it goes without saying that correlation does not imply causation, I think it's reasonable to assume that there some sort of causal relationship here.
I think that the arrow of causation is bi-directional. For one, a team that is lucky or good with the percentages when the score is tied will, on average, tend to play with the lead more. In this
sense, having a good team PDO number causes a team to play more with the lead.
On the other hand, however, I think that playing with the lead is, in and of itself, beneficial to shooting and save percentage. I'm basing this assumption on the fact that shot ratios are subject to the leading/trailing effect. I suspect that there's some sort of trade-off involved whereby the leading team's advantage in shot ratio is met with a corresponding disadvantage in the percentages.
Thus, good percentages lead to playing with the lead more, which in turn begets good percentages.
Secondly, my prediction is that playing with the lead accounts for the fact that the spread in even strength shooting percentage is somewhat larger than what would be predicated by chance alone.
Here is what has been demonstrated thus far:
1. The distribution of team EV S% when the score is tied is entirely random.
2. There are no 'real effects' with respect to EV S% when the score is tied. That is to say, it has no sustain.
3. Some of the variation in overall EV S% at the team level is real. That is to say, there is more variation than what would be predicted from chance alone.
This being the case, the logical implication is that the playing to score effect is one of - perhaps the only - non-random contributions to EV S%.
As a preliminary test for this hypothesis, I looked at the relationship between [minutes played with the lead - minutes played trailing] and various even strength variables for the 2008-09 season.
The results are contained below:
While the results are not unequivocally supportive, I think it tends to accord with my theory.
The teams that do better with the percentages when the score is tied at EV tend to play more with the lead overall - that's not unexpected. Moreover, and perhaps more importantly, teams that did
better with the percentages at EV when the score wasn't tied tended to play more with the lead as well.
Of course, I'll refrain from saying anything with confidence until further analysis is performed.
Depicted above is a graph showing the relationship between playing with the lead and PDO number at the team level for last season. The teams coded in black are teams that had an aggregate shot
differential greater than 100 last season. Teams coded in red are teams that had a negative shot differential less than -100. Teams coded in white are teams that had a shot differential between 100
and -100.
Team PDO is defined as the sum of team shooting percentage and team save percentage. Unlike conventional PDO numbers, these figures are not solely for even strength play - special teams play is
included. The same is true for the minutes played data. However, empty netters have been excluded in calculating each team's shooting and save percentage.
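As a concrete rendering of that definition (the numbers are invented, and expressing PDO in percentage points is my choice of scaling):

```python
def team_pdo(goals_for, shots_for, goals_against, shots_against):
    """Team shooting % plus team save %, in percentage points.

    All situations included; empty-net goals and shots are assumed to be
    excluded from the inputs, as in the post.
    """
    shooting = goals_for / shots_for
    save = 1.0 - goals_against / shots_against
    return 100.0 * (shooting + save)

# A team that converts 10% of its shots and stops 90% of the opposition's
# sits at the break-even mark of ~100.
pdo = team_pdo(250, 2500, 250, 2500)
```

Any team whose percentages run hot on either side of the puck drifts above 100, which is why time spent protecting a lead shows up in this number.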
Playing with the lead is favorable to the percentages. The relationship is quite strong, too - the correlation between [Minutes played leading - Minutes played trailing] and Team PDO was 0.63 for
last season. This is similar to the correlations observed in other seasons.
Only 2007-08 is anomalous. And even then, the correlation is positive.
What's interesting, however, is how the relationship varies according to shot differential.
I've long been opposed to the idea that there exists a relationship between shot totals and the percentages at the team level. I've been particularly opposed to the idea that there is a relationship
between goaltender save percentage and number of shots faced. Now, in fairness, there isn't much of a relationship between the two in general. Shown below is the correlation between Team PDO and team
shot differential for every season since 2002-03.
Thus, only in 2007-08 was there anything of a relationship. The correlations for every other season are insignificantly different from zero. On a related note, I did the same thing for shots against
and goaltender save percentage in a previous post and obtained similar results.
Of course, the teams that get outshot over the course of a season tend to be the teams that are consistently playing from behind. (The correlation is approximately 0.5-0.6).
Once this fact that is controlled for, a positive relationship between Team PDO and shot differential emerges. That is to say, teams with negative differentials tend to do much better in terms of the
percentages than what would otherwise be predicted on the basis of their [Minutes played leading - Minutes played trailing] differential.
To illustrate this, I assigned each team an expected PDO number based upon its [Minutes played leading - Minutes played trailing] differential. I then determined the correlation between expected PDO
and shot differential for each of the involved seasons.
While the strength of the correlation varies from year to year, it's apparent that having a negative shot differential allows a team to outperform it's expected PDO.
I suspect that this is true for the following reasons:
The team that plays with the lead will tend to have a higher scoring chance/shot ratio than a team that plays from behind. This is because a team that plays from behind is forced to take more chances
in an attempt to tie the score.
However, a team that has a good shot differential will tend to get the better of the play regardless of whether it is leading or trailing. Likewise, a team with a poor shot differential will tend to
get dominated territorially regardless of goal state.
To use a concrete example, if San Jose is playing Florida, and San Jose is winning, San Jose is still likely getting the better of the play. The puck will tend to spend much more time in Florida's
end than in San Jose's. Therefore, while San Jose will surely still end up outchancing the Panthers, it is likely that the Panthers will end up with the better scoring chance/shots ratio on account
of generating more of its shots through odd man rushes and the like (rather than, say, shots from the periphery of the offensive zone that are generated through periods of sustained pressure).
Anyway, I plan to analyze the data in more detail in the future. I think it might go a long way in accounting for some of the more anomalous teams over the past few years (2006-07 Predators, 2006-07
Sabres, 2007-08 Canadiens, and so forth). I also think that it might have some utility in terms of goaltender analysis.
One more thing: Intuitively, I would expect that the leading-trailing effect would be most pronounced at even strength.
As much as I would have liked to confine the data to even strength play only, that wasn't possible. Granted, the correlation between leading-trailing differential and leading-trailing differential at
even strength is bound to be quite high.
I've been doing a bit of work with the Zone Shift stat as of late.
For those unfamiliar, Zone Shift is a stat conceptualized by Vic Ferrari, who has from time-to-time discussed the metric at his blog.
For individual players, Zone Shift is calculated as follows:
[EV Shifts Started in the Defensive Zone - EV Shifts Started in the Offensive Zone] -
[EV Shifts Ended in the Defensive Zone - EV Shifts Ended in the Offensive Zone]
What Zone shift is essentially measuring, albeit somewhat crudely, is the ability of the player to move the puck in the right direction - a valuable, if underrated, asset to have as a player.
Having said that, in browsing through the data, I couldn't help but notice that the players with the best Zone Shift numbers tended to take a large proportion of defensive zone draws relative to
their teammates.
In order to quantify the effect, I calculated each team's aggregate zone shift ratio - that is, EV Defensive Zone draws/EV Offensive Zone Draws - and multiplied that ratio by one hundred. This stat
can be termed 'TEAM ZONE RATIO.' To give a concrete example, the Thrashers were destroyed territorially this year at EV and took roughly 1.34 EV Defensive Zone draws for each Offensive Zone draw, thus
giving them a TEAM ZONE RATIO figure of approximately 134.
I then figured out the exact same stat for all players - that is, for all EV faceoffs that the player was on the ice for when his shift BEGAN - in the league that were on the ice for at least 50 EV
faceoffs in all three zones (Defensive, Offensive, Neutral). We'll call this figure PLAYER ZONE RATIO STARTING.
I then subtracted this figure from the TEAM ZONE RATIO of that player's team. This stat can be called 'PLAYER ZONE DIFFERENTIAL.'
Again, to give a concrete example, Colby Armstrong took approximately 1.51 EV Defensive Zone draws for each EV Offensive Zone Draw, therefore giving him a PLAYER ZONE RATIO STARTING figure of around
151, and a PLAYER ZONE DIFFERENTIAL of 17 (151-134=17).
I then figured out each player's zone ratio for all shifts that ended with him on the ice. We'll term this PLAYER ZONE RATIO ENDING. Going back to Armstrong again, he ended 1.16 shifts in his own
zone for every faceoff ended in other team's end of the rink, therefore giving him a PLAYER ZONE RATIO ENDING number of 116.
Finally, I subtracted each player's ZONE RATIO ENDING number from his ZONE RATIO STARTING number in order to produce a ZONE SHIFT number. Armstrong's was around 35, which is pretty good - one of the
best in the league, in fact.
It appears that starting a high proportion of your EV faceoffs in your own zone relative to your team average - in other words, having a high PLAYER ZONE DIFFERENTIAL - is pretty favorable toward
ZONE SHIFT. Among all players on the ice for at least 50 EV faceoffs in each zone, the correlation was 0.80. Moreover, each unit increase in PLAYER ZONE DIFFERENTIAL is worth approximately a 0.88
increase in ZONE SHIFT. In other words, the effect is considerable.
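The bookkeeping above condenses into a few lines, using the Colby Armstrong figures as the worked example. The final adjustment is an assumption on my part — it simply backs out the 0.88-per-unit regression slope quoted above, since the post does not spell out its exact correction:

```python
def zone_ratio(def_draws_per_off_draw):
    """Defensive-zone draws per offensive-zone draw, scaled by 100."""
    return 100.0 * def_draws_per_off_draw

team_ratio = zone_ratio(1.34)    # Thrashers team figure: ~134
start_ratio = zone_ratio(1.51)   # Armstrong, shifts started: ~151
end_ratio = zone_ratio(1.16)     # Armstrong, shifts ended: ~116

differential = start_ratio - team_ratio     # PLAYER ZONE DIFFERENTIAL: ~17
raw_shift = start_ratio - end_ratio         # ZONE SHIFT: ~35
adjusted = raw_shift - 0.88 * differential  # hypothetical correction: ~20
```

Under this hypothetical correction, Armstrong's zone shift of 35 deflates to roughly 20 once his heavy defensive-zone starts are accounted for.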
To further illustrate this, consider the top ten players in unadjusted ZONE SHIFT during the 2008-09 season: Shultz, Sauer, Veilleux, Smithson, (Ryan) Johnson, Zigomanis, Hall, (Zybynek) Michalek,
McClement - all of these players took a much higher percentage of defensive zone draws than their teammates.
Long story short: It's easier to have a good Zone Shift number if you're starting more in your own end of the rink relative to your teammates, and if the metric is to be worth anything at all, this
ought to be corrected for.
And I've attempted to do exactly that. Contained below is a listing of the league's best and worst players in ADJUSTED ZONE SHIFT - adjusted because the stat attempts to control for the above bias.
I've also included the unadjusted ZONE SHIFT numbers as well.
This stat is, of course, imperfect, and further corrections are probably necessary, which is something I intend to look at in the near future. I just figured I'd throw this up in the interim. | {"url":"http://objectivenhl.blogspot.com/2009_07_01_archive.html","timestamp":"2014-04-16T16:15:51Z","content_type":null,"content_length":"60795","record_id":"<urn:uuid:35bf6953-e7ee-48c3-855e-1bdd3d8492c4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
5.3 Objectives
There are long lists of objectives (22 in BC/Year 1, 6 in Year 2, 8 in Year 3). Out of this total of 36, 29 concern mathematical knowledge, and are phrased as such:
- construct geometrical shapes, angles and lines
- use statistical tools of analysis
- solve simultaneous linear equations
In the 6 objectives related to pedagogic content and curricular knowledge, students are required to:
- review and critique the primary maths syllabus, relate it to theories of learning, and formulate teaching strategies (Year 1).
- In Year 2 they are to construct and use mathematical models for teaching, and relate games to mathematics teaching and learning.
- The Year 3 syllabus has no clear relevance to teaching, although it includes conducting a qualitative research project of an unspecified nature. | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0muster--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-help---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=muster&cl=CL2.1&d=HASH018b330f91c5ab3d3451f249.9.3","timestamp":"2014-04-18T11:18:57Z","content_type":null,"content_length":"16667","record_id":"<urn:uuid:cf2ad37e-c546-4527-b06e-441c1607f9fc>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Is Molar Absorptivity?
In chemistry, molar absorptivity is defined as a measure of a chemical's ability to absorb light at a specified wavelength. The molar absorptivity coefficient, ε, depends on the chemical species;
actual absorption depends on chemical concentration and the path length. These variables are used in the Beer-Lambert Law. Molar absorptivity also is known as the molar extinction coefficient and the
molar absorption coefficient.
The Beer-Lambert Law is an equation relating absorption to chemical concentration, path length and molar absorptivity. Mathematically, the Beer-Lambert Law can be expressed as A = εcl. The most
common units for the molar absorptivity coefficient are M^-1 cm^-1, although the units can differ depending on those used for chemical concentration and path length. The International System of Units (SI) unit for this measurement is m^2/mol.
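The law is a one-liner to apply in either direction. The species and numbers below are invented for illustration:

```python
def absorbance(eps, conc, path):
    """Beer-Lambert law: A = eps * c * l (eps in M^-1 cm^-1, c in M, l in cm)."""
    return eps * conc * path

def concentration(A, eps, path):
    """Invert the Beer-Lambert law to recover concentration from absorbance."""
    return A / (eps * path)

# Hypothetical species with eps = 6220 M^-1 cm^-1 in a 1 cm cuvette.
A = absorbance(6220.0, 5.0e-5, 1.0)   # ~0.311
c = concentration(A, 6220.0, 1.0)     # recovers 5.0e-5 M
```

The inversion in `concentration` is what makes spectrometry such a fast assay: one absorbance reading yields the concentration directly, provided ε is known.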
Different chemical species usually have different molar absorptivity coefficients. These specific values for different chemicals at specified wavelengths of light can be found in chemical reference
manuals. In case the absorptivity values are not listed or cannot be found, they can be determined experimentally by measuring the absorbance of several solutions of the chemical at known concentrations.
Determining the molar absorptivity of a chemical species can be accomplished by measuring the absorption of varying solution concentrations with a spectrometer. The spectrometer measures the total
absorbance of the solution, which increases as the chemical concentration increases. Many spectrometers report transmittance rather than absorbance; the two are related by A = -log10(T). Absorbance must be used in the Beer-Lambert Law, so if transmittance is displayed, it must be converted first.
In a mixture of chemical species, each component contributes to the mixture's overall absorbance. The Beer-Lambert Law can be expanded for solutions with multiple components and can be expressed as A
= (e[1]c[1] + ... + e[n]c[n])l, with the subscript n denoting the number of species present. This expanded equation applies to the absorbing species in the solution.
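With absorbance measured at as many wavelengths as there are components, the expanded equation becomes a solvable linear system. A hedged sketch for a hypothetical two-component mixture follows; all coefficient and concentration values are made up for illustration.

```python
# Hypothetical two-component mixture measured at two wavelengths.
# At wavelength i: A_i = (e_i1*c1 + e_i2*c2) * l, so with l known the
# concentrations follow from a 2x2 linear solve (values are made up).

def solve_two_component(A1, A2, e11, e12, e21, e22, l):
    """Solve for (c1, c2) by Cramer's rule."""
    det = (e11 * e22 - e12 * e21) * l
    c1 = (A1 * e22 - A2 * e12) / det
    c2 = (A2 * e11 - A1 * e21) / det
    return c1, c2

# Forward-simulate absorbances, then recover the concentrations.
e11, e12, e21, e22, l = 5000.0, 800.0, 600.0, 4000.0, 1.0
c1_true, c2_true = 2e-5, 4e-5
A1 = (e11 * c1_true + e12 * c2_true) * l
A2 = (e21 * c1_true + e22 * c2_true) * l
print(solve_two_component(A1, A2, e11, e12, e21, e22, l))
```

In practice one would pick wavelengths where the components' absorptivities differ strongly, so the system is well-conditioned.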
The molar absorption coefficient is related to the absorption cross section, σ, via Avogadro's constant, N[A]. If the units of the molar absorption coefficient are taken to be L mol^-1cm^-1 and the
units of the absorption cross section are in cm^2, then σ = 1000 ln(10) × ε/N[A], or approximately 3.82 × 10^-21 × ε. The absorption cross section is related to the probability of an absorption process in a solution.
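The numerical factor quoted above can be checked directly; a short sketch using the CODATA value of Avogadro's constant:

```python
import math

# sigma = 1000 * ln(10) * epsilon / N_A, for epsilon in L mol^-1 cm^-1
# and sigma in cm^2, as stated above.
N_A = 6.02214076e23  # Avogadro's constant, mol^-1

def cross_section(epsilon):
    return 1000.0 * math.log(10.0) * epsilon / N_A

print(cross_section(1.0))  # roughly 3.82e-21, matching the factor in the text
```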
Molar absorptivity is particularly useful in spectrometry for measuring the concentration of chemical solutions. Measuring absorbance is a very fast method of determining chemical concentrations,
although the specific chemical species in the solution must be known. Other methods of measuring concentration, such as titration, can take more time and may require additional chemicals. | {"url":"http://www.wisegeek.com/what-is-molar-absorptivity.htm","timestamp":"2014-04-21T03:57:36Z","content_type":null,"content_length":"65757","record_id":"<urn:uuid:b84ec851-fbb4-4dbc-b925-d19e8809626f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstracts of the workshop: groupoids in operator algebras and noncommutative geometry
(February 26 - March 2, 2007)
Alex Furman: Rigidity of group actions.
I. Introduction to Superrigidity
II. Orbit Equivalence in Ergodic Theory
III. Measurable Group theory
IV. Actions on manifolds
The general theme of these lectures is the "rigidity" that a structure of a group can impose on its actions in various settings: linear representations, actions on manifolds, orbit structures of
measurable actions etc. Our goal is to describe the general landscape of the questions, results and (some of the) ideas which appear in this area.
Titles and abstracts of talks:
Johannes Aastrup: Boutet de Monvel's algebra in noncommutative geometry
Paul Baum: Cosheaf homology
Given a local system on a topological space X (i.e. a representation of the fundamental groupoid of X) both the homology and the cohomology of X with coefficients in the local system are defined.
This talk considers what happens when the defining conditions for a local system are relaxed to obtain sheaves and cosheaves. With sheaves one can do cohomology and with cosheaves homology. Cosheaves
arise naturally from group actions on topological spaces, and enter into Chern character problems relevant to the Baum-Connes conjecture. Recent results of C. Voigt relate cosheaf homology to Bredon
homology and to periodic cyclic homology.
Moulay-Tahar Benameur: Homotopy invariance of Higher signatures in Haefliger cohomology
We shall explain in this talk some recent results obtained in collaboration with James Heitsch and related with index theory for foliations. We shall focus on the definition of the leafwise signature
in Haefliger cohomology and sketch a new geometric proof of the leafwise homotopy invariance of higher signatures with coefficients in leafwise flat bundles.
Paulo Carrillo: Analytical indices for Lie groupoids
For a Lie groupoid there is an analytic index morphism taking values in the K-theory of the C*-algebra associated to the groupoid. This is a good invariant, but extracting numerical invariants from
it, with the existent tools, is very difficult. In this talk, we will explain how to define another analytic index morphisms associated to the Lie groupoid. These ones take values in some groups that
allow us to do pairings with cyclic cocycles. We obtain some abstract index formulas.
Claire Debord: Poincaré Duality for stratified pseudomanifolds
We associate to a stratified pseudomanifold X a differentiable groupoid T^{S}X which plays the role of the tangent space to X. We construct a Dirac element. Thanks to a recursive process on the depth
of the stratification together with the stability of the constructions which are involved we show that the Dirac element induces a K-duality between the C*-algebras C*(T^{S}X) and C(X).
Alexander Gorokhovsky: Deformation quantization of gerbes on etale groupoids
This is a joint work with P.Bressler, R. Nest and B. Tsygan. We will discuss problems in which formal deformations of etale groupoids and gerbes on them arise and give an explicit description of the
differential graded Lie algebra which controls this deformation theory.
Eli Hawkins: A groupoid approach to quantization
I define a notion of "polarization" for Lie groupoids. Using polarized symplectic groupoids, I present a general strategy for constructing C*-algebras that quantize Poisson manifolds. This unifies
previous constructions including classical geometric quantization of a symplectic manifold and the C*-algebra of a source- connected Lie groupoid.
Adrian Ioana: Orbit inequivalent actions for groups containing a copy of F_2
I will prove that any countable discrete group G which contains a copy F_2 admits uncountably many non orbit equivalent actions.
Steven Hurder: Index theory and LS category for Riemannian foliations
Alex Kumjian: Fell bundles associated to groupoid morphisms
Given a continuous open surjective morphism $\pi :G \to H$ of \'etale groupoids with amenable kernel, we construct a Fell bundle $E$ over $H$ and prove that its C*-algebra $C^*_r(E)$ is isomorphic to
$C^*_r(G)$. This is related to results of Fell concerning C*-algebraic bundles over groups. The case $H=X$, a locally compact space, was treated by Ramazan. We conclude that $C^*_r(G)$ is strongly
Morita equivalent to a crossed product, the C*-algebra of a Fell bundle arising from an action of the groupoid $H$ on a C*-bundle over $H^0$. We apply the theory to groupoid morphisms obtained from
extensions of dynamical systems and from morphisms of directed graphs with the path lifting property. This is joint work with Valentin Deaconu and Birant Ramazan.
Klaas Landsman and Rogier Bos: Continuous representations of Lie groupoids
The notion of a representation of a locally compact groupoid G with Haar system on a measurable field of Hilbert spaces has been developed by Jean Renault and has the advantage of yielding a
connection with the representation theory of the associated C*- algebra C*(G). However, in the case of Lie groupoids it is worth studying representations on continuous fields of C*-algebras (i.e. on
Hilbert C*-modules over C_0(M), where M is the base space of G). Using ideas from geometric quantization, we present a method to construct such representations, as well as an associated "quantization
commutes with reduction" theorem in case that G is proper.
Jean-Marie Lescure: An index theorem for conical pseudomanifolds
We define an analytical index map and a topological index map for conical pseudomanifolds. These constructions generalize the analogous constructions used by Atiyah and Singer in the proof of their
index theorem for a smooth, compact manifold M. A main ingredient is a non-commutative algebra that plays in our setting the role of C_0(T*M). We prove a Thom isomorphism between non-commutative
algebras which gives a new example of wrong way functoriality in K-theory. We then give a new proof of the Atiyah-Singer index theorem using deformation groupoids and show how it generalizes to
conical pseudomanifolds. We thus prove a topological index theorem for conical pseudomanifolds.
Ieke Moerdijk: Subgroupoids of Lie groupoids
Bertrand Monthubert: Boutet de Monvel's Calculus and Groupoids
Can Boutet de Monvel's algebra on a compact manifold with boundary be obtained as the algebra Psi^0(G) of pseudodifferential operators on some Lie groupoid G? If it could, the kernel {\mathcal G} of
the principal symbol homomorphism would be isomorphic to the groupoid C*-algebra C*(G). While the answer to the above question remains open, we exhibit a groupoid G such that C*(G) possesses an ideal
I isomorphic to {\mathcal G}. In fact, we prove first that {\mathcal G}\simeq\Psi\otimes{\mathcal K} with the C*-algebra \Psi generated by the zero order pseudodifferential operators on the boundary
and the algebra $\mathcal K$ of compact operators. As both \Psi\otimes {\mathcal K} and I are extensions of C(S*Y)\otimes {\mathcal{K}} by {\mathcal{K}} (S*Y is the co-sphere bundle over the
boundary) we infer from a theorem by Voiculescu that both are isomorphic.
Victor Nistor: A topological index theorem for manifolds with corners
We show that the analytic and topological index for the groupoid G(M) associated to a compact manifolds with corners coincide. When the faces of M are contractible, these indices are isomorphisms
from K^*(TM) to K_*(G(M)) := K_*(C^*G(M)). This is joint work with Bertrand Monthubert.
Alan L. T. Paterson: The E-theoretic descent functor for groupoids
The descent functor enables us to go from equivariant asymptotic morphisms to asymptotic morphisms of crossed product C*-algebras. It is important in a number of contexts, in particular for the
Baum-Connes conjecture and the topological index. The functor for locally compact groups was established in their memoir by Guentner, Higson and Trout, and the talk will discuss what can be said in
the groupoid case. (Earlier work on this was done by R. Popescu in his thesis.) There seem to be technical difficulties with establishing in complete generality the descent functor for groupoids, but
we prove its existence under certain conditions which apply in a number of cases that arise in practice.
Mikael Pichot: The space of triangle buildings
Paolo Piazza: Foliated rho-invariants
I will present some ongoing work in collaboration with Moulay Benameur about the definition of a foliated rho invariant and the proof of some of its stability properties. This invariant generalizes
to measured foliations the classical Cheeger-Gromov rho-invariant on Galois coverings. I will first recall work of Keswani and Piazza-Schick dealing with the homotopy invariance of the Cheeger-Gromov
rho invariant on Galois coverings under a Baum-Connes assumption on the maximal group C*-algebra of the covering group; I will then move on and explain how these results can be generalized to
measured foliations, assuming the bijectivity of the Baum-Connes map for foliations. I shall also explain how index theoretic considerations play a crucial role throughout.
Sorin Popa: On the superrigidity of group actions in ergodic theory and von Neumann algebras
I will explain how the unlikely combination of deformation and rigidity properties of a measure preserving action of a group on a probability space can make it recognizable by merely knowing the
orbit equivalence class of the action, or just its von Neumann algebra.
Ian Putnam: A homology theory and C*-algebras for chaotic dynamical systems
Smale spaces were defined by David Ruelle as an abstract approach to the chaotic dynamical systems in Smale's program. He also described how operator algebras may be constructed from such systems. We
will describe a kind of homology theory for these systems and how it may be used to compute the K-theory of the C*-algebras.
Anton Savin: Homotopy classification of elliptic operators on manifolds with corners
This is joint work with V.E. Nazaikinskii and B.Yu. Sternin. We compute the group of stable homotopy classes of elliptic operators on manifolds with corners. It turns out that this group is
isomorphic to the analytic K-homology group of a certain explicitly constructed C*-algebra.
Stefaan Vaes: Computations of automorphisms and finite index subfactors of certain II_1 factors
We present the first concrete examples of II_1 factors without non-trivial finite index subfactors. We also present relatively easy examples of II_1 factors with trivial outer automorphism group.
Stéphane Vassout: Uniform non amenability and the first l^2 Betti number
In this talk, I will present recent work in collaboration with Mikael Pichot. We define a uniform Cheeger isoperimetric constant for a given finitely generated r-discrete measured groupoid of type II_1, and show that it is bounded below by the first l^2 Betti number. In particular we derive an invariant for equivalence relations and define a notion of ergodic uniform non-amenability for finitely generated groups which is weaker than the usual one.
IHP, LMAM, GDR géométrie non commutative. | {"url":"http://poncelet.sciences.univ-metz.fr/~tu/IHP/abstracts.html","timestamp":"2014-04-18T18:11:22Z","content_type":null,"content_length":"13693","record_id":"<urn:uuid:ecb7c44f-1812-4d95-80cd-c396ef71f803>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Trouble producing population standard deviations with collapse (sd)
Re: st: Trouble producing population standard deviations with collapse (sd)
From Steve Samuels <sjsamuels@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Trouble producing population standard deviations with collapse (sd)
Date Fri, 13 Jul 2012 19:33:54 -0400
If your subset "represents 100% of the population", the implication is that you have survey data, and must,
therefore, have a weight variable e.g. "final_wt". Add [pw = final_wt] before the comma. To compute
standard errors, -svyset- your data and use Stata's survey commands.
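As a general note (not specific to this thread): for unweighted data, the population SD differs from the sample SD that -collapse (sd)- reports (assuming the usual n-1 denominator) only by the factor sqrt((n-1)/n), so another option is to rescale after collapsing. A Python sketch of the relationship, with arbitrary data:

```python
import math

# Sample SD (n-1 denominator, what `collapse (sd)` reports) versus
# population SD (n denominator); the rescaling factor is sqrt((n-1)/n).

def sample_sd(xs):
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def population_sd(xs):
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / n)

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary example data
n = len(xs)
assert abs(population_sd(xs) - sample_sd(xs) * math.sqrt((n - 1) / n)) < 1e-12
print(population_sd(xs))  # 2.0 for this data set
```

In Stata terms, one would also collapse a count per group and generate the rescaled SD from it afterwards.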
On Jul 13, 2012, at 4:36 PM, Matt Vivier wrote:
Hello all,
My specs are:
Stata/IC 12.1 for Windows (64-bit x86-64)
Revision 06 Feb 2012
I am having some trouble getting collapse to produce the result I am
looking for. I have done some searching, and have not found anyone
else presenting this problem. I need it to generate the population
standard deviation for a large number of subsets of data, but it
appears the command I am using is generating a sample standard
deviation. Each subset of the data represents 100% of the population I
am interested in the SD for. Is there a simple way to do this, or will
I need to compute the SD using another process?
My code is as follows:
collapse (sum) TotCost TotAcute TotPostAcute ReadmissionCost
ReadmissionClaims count (sd) SDTotCost=TotCost
SDTotPostAcute=TotPostAcute SDReadmissionCost=ReadmissionCost (mean)
AvgTotCost=TotCost AvgTotPostAcute=TotPostAcute
AvgReadmissionCost=ReadmissionCost, by(drg_condition TriggerProvider
Thank you for your help,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2012-07/msg00471.html","timestamp":"2014-04-21T12:20:10Z","content_type":null,"content_length":"9796","record_id":"<urn:uuid:5d2aba1b-0882-41e9-a623-f5330dbba797>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are these correct? The beginning of the sentences before "the skiers" are they still considered part of the noun phrase, or are they considered a prepositional phrase, or something different? I
believe they are part of the noun phrase, but not sure. The part in the [ ] is what...
Sunday, November 16, 2008 at 8:51pm by Megan
determine if the following lines are parallel, perpendicular or neither. Explain your reasoning. -2x+3y=3 2x+3y=3 Rewrite them in y = mx + b form to get the slopes, m. The lines are parallel if the slopes are the same. They are perpendicular if the product of the slopes is -1 ...
Wednesday, January 10, 2007 at 7:01pm by britteny
Are the slopes the same? Did you solve for the slope? I don't think the slopes are equal.
Wednesday, June 18, 2008 at 12:12am by DrBob222
Examine the slopes of the straight lines obtained from the graphs of p versus 1/V. Why are the slopes different?
Tuesday, December 18, 2012 at 4:47pm by Anonymous
True. Slopes of perpendicular lines are negative reciprocals of each other, or the product of their slopes = -1: (1/2)(-2) = -1
Saturday, November 2, 2013 at 10:54pm by Reiny
compute the side lengths and slopes. If the sides are all equal and adjacent sides are perpendicular, it's a square.
Monday, August 13, 2012 at 1:58pm by Steve
take the slopes of lines AB, AC, and BC. Are any of those 3 slopes opposite reciprocals of each other?
Thursday, April 15, 2010 at 11:56pm by Reiny
If two lines are perpendicular, then their slopes are negative reciprocals of each other, or another way of looking at it, when their slopes are multiplied the result is -1 Is (-2/3)(3/2) = -1 ???
Saturday, July 14, 2012 at 9:35pm by Reiny
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Thursday, November 8, 2012 at 9:35am by John
m1 = -A/B = -4/5. m2 = -A/B = -5/4. They are not parallel, because the slopes are not =. They are not perpendicular, because the slopes are not neg. reciprocals. m2 would have to be +5/4.
Sunday, October 31, 2010 at 2:26pm by Henry
slopes of parallel lines are equal. slopes of perpendicular lines are negative reciprocals. What do you think?
Saturday, July 14, 2012 at 6:39pm by Steve
algebra again
Determine the slopes (m) of the lines by putting the equations in the form y = mx + b. If the product of the two slopes is -1, the lines are perpendicular.
Sunday, May 2, 2010 at 12:13pm by drwls
also you can first find y in the equation(s), and then find the slopes, and finally multiply the slopes.
Tuesday, April 15, 2008 at 4:55pm by Jeremy
since the slopes are not the same, not parallel since the slopes are not negative reciprocals, not perpendicular
Friday, November 16, 2012 at 7:09pm by Steve
A demand curve that is unit elastic everywhere is a. linear and slopes downward b. linear and slopes upward c. vertical d. horizontal e. nonlinear
Tuesday, January 19, 2010 at 12:52am by Rosa
Very easy. Recall that the slopes of perpendicular lines are negative reciprocals of each other, or in other words, the product of their slopes is -1. Your first line is x + 3y = 12. So
the perpendicular line must be 3x - y = c , the only thing we don't know is ...
Monday, October 15, 2007 at 10:39pm by Reiny
Perpendicular lines have slopes that are negative reciprocals of one another (multiply the two slopes together and their product has to be a negative 1). In y=mx+b, m is the slope.
Monday, February 22, 2010 at 2:06pm by FiestadeNoche
Calculate the slopes of each of the lines between the pair of points. Hint: The slope of the line that is the first side is (y2-y1)/(x2-x1) =(6-3)/(11-2) = 1/3 If the product of the two slopes is -1,
they are perpendicular. If not, they aren't.
Wednesday, November 21, 2007 at 11:47pm by drwls
9th grade math
find a system where the slopes are different. Any two lines with different slopes intersect at exactly one point. Looks like (C)
Wednesday, December 5, 2012 at 9:26pm by Steve
in y = mx + b, the slope is m. For parallel lines the slopes must be the same; for perpendicular lines the slopes are negative reciprocals of each other. I see the first slope as +9 and the second as -9
so which fits?
Sunday, May 16, 2010 at 11:07pm by Reiny
If the slopes of the lines are equal, the lines are parallel if the slopes are negative reciprocals of each other, the lines are perpendicular, ( e.g. 5/3 vs -3/5)
Saturday, September 25, 2010 at 7:27am by Reiny
Erosion is worse on slopes because the water flows faster in heavy rains. Trees that are planted to hold the soil in place will help for a 20-100 years, but eventually they tilt and fall on steep
slopes, tearing out the root ball and making erosion worse. I have been fighting ...
Monday, March 1, 2010 at 7:10pm by drwls
Math - number of solution
Transform each equation to the form y=mx+b where m=slope and b=y-intercept. If the slopes (m) are distinct (different), then there is one solution. If the slopes are identical, then check the
y-intercepts. If the y-intercepts are different, there is no solution (lines are ...
Saturday, August 28, 2010 at 4:55am by MathMate
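The slope/intercept classification described in the answer above can be sketched as a tiny function (for lines already in y = mx + b form):

```python
# Classify a system of two lines y = m1*x + b1 and y = m2*x + b2:
# different slopes -> one solution; same slope, same intercept ->
# infinitely many; same slope, different intercept -> no solution.

def classify(m1, b1, m2, b2):
    if m1 != m2:
        return "one solution"
    return "infinitely many" if b1 == b2 else "no solution"

print(classify(2, 1, 3, 1))   # one solution
print(classify(2, 1, 2, 5))   # no solution
print(classify(2, 1, 2, 1))   # infinitely many
```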
The slope of the given equation is 5/7. The slopes of perpendicular lines are opposite reciprocals of each other, so the perpendicular line has slope -7/5
just figure their slopes. If the slopes are the same, L1║L2. If their product is -1, then L1┴L2. Otherwise, neither.
Thursday, February 20, 2014 at 6:47pm by Steve
for Ax + By = C the slope = - A/B parallel lines have equal slopes perpendicular lines have slopes that are opposite reciprocals of each other. let me know what you concluded.
Wednesday, January 19, 2011 at 8:21pm by Reiny
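The rules in the answer above (for Ax + By = C the slope is -A/B; equal slopes mean parallel, product -1 means perpendicular) can be sketched as:

```python
# Classify two lines given in Ax + By = C form (B != 0 assumed).

def classify_pair(a1, b1, a2, b2):
    m1, m2 = -a1 / b1, -a2 / b2       # slope = -A/B
    if m1 == m2:
        return "parallel"
    if abs(m1 * m2 + 1) < 1e-12:      # product of slopes = -1
        return "perpendicular"
    return "neither"

print(classify_pair(4, 6, 2, 3))    # slopes both -2/3 -> parallel
print(classify_pair(2, 5, -5, 2))   # slopes -2/5 and 5/2 -> perpendicular
```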
Why do north facing slopes on mountains support glaciers today while south facing slopes do not?
Wednesday, November 12, 2008 at 12:19am by Charlie
The slopes of the tangent lines to the graph of the function f(x) increase as x increases. At what rate do the slopes of the tangent lines increase? f(x) = x^2 - 1 Thanks for the help.. really need
Tuesday, September 21, 2010 at 12:32pm by john
Eq1: 4x + 6y = 12. m1 = -A/B = -4/6 = -2/3. Y-int. = C/B = 12/6 = 2. Eq2: 2x + 3y = 6. m2 = -2/3. Y-int. = 6/3 = 2. Since the slopes and Y-intercepts are both equal, the Eqs represent the same line.
Therefore, we have an infinite number of solutions. Multiply both sides of Eq2...
Tuesday, April 2, 2013 at 5:31pm by Henry
The slopes of the tangent lines to the graph of the function f(x) increase as x increases. At what rate do the slopes of the tangent lines increase? f(x) = x^2 - 6 PLZZ help
Tuesday, September 21, 2010 at 12:51pm by Amanda
A Trapezoid has 2 parallel sides which have equal slopes, and 2 non-parallel sides with unequal slopes. So we calculate the slope of all 4 sides and make comparisons: AB. m = (6-2) / (4-2) = 4/2 = 2.
BC. m=(-3-6) / (4-4)=-9/0 = undefined. CD. m = (-1-(-3)) / (2-4) = 2/-2 = -1...
Sunday, April 10, 2011 at 12:00pm by Henry
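The side-slope computation in the answer above can be sketched directly, with vertical sides flagged as undefined; the coordinates are taken from the worked example (A(2,2), B(4,6), C(4,-3), D(2,-1)):

```python
# Slopes of the four sides of quadrilateral ABCD, as in the answer above.
# A vertical side has undefined slope, represented here as None.

def slope(p, q):
    if q[0] == p[0]:
        return None               # vertical side
    return (q[1] - p[1]) / (q[0] - p[0])

A, B, C, D = (2, 2), (4, 6), (4, -3), (2, -1)
sides = {"AB": slope(A, B), "BC": slope(B, C),
         "CD": slope(C, D), "DA": slope(D, A)}
# BC and DA are both vertical, so they form the parallel pair of a trapezoid,
# while AB (slope 2) and CD (slope -1) are not parallel.
print(sides)
```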
You must have written the second equation incorrectly. It needs a "y" term. It is the product of the slopes, not the slopes themselves, that must be -1. Picking an ordered pair is just a matter of
picking x and computing y, for any x that you want. Two ordered pairs define a ...
Wednesday, March 4, 2009 at 12:54am by drwls
Determine which two equations represent perpendicular lines. a) y=2x - 6 b) y=1/2x + 6 c) y= -1/2x + 6 d) y= 1/2x - 6 I know that perpendicular lines have negative reciprocals, but I'm having trouble
with this problem. I think the answer is b & c because they are both 1/2 and ...
Monday, July 30, 2007 at 8:35pm by lacy
The slopes of the lines connecting each pair of the points are different: -1/5, -1 and -3/7. The points cannot be on the same line. The three slopes would be the same if they were on the same line.
Wednesday, April 22, 2009 at 7:59am by drwls
if lines are parallel, they have the same slope so, the first step is to convert each equation to the slope-intercept form: y = -1/4 x + 2/3 y = k/6 x - 5/6 If the slopes are the same, then -1/4 = k/
6 k = -3/2 If the lines are perpendicular, their slopes are negative ...
Thursday, November 17, 2011 at 6:43pm by Steve
the slopes of your lines are determined by the coefficients of the x terms of your equations. They are 3 for the first one, and -1/2 for the second. Since their product is not equal to -1, they
cannot be perpendicular no matter what the value of a is The 5a term has nothing to...
Sunday, January 27, 2008 at 10:16pm by Reiny
To be parallel to the xz plane, y must be constant. In this case, the constant is y = 3. There is more than one line parallel to the xz plane that passes through that point. If the z vs x slopes of
the various possible lines in the y=3 plane are denoted by m, z - 4 = m (x - 2)
Wednesday, December 19, 2012 at 1:45am by drwls
The slopes of the tangent lines are equal to the derivative of the parabola at the points. y ' = 2x, so y'(a) = 2a and y'(-a) = -2a. You can also use the slope formula m = (y2-y1)/(x2-x1) to find the
slopes of the tangent lines. Therefore m = (a^2+3-(-6))/(a-0)=(a^2+9)/a. ...
Monday, June 17, 2013 at 3:45pm by Joe
Best I can come up with is: (py - by)/(px - bx) = -1 [(px - ax) / (py - ay)] It relates the slopes of two perpendicular lines: each line's slope is the negative reciprocal of the others. The problem
arises when one of the slopes is 0, because then the (negative) reciprocal is ...
Tuesday, May 5, 2009 at 11:51pm by RickP
math helper please
2 isosceles triangles have the same height. The slopes of the sides of triangle A are double the slopes of the corresponding sides of triangle B. How do the lengths of their bases compare? A. The
base of A is quadruple that of B. The base of A is double that of B. C. The base ...
Monday, January 17, 2011 at 9:23pm by Kelly
y = mx + b In the above Eq, m is the slope and b is the y-int. When 2 lines are perpendicular, their slopes are negative reciprocals of each other: y = (1/2)x + 7 and y = -2x + 7. Two lines are perpendicular if their slopes are reciprocals AND their signs are opposite. These 2 Eqs meet ...
Monday, August 30, 2010 at 11:00pm by Henry
a) Find the slope of the average cost (0.16). Transform the equation for the price to y=0.14x-0.95 from which the slope can be found. If the slopes are different, the two lines will intersect. If the
product of the two slopes is -1, the two lines are perpendicular. b) see a) ...
Sunday, August 29, 2010 at 11:37pm by MathMate
Easy: after making a sketch, show that the slopes of opposite sides are equal. Then draw the diagonals and find their slopes; show that the slope of one diagonal is the opposite reciprocal of the slope of the other. Be careful in your slope calculations with the subtraction of ...
Thursday, December 8, 2011 at 11:57am by Reiny
algebra 1
Slope between two points P1(x1,y1), P2(x2,y2) is given by: m=(y2-y1)/(x2-x1) So the sum of the slopes would be the sum of the slopes LM, MN, and NL. For LM, m=(0-0)/(10-4)=0 I am sure you can
complete the rest.
Friday, December 10, 2010 at 6:12pm by MathMate
Algebra 1 ( bobpursley)
#1 ok you can avoid solving for b by using the point-slope form: y-0 = 2/5 (x-5) #2 how could you get #1 and not get #2? m = -6/7 point-slope form: y-0 = -6/7 (x-0) y = -6x/7 all this means is that b
=0 #3 1st has slope -2 the others all have slope 2 #4 only the 1st two. The ...
Tuesday, December 3, 2013 at 11:24am by Steve
Ok, so you know that perpendicular lines have a special trick with their slopes. When you multiply the two slopes of the lines together, you should get -1. So what number could you multiply 3 by to get a -1? -1/3! So far, your line equation should be y=-1/3 x + b. How do you get b ...
Tuesday, February 19, 2008 at 10:17pm by Rhonda
Consistent if their slopes are different: x+y=5, 2x-3y=6. Dependent if one equation is a multiple of the other: 2x+3y=9, 4x+6y=18. Inconsistent if the slopes are the same, but one equation is not a multiple of the other: 3x-y=8, 9x-3y=5
Thursday, November 15, 2012 at 3:51pm by Steve
Change into the standard equation for a straight line of y = mx + b 2x+5y=7 and 5y = -2x+7 then divide by 5 to obtain y=(-2/5)x + 7/5 Second equation is 5x-2y=8 -2y=-5x+8 multiply by -1 2y=5x-8 y=(5/
2)x-4 Lines are parallel if the slopes are equal. Lines are perpendicular if ...
Tuesday, June 3, 2008 at 10:54am by DrBob222
you need to get it into slope intercept form--y=mx+b (m being the slope) to figure out whether they are parallel (having equal slopes) or perpendicular (slopes are negative reciprocals) 2y-x=2 2y=x+2
y=1/2x+1 slope=1/2 y+2x=4 y=-2x+4 slope=-2 -2 and 1/2 are negative ...
Tuesday, February 8, 2011 at 7:46pm by hw
y = 0.5x + 6 makes an angle arctan 0.5 = 26.57 degrees with the x axis. It slopes upward. y = -.75x -1 makes an angle arctan -0.75 = -36.87 degrees with the x axis (it slopes downward). The sum of the magnitudes of those two angles is the acute angle between them: 63.44 degrees. Round it to 63 ...
Tuesday, May 10, 2011 at 11:37am by drwls
First you have to put your line into y = mx + b format. 9x - 6y = -3, so -6y = -9x - 3 and y = (3/2)x + 1/2. Parallel lines have the same slope; m represents your slope. Perpendicular lines have slopes that are negative reciprocals of each other: flip the fraction and include the ...
Monday, February 18, 2013 at 5:20pm by JJ
Geometry HELP
If two lines are parallel then their slopes are equal If two lines are perpendicular, their slopes are negative reciprocals of each other (opposite in sign and the fraction is flipped) for the first
one, find slope(AB) and slope(CD), and decide second is very easy slope of ...
Tuesday, June 8, 2010 at 11:05pm by Reiny
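Reiny's recipe — find slope(AB) and slope(CD), then compare — can be sketched directly. Exact fractions avoid float round-off when testing whether the product of the slopes is -1; the helper names and sample points below are mine:

```python
from fractions import Fraction

def slope(p, q):
    """Exact slope of the line through points p and q; None for a vertical line."""
    (x1, y1), (x2, y2) = p, q
    return None if x1 == x2 else Fraction(y2 - y1, x2 - x1)

def relation(p1, q1, p2, q2):
    s1, s2 = slope(p1, q1), slope(p2, q2)
    if s1 == s2:
        return "parallel"                       # includes two vertical lines
    if (s1 is not None and s2 is not None and s1 * s2 == -1) \
       or (None in (s1, s2) and 0 in (s1, s2)):
        return "perpendicular"                  # negative reciprocals, or vertical vs horizontal
    return "neither"

print(relation((0, 0), (2, 3), (0, 0), (3, -2)))   # slopes 3/2 and -2/3: perpendicular
```

Using Fraction rather than floats is deliberate: with floats, (3/2) * (-2/3) evaluates to -0.9999999999999999, and the equality test against -1 silently fails.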
i am not going to solve the problem for you but i will tell you how. first you have to convert both equations to y=mx+b form. You can do it by isolating the y variable to one side. If y has a
coefficient then divide it by the coefficient and do the same thing on the other side...
Thursday, April 16, 2009 at 5:32pm by math pro
Algebra 2
arrange the equations into the form ax + by = c 1. -14x = -632 - 26y ---> 14x - 26y = 632 slope = 14/26 = 7/13 y = (7/13)x - 316/13 ---> 7x - 13y = 316 slope = -7/-13 = 7/13 since the slopes are
equal, they must be parallel. To be perpendicular, their slopes must be ...
Tuesday, March 17, 2009 at 10:10pm by Reiny
The solution to a system of linear Eqs is the point where they intersect. If the lines are parallel, they cannot intersect and, therefore, have no solution. If we solve the Eqs of two parallel lines,
the answer will not make sense. For example, we could get -15 = 30 which is ...
Wednesday, March 21, 2012 at 12:27pm by Henry
Perpendicular lines have slopes that are negative reciprocals. You have to find the slope of the given line by putting the equation in y = mx + b form 4x + 5y = 4 5y = -4x + 4 y = (-4/5)x + 4/5 m =
-4/5 so the slope of the line perpendicular to this line is 5/4 (Note the two ...
Sunday, March 10, 2013 at 7:15pm by Dr. Jane
Math - PreCalc (12th Grade)
Which statement is true for any linear system of two equations in two variables? A) The system is independent if the slopes of the two lines are equal. B) The system is inconsistent if the
y-intercepts of the two lines are equal. C) The system is independent if the graphs of ...
Thursday, March 20, 2014 at 12:24pm by Shawna
For the floor plans given in exercise27, determine whether the side through the points (2, 3) and (11, 6) is perpendicular to the side through the points (2, 3) and (-3, 18). Compute the slopes of
the two lines. If the product of the slopes is -1, then the lines are ...
Wednesday, February 14, 2007 at 8:58pm by stacy
Can someone show me how to solve this? Someone tried to explain it to me-I didn't get it.A line contains the points (3, -4) and (5, 2). Another line graphed in the same coordinate plane contains the
points (0, 5) and (-2, -1). Based on the slopes of these lines are they ...
Tuesday, July 24, 2007 at 12:38pm by Jim
A. Y = -2x+1, and Y = -2x+3. m1 = m2 = -2. The lines are parallel, because their slopes are equal. B. x+y/3+5 = 0, and 2y+6x = 1 x+y/3=-5, and 6x+2y = 1. 3x+y = -15, and 6x+2y = 1 m1 = -A/B = -3/1 =
-3 m2 = -6/2 = -3 The slopes are equal; therefore, the lines are parallel.
Sunday, August 11, 2013 at 12:23pm by Henry
compare slopes
Tuesday, September 4, 2007 at 1:38pm by manny
how do i make slopes?
Wednesday, December 10, 2008 at 5:58pm by khadidrah
rate of change / slopes
(5,5) (5,6) (5,7) The slope is undefined: all three points lie on the vertical line x = 5.
Friday, May 27, 2011 at 8:44am by Henry
The question of how many solutions, if any, usually comes up when solving 2 equations simultaneously. First, we need to define solution: The solution is the point where the two lines intersect. IF
they do not INTERSECT, there is no solution. Parallel lines do not INTERSECT ...
Thursday, August 5, 2010 at 12:04pm by Henry
math b
A median is a line joining a vertex to the mid-point of the opposite side. If the median is also an altitude to side BC, then the median should be perpendicular to BC. Let D be the mid-point of
BC. The coordinates of D should be rather obvious, being the mid-point of ...
Monday, May 25, 2009 at 1:44pm by PC
what are the slopes of the asymptotes of the hyperbola: (x^2)/9 - (y^2)/4 = 1
Sunday, February 24, 2008 at 9:46pm by Kristen
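For a hyperbola x²/a² - y²/b² = 1 the asymptotes are y = ±(b/a)x, so here, with a² = 9 and b² = 4, the slopes are ±2/3. A quick check (the variable names are mine):

```python
import math

a = math.sqrt(9)          # a^2 = 9
b = math.sqrt(4)          # b^2 = 4
print(b / a, -b / a)      # asymptote slopes +-b/a = +-2/3
```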
The product of the two slopes is -1.
Tuesday, March 8, 2011 at 8:04pm by MathMate
the slopes of the lines are 1 and -1, so they are perpendicular.
Tuesday, August 6, 2013 at 12:12pm by Steve
I have a few questions to answer and I don't know how to start answering them. 1. On a position time graph, compare the instantaneous velocities of an object when the tangent to the curve slopes
upward to the right, when the tangent slopes downward to the right, and when the ...
Thursday, February 3, 2011 at 10:33pm by Sups
I NEED HELP AND FAST!
can anyone help me on finding slopes of a line?
Wednesday, February 11, 2009 at 7:44pm by anonymous
The slopes of the mountain are covered with trees.
Thursday, November 11, 2010 at 11:20am by neshia
yes, the slopes are negative reciprocals of each other
Thursday, September 29, 2011 at 9:51pm by Reiny
the lines have slopes -1 and 1. So, they are perpendicular.
Wednesday, August 7, 2013 at 4:29pm by Steve
slopes AB: 3/2 CD: -2/3 so, perpendicular
Wednesday, September 4, 2013 at 3:52pm by Steve
this problem has so many numbers that i don't know where to start, or even how. I have to answer the problem on top, not the one that says number 27. The problem: Geometry. For the floor plans
given in exercise 27, determine whether the side through the points (2,3) and...
Friday, December 22, 2006 at 8:40pm by jasmine20
Math - Slopes
Sunday, November 1, 2009 at 11:09am by bobpursley
y = -x+8 and x+y=7 OR y = -x+7 Lines are parallel, both with slopes = -1 y = -x+8 y = -x+7 There is no solution
Thursday, May 17, 2012 at 7:28pm by Rezu
Parallel lines have equal slopes.
Wednesday, November 20, 2013 at 6:27pm by Henry
Parallel lines have equal slopes.
Wednesday, November 20, 2013 at 6:29pm by Henry
Parallel lines have equal slopes.
Wednesday, November 20, 2013 at 6:30pm by Henry
math, slope
strange question! depends what you mean by "all of these". If you just want the slopes of the 6 consecutive line segments, it wouldn't be so bad take slope between (0,0) and (20,.166) then the slope
between (20,.166) and (40,.181) etc. add up the 6 slopes, then divide by 6 If ...
Monday, October 5, 2009 at 1:01pm by Reiny
Nope. Check this page. You'll find the answer fairly quickly. http://www.google.com/search?source=ig&hl=en&rlz=1G1GGLQ_ENUS314&q=Gentle+slopes+and+rounded+mountains+&aq=f
Tuesday, February 17, 2009 at 1:27pm by Ms. Sue
I think I got it! Are the slopes of the given and parallel lines = ?
Wednesday, June 17, 2009 at 1:42pm by Beth
One. You can tell without calculating because the slopes are different.
Tuesday, October 20, 2009 at 3:12pm by jim
Put x,y in each and find out which equations are equal.
Monday, February 1, 2010 at 10:42am by bobpursley
how do you find the perpendicular lines if the slopes on the graph are undefined and 0?
Wednesday, February 2, 2011 at 9:03pm by Jezebell
slopes are different, so the lines must intersect: consistent
Wednesday, January 2, 2013 at 10:57am by Steve
Y = (1/4)x + 2 Y = (1/4)x m1 = m2 = 1/4 The lines are parallel, because their slopes are equal.
Friday, January 17, 2014 at 7:13pm by Henry
Gr 10 math - finding slopes in shapes
Thank you, Steve.
Sunday, March 23, 2014 at 8:38pm by iza
The curves intersect where a/(x-8) = (x-8)^2 a = (x-8)^3 x = 8 + cbrt(a) Now, we need to find a such that the curves are perpendicular. slope of x^2 - 16x + 64 = 2x - 16 slope of a/(x-8) = -a/(x-8)^2
perpendicular slope = (x-8)^2/a So, when x = 8 + cbrt(a), and the slopes are ...
Friday, October 28, 2011 at 2:41pm by Steve
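Steve's truncated answer can be closed off numerically. At the intersection x = 8 + a^(1/3) the two slopes are 2a^(1/3) and -a^(1/3); requiring their product to be -1 gives 2a^(2/3) = 1, i.e. a = 2^(-3/2) ≈ 0.354. The finishing step is mine, so treat the value as a sketch to verify, not the thread's own conclusion:

```python
a = 2 ** -1.5                  # candidate from 2 * a**(2/3) = 1
x = 8 + a ** (1 / 3)           # intersection of y = (x-8)^2 and y = a/(x-8)
m1 = 2 * (x - 8)               # slope of the parabola at x
m2 = -a / (x - 8) ** 2         # slope of the hyperbola at x
print(m1 * m2)                 # -1 up to float round-off => perpendicular
```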
You and your friend are sledding on two sides of a triangle-shaped hill. On your side, the hill slopes up at 30.0° from the horizontal; on your friend's side, it slopes down at the same angle. You do
not want to climb up the hill, so you tell your friend to thread a rope ...
Thursday, September 15, 2011 at 9:51pm by Anonymous
Why does the production possibility frontier curve slope downward, and why could it be a line?
Friday, January 11, 2008 at 3:58am by Dorin
perpendicular lines have slopes that are the negative reciprocal of each other.
Tuesday, April 7, 2009 at 7:51pm by bobpursley
try this on a calc.
1. type how many drinks you'd like a week
2. Then, times by 2 and add 5.
3. Times it all by 50.
4. If you've had a birthday this year, add 1755; if not, add 1754
5. Take away the year you were born in.
Then , the first numbers are your age, and the rest are your drinks.
So say you wanted 16 a week and you were 12, you would end up with 1612.
work that out then!!!
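The trick is plain place-value arithmetic: (2d + 5)·50 = 100d + 250, and 250 + 1755 = 2005, so after subtracting the birth year you get 100·drinks + age — provided the current year is 2005, which is what the constants assume (in a later year you would bump 1755/1754 accordingly). A sketch of the steps:

```python
def trick(drinks, birth_year, had_birthday_this_year=True):
    n = drinks * 2 + 5                              # step 2
    n *= 50                                         # step 3
    n += 1755 if had_birthday_this_year else 1754   # step 4
    return n - birth_year                           # step 5

# the worked example from the post: 16 drinks a week, age 12 (born 1993, year 2005)
print(trick(16, 1993))   # 1612 -> "16" drinks, age "12"
```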
URSS.ru - Buy the book: Korotayev A., Malkov A., Khaltourina D. / Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends / Korotayev A., Malkov A., Khaltourina D. / ISBN 5-484-00559-0
Korotayev A., Malkov A., Khaltourina D.
Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends
Paperback. 176 pp.(English). 13.9 EUR
ISBN 5-484-00559-0
Human society is a complex nonequilibrium system that changes and develops constantly. Complexity, multivariability, and contradictions of social evolution lead researchers to a logical conclusion
that any simplification, reduction, or neglect of the multiplicity of factors leads inevitably to the multiplication of error and to significant misunderstanding of the processes under study. The
view that any simple general laws are not observed at all with respect to social evolution has become totally dominant within the academic community, especially among those who specialize in the
Humanities and who confront directly in their research the manifold unpredictability of social processes. A way to approach human society as an extremely complex system is to recognize differences of
abstraction and time scale between different levels. If the main task of scientific analysis is to detect the main forces acting on systems so as to discover fundamental laws at a sufficiently coarse
scale, then abstracting from details and deviations from general rules may help to identify measurable deviations from these laws in finer detail and shorter time scales. Modern achievements in the
field of mathematical modeling suggest that social evolution can be described with rigorous and sufficiently simple macrolaws.
The first book of the Introduction (Compact Macromodels of the World System Growth. Moscow: URSS, 2006) discusses general regularities of the World System long-term development. It is shown that
they can be described mathematically in a rather accurate way with rather simple models. In this book the authors analyze more complex regularities of its dynamics on shorter scales, as well as
dynamics of its constituent parts paying special attention to «secular» cyclical dynamics. It is shown that the structure of millennial trends cannot be adequately understood without secular cycles
being taken into consideration. In turn, for an adequate understanding of cyclical dynamics the millennial trend background should be taken into account.
Introduction: Millennial Trends
Chapter 1. Secular Cycles
Chapter 2. Historical Population Dynamics in China: Some Observations
Chapter 3. A New Model of Pre-Industrial Political-Demographic Cycles (by Natalia Komarova and Andrey Korotayev)
Chapter 4. Secular Cycles and Millennial Trends
Appendix 1. An Empirical Test of the Kuznets--Kremer Hypothesis
Appendix 2. Compact Mathematical Models of the World System's Development and Macroperiodization of the World System's History
First and foremost, our thanks go to the Institute for Advanced Study, Princeton. Without the first author's one-year membership in this Institute this book could hardly have been written. We are
especially grateful to the following professors and members of this institute for valuable comments on the first sketches of this monograph: Patricia Crone, Nicola Di Cosmo, John Shepherd, Ki Che
Angela Leung, and Michael Nylan. We are also grateful to the Russian Science Support Foundation and the Russian Foundation for Basic Research for financial support of this work (Projects ##
06--06--80459 and 04--06--80225).
We would like to express our special gratitude to Gregory Malinetsky, Sergey Podlazov (Institute for Applied Mathematics, Russian Academy of Sciences), Robert Graber (Truman State University), Victor
de Munck (State University of New York), Diana Pickworth (Aden University, Yemen), Antony J.Harper (New Trier College), Duran Bell, Donald Saari, and Douglas R.White (University of California,
Irvine) for their invaluable help and advice.
We would also like to thank our colleagues who offered us useful comments and insights on the subject of this book: Herbert Barry III (University of Pittsburgh), Yuri Berezkin (Kunstkammer,
St.Petersburg), Svetlana Borinskaya (Institute of General Genetics, Russian Academy of Sciences), Dmitri Bondarenko (Institute for African Studies, Russian Academy of Sciences), Robert L.Carneiro
(American Museum of Natural History, New York), Henry J.M.Claessen (Leiden University), Dmitrij Chernavskij (Institute of Physics, Russian Academy of Sciences), Marat Cheshkov (Institute of
International Economics, Russian Academy of Sciences), Georgi and Lubov Derlouguian (Northwestern University, Evanston), William T.Divale (City University of New York), Timothy K.Earle (Northwestern
University), Carol and Melvin Ember (Human Relations Area Files at Yale University), Leonid Grinin (Center for Social Research, Volgograd), Sergey Nefedov (Russian Academy of Sciences, Ural Branch,
Ekaterinburg), Nikolay Kradin (Russian Academy of Sciences, Far East Branch, Vladivostok), Vitalij Meliantsev (Institute of Asia and Africa, Moscow State University), Akop Nazaretyan (Oriental
Institute, Russian Academy of Sciences), Nikolay Rozov (Novosibirsk State University), Igor Sledzevski (Institute for African Studies, Moscow), Peter Turchin (University of Connecticut, Storrs), and
Paul Wason (Templeton Foundation). We would also like to thank Tatiana Shifrina, the Director of the "Khalturka-Design" Company, for the design of the cover of this monograph.
Needless to say, faults, mistakes, infelicities, etc., remain our own responsibility.
Andrey Korotayev is Director and Professor of the "Anthropology of the East" Center, Russian State University for the Humanities, Moscow, as well as Senior Research Fellow of the Institute for
Oriental Studies and the Institute for African Studies of the Russian Academy of Sciences. He also chairs the Advisory Committee in Cross-Cultural Research for "Social Dynamics and Evolution" Program
at the University of California, Irvine. He received his PhD from Manchester University, and Doctor of Sciences degree from the Russian Academy of Sciences. He is author of over 200 scholarly
publications, including Ancient Yemen (Oxford University Press, 1995), Pre-Islamic Yemen (Harrassowitz Verlag, 1996), Social Evolution (Nauka, 2003), World Religions and Social Evolution of the Old
World Oikumene Civilizations: a Cross-Cultural Perspective (Mellen, 2004), Origins of Islam (OGI, 2006). He is a laureate of the Russian Science Support Foundation Award in "The Best Economists of
the Russian Academy of Sciences" nomination (2006).
Artemy Malkov is Research Fellow of the Keldysh Institute for Applied Mathematics, Russian Academy of Sciences from where he received his PhD. His research concentrates on the modeling of social and
historical processes, spatial historical dynamics, genetic algorithms, cellular automata. He has authored over 35 scholarly publications, including such articles as "History and Mathematical
Modeling" (2000), "Mathematical Modeling of Geopolitical Processes" (2002), "Mathematical Analysis of Social Structure Stability" (2004) that have been published in the leading Russian academic
journals. He is a laureate of the 2006 Award of the Russian Science Support Foundation.
Daria Khaltourina is Research Fellow of the Center for Regional Studies, Russian Academy of Sciences (from where she received her PhD) and Associate Professor at the Russian Academy for Civil
Service. Her research concentrates on complex social systems, countercrisis management, cross-cultural and cross-national research, demography, sociocultural anthropology, and mathematical modeling
of social processes. She has authored over 40 scholarly publications, including such articles as "Concepts of Culture in Cross-National and Cross-Cultural Perspectives" (World Cultures 12, 2001),
"Methods of Cross-Cultural Research and Modern Anthropology" (Etnograficheskoe obozrenie 5, 2002), "Russian Demographic Crisis in Cross-National Perspective" (in Russia and the World. Washington, DC:
Kennan Institute, forthcoming). She is a laureate of the Russian Science Support Foundation Award in "The Best Economists of the Russian Academy of Sciences" nomination (2006).
Review of Andrey Korotayev, Artemy Malkov, and Daria Khaltourina, Introduction to Social Macrodynamics (Three Volumes). Moscow: URSS, 2006.
Robert Bates Graber
Professor Emeritus of Anthropology Division of Social Science Truman State University
(published in Social Evolution & History. Vol. 7 (2008). Issue 2. Forthcoming)
This interesting work is an English translation, by the authors and in three brief volumes, of an amended and expanded version of their Russian work published in 2005. Andrey Korotayev is Director of
the "Anthropology of the East" Center at the Russian State University for the Humanities; Artemy Malkov is Research Fellow of the Keldysh Institute for Applied Mathematics; and Daria Khaltourina is
Research Fellow of the Center for Regional Studies. By way of full disclosure, I should state that I have enjoyed not only making the acquaintance of the first and third authors at professional
meetings, but also the opportunity to offer comments on earlier versions of some parts of this English translation. In terms coined recently by Peter Turchin, the first volume focuses on "millennial
trends," the latter two on "secular cycles" a century or two in duration.
The first volume's subtitle is Compact Models of the World System Growth (CMWSG hereafter). Its mathematical basis is the standard hyperbolic growth model, in which a quantity's proportional (or
percentage) growth is not constant, as in exponential growth, but is proportional to the quantity itself. For example, if a quantity growing initially at 1 percent per unit time triples, it will by
then be growing at 3 percent per unit time. The remarkable claim that human population has grown, over the long term, according to this model was first advanced in a semi-serious paper of 1960
memorably entitled "Doomsday: Friday, 13 November, A.D. 2026" (von Foerster, Mora, and Amiot, 1960). Admitting that this curve notably fails to fit world population since 1962, chapter 1 of CMWSG
attempts to salvage the situation by showing that the striking linearity of the declining rates since that time, considered with respect to population, can be identified as still hyperbolic, but in
inverse form. Chapter 2 finds that the hyperbolic curve provides a very good fit to world population since 500 BCE. The authors believe this reflects the existence, from that time on, of a single,
somewhat integrated World System; and they find they can closely simulate the pattern of actual population growth by assuming that although population is limited by technology (Malthus), technology
grows in proportion to population (Kuznets and Kremer). Chapter 3 argues that world GDP has grown not hyperbolically but quadratically, and that this is because its most dynamic component contains
two factors, population and per-capita surplus, each of which has grown hyperbolically. To this demographic and economic picture chapter 4 adds a "cultural" dimension by ingeniously incorporating a
literacy multiplier into the differential equation for absolute population growth (with respect to time) such that the degree to which economic surplus expresses itself as population growth depends
on the proportion of the population that is literate: when almost nobody is literate, economic surplus generates population growth; when almost everybody is literate, it does not. This allows the
authors' model to account nicely for the dramatic post-1962 deviation from the "doomsday" (hyperbolic) trajectory. It also paves the way for a more specialized model stressing the importance, in the
modern world, of human-capital development (chapter 5). Literacy's contribution to economic development is neatly and convincingly linked, in chapter 6, to Weber's famous thesis about Protestantism's
contribution to the rise of modern capitalism. Chapter 7 cogently unravels and elucidates the complex role of literacy (male, female, and overall) in the demographic transition. In effect, the
"doomsday" population trajectory carried the seeds of its own aborting:
the maximum values of population growth rates cannot be reached without a certain level of economic development, which cannot be achieved without literacy rates reaching substantial levels. Hence,
again almost by definition the fact that the [world] system reached the maximum level of population growth rates implies that . . . literacy [had] attained such a level that the negative impact of
female literacy on fertility rates would increase to such an extent that the population growth rates would start to decline (CMWSG: 104).
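The hyperbolic "doomsday" model behind chapter 1 can be stated in one line: if per-capita growth is proportional to N itself, then dN/dt = N²/C, whose solution N(t) = C/(t₀ - t) blows up at the finite time t₀. The sketch below uses illustrative constants roughly in the spirit of the von Foerster et al. fit — the exact values are mine, not the authors':

```python
C, t0 = 2.0e11, 2026.87        # illustrative constants, not a fitted model

def N(t):
    """Hyperbolic growth: solution of dN/dt = N**2 / C."""
    return C / (t0 - t)

for year in (1000, 1500, 1900, 1960, 2000):
    print(year, round(N(year) / 1e9, 2), "billion")
# growth accelerates toward a singularity at t0; after ~1962 the real population
# departs from this curve, which is exactly the deviation the authors then model
```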
The second volume is subtitled Secular Cycles and Millennial Trends (SCMT hereafter). Chapter 1 stresses that demographic cycles are not, as often has been thought, unique to China and Europe, but
are associated with complex agrarian systems in general; and it reviews previous approaches to modeling such cycles. Due to data considerations, the lengthy chapter 2 focuses on China. In the course
of assessing previous work, the authors, though writing of agrarian societies in particular, characterize nicely what is, in larger view, the essential dilemma reached by every growing human population:
In agrarian society within fifty years such population growth [0.6 percent per year] leads to diminishing of per capita resources, after which population growth slows down; then either solutions to
resource problems (through some innovations) are found and population growth rate increases, or (more frequently) such solutions are not found (or are not adequate), and population growth further
declines (sometimes below zero) (SCMT: 61-62).
(Indeed, for humans, technological solutions that raise carrying capacity are always a presumptive alternative to demographic collapse; therefore, asserting or even proving that a particular
population "exceeded its carrying capacity" is not sufficient to account logically for the collapse of either a political system or an entire civilization.) Interestingly, the authors find evidence
that China's demographic cycles, instead of simply repeating themselves, tended to increase both in duration and in maximum pre-collapse population. In a brief chapter 3 the authors present a
detailed mathematical model which, while not simulating these trends, does simulate (1) the S-shaped logistic growth of population (with the effects of fluctuating annual harvests smoothed by the
state's functioning as a tax collector and famine-relief agency); (2) demographic collapse due to increase in banditry and internal warfare; and (3) an "intercycle" due to lingering effects of
internal warfare. Chapter 4 offers a most creative rebuttal of recent arguments against population pressure's role in generating pre-industrial warfare, arguing that a slight negative correlation, in
synchronic cross-cultural data, is precisely what such a causal role would be expected to produce (due to time lags) when warfare frequency and population density are modeled as predator and prey,
respectively, using the classic Lotka-Volterra equations. Chapter 4 also offers the authors' ambitious attempt to directly articulate secular cycles and millennial trends. Ultimately they produce a
model that, unlike the basic one in chapter 3, simulates key trends observed in the Chinese data in chapter 2:
the later cycles are characterized by a higher technology, and, thus, higher carrying capacity and population, which, according to Kremer's technological development equation embedded into our model,
produces higher rates of technological (and, thus, carrying capacity) growth. Thus, with every new cycle it takes the population more and more time to approach the carrying capacity ceiling to a
critical extent; finally it "fails" to do so, the technological growth rates begin to exceed systematically the population growth rates, and population escapes from the "Malthusian trap" (SCMT: 130).
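Chapter 4's predator-prey framing can be made concrete with the classic Lotka-Volterra equations, treating warfare frequency as predator and population density as prey. A minimal Euler integration — all coefficients and initial values below are arbitrary choices of mine, not the book's — exhibits the time lag the authors invoke: warfare peaks trail population peaks:

```python
# dx/dt = a*x - b*x*y   (prey: population density)
# dy/dt = -c*y + d*x*y  (predator: warfare frequency)
a, b, c, d = 1.0, 0.5, 1.0, 0.5
x, y, dt = 2.0, 1.0, 0.001
xs, ys = [x], [y]
for _ in range(20000):                  # ~3 oscillation periods
    x, y = x + dt * (a*x - b*x*y), y + dt * (-c*y + d*x*y)
    xs.append(x)
    ys.append(y)

def first_peak(series):
    """Index of the first local maximum."""
    for i in range(1, len(series) - 1):
        if series[i - 1] < series[i] > series[i + 1]:
            return i

print("population peaks at t =", first_peak(xs) * dt)
print("warfare peaks at t =", first_peak(ys) * dt)  # later: warfare lags population
```

Because the two series cycle out of phase, a same-moment (synchronic) sample of many such systems need not show the positive correlation one might naively expect — which is the point the review attributes to the book.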
The third volume is subtitled Secular Cycles and Millennial Trends in Africa (SCMTA hereafter). It is divided into two parts, the first of which is devoted to Egypt in the 1st through 18th centuries
CE (chapters 1-6); the second, to postcolonial tropical Africa (chapters 7-8). The first part argues that while Egypt's population probably increased over the period in question, the increase was
modest compared to that of other agrarian societies. This modesty the authors ascribe to the remarkable brevity of Egypt's political-demographic cycles, which they estimate at averaging around ninety
years, little more than half as long as China's. With such brief cycles, collapse repeatedly occurred long before carrying capacities were approached. Strongly inspired by Peter Turchin's work but
hewing more closely to insights of the anachronistic 14th-century cultural evolutionist Ibn Khaldun, the authors find that these brief cycles can be modeled by including climatic fluctuation and,
especially, the rapid reproduction of high-consumption elites due to polygyny. They estimate the annual growth rate for Egyptian elites at 4 percent per year, the rate for commoners (monogamous) at
only 1 percent per year, a recipe for rapid political-demographic crisis and collapse, since elites of course depend on the taxation of commoners!
The second part of SCMTA describes the impact of modernization on political-demographic cycles. The authors find that low nutrition predicts political instability and civil war in African nations;
for prevention, they recommend especially the diversification of national economies, and the fostering of education to promote economic development. Concerning the underlying causes of historical
events, they quote John Maynard Keynes writing in 1920:
The great events of history are often due to secular changes in the growth of population and other fundamental economic causes, which, escaping by their gradual character the notice of contemporary
observers, are attributed to the follies of statesmen or the fanaticism of atheists (quoted in SCMTA: 113).

Some aspects of this work are easy to criticize. The reporting of probabilities with
sixteen zeros to the right of the decimal point will strike as gratuitous those readers who consider .001, .01, or even .05 sufficient to render randomness an implausible explanation for a result,
especially when, as here, the danger of erroneously rejecting the null hypothesis (alpha or Type I error) is clearly preferable to the premature truncation of inquiry that could result from erroneous
failure to reject the null (beta or Type II error). More importantly, one would like to have seen more attention given to the problems that attend using regression with time-series data. Values in a
variable's time series tend to be affected by adjacent values ("autocorrelation"), a condition that violates one of the assumptions underlying the ordinary-least-squares model and that regularly
results, for regressions on time itself (e.g., population plotted against time), in exaggerated R-squared magnitudes and significance levels; similar exaggeration results for regressions of a
time-trending variable on one or more other time-trending variables (e.g., population growth rate plotted against population). The frequent appearance, in the book's graphs, of long runs of data
points on the same side of a theoretical line or curve is a symptom of autocorrelation; and the book's regressions of trending variables on other trending variables do not appear to have been
protected from this source of spuriousness by inclusion of time itself as an independent variable in the regression equations. The hyperbolic curve, moreover, is not systematically compared here with
serious competitors. For these reasons the hyperbolic curve's superiority, as a description of human population history, remains by no means beyond question (cf. Cohen 1995: chapter 5 and appendix).
Important questions remain, too, about the tenability of the Kuznets-Kremer assumptions, appealing as they are to some of us, offered to theoretically account for the hyperbolic model's applicability
to human population history. For example, the key assumption that technological growth tends to keep pace with population growth appears problematic enough to warrant perhaps greater caution than the
authors express. Also, one would like to see a better fit between the abstract global model on one hand, and what we know about the growth rates for particular populations on the other. Since
particular populations seldom sustain even exponential growth for very long, explaining sustained hyperbolic growth globally apparently requires invoking the spread, from population to population, of
the demographic transition's first phase (cf. CMWSG: 92-93; SCMTA: 116-117). This is for recent centuries only; to cover the pre-industrial period, the authors posit five somewhat intricate and
interrelated mechanisms, one of which again relies on diffusion (the "innovation diffusion" mechanism) (SCMTA: 140-141). It seems somewhat awkward, however, to rely so much on diffusion from donor to
recipient regional populations, sometimes over considerable time periods, given that the Kuznets-Kremer assumptions appear to ascribe the (apparently) hyperbolic shape of long-term global population
growth to direct and continuous interaction, within a single world-system population, between a single technological base and a single inventive potential (both seen as proportional, quantitatively,
to population itself).
While the translation's English is often less than felicitous, it is quite clear; the few typographical errors I noted were not of a kind to create misunderstanding. The authors are to be commended,
I think, for putting most of the mathematics "up front" rather than tucked away in appendices, as publishers are wont to urge. (There are technical appendices: three in CMWSG, two in SCMT, and one in
SCMTA; but their function is by no means to keep the text itself free of math.) Cultural evolutionism is still near the beginning of the long process of becoming a mathematical science; to that
extent, the medium of this book is, if perhaps not the message, certainly a message (Carneiro 2003: 285-286)!
Even more generally, this work vigorously asserts the value of studying social and cultural evolution as such. Noting the "almost total disenchantment with the evolutionary approach in the social
sciences as a whole" (SCMT: 140), the authors perspicuously compare the resulting stultification to the fate that "would have stricken physicists if a few centuries ago they had decided that there is
no real thing such as gas, that gas is a mental construction, and that one should start with such a 'simple' thing as a mathematical model of a few free-floating molecules in a closed vessel" (SCMT:
140, note 6).
Thirty years ago, Mark Nathan Cohen wrote, "It has been my observation that simple hypotheses boldly defended are often the best teaching tools and the best spurs to research" (Cohen 1977: ix). Aside
from the difficulties we all encounter, sooner or later, comprehending mathematics (we differ only in when, not in whether, the difficulties begin), this book's theses are simple; and they are
nothing if not "boldly defended"! In sum, this work deserves attention from anyone interested in cultural evolutionism's scientific prospects, and close study indeed by anyone hoping to contribute to
this field's development from a mathematical point of view.
Carneiro, R. L. 2003. Evolutionism in Cultural Anthropology: A Critical History. Boulder, CO: Westview.
Cohen, J. E. 1995. How Many People Can the Earth Support? New York: W. W. Norton.
Cohen, M. N. 1977. The Food Crisis in Prehistory: Overpopulation and the Origins of Agriculture. New Haven, CT: Yale University Press.
Foerster, H. von, P. Mora, and L. Amiot. 1960. Doomsday: Friday, 13 November, A.D. 2026. Science 132: 1291-1295.
The following description of the DAS subtests includes qualitative characteristics for g loadings, reliability, and specificity. The following definitions and criteria were used in assessing each
subtest. Additional information is available elsewhere in this volume and in four valuable sources: Carroll (1993), Elliott (1997), Flanagan, Genshaft, & Harrison (1997), and McGrew & Flanagan (1998,
pp. 14-25, 64-68, 71, 63-91, 92-128).
When describing the norms for each subtest, the terms usual and extended mean that the subtest is appropriate for the full range of ability at that age. The term out of level denotes ages at which the subtest is appropriate for most, but not all, children.
The g Loading refers to the subtest's loading on the first unrotated factor or component in principal factor analysis. A subtest with a general factor loading of .70 or greater was considered Good; a loading of .51 to .69, Fair; and a loading of .50 or lower, Poor. These are the same criteria used in The Intelligence Test Desk Reference (ITDR): Gf-Gc Cross-Battery Assessment (Kaufman, 1979, pp. 109-110; McGrew & Flanagan, 1998, pp. 64, 72). Estimates of g loadings were taken from Tables 9.4 (p. 202), 9.7 (p. 204), and 9.11 (p. 206) of the DAS Introductory and Technical Handbook.
Reliability refers to the degree to which a test score is free from errors of measurement. A subtest's reliability was considered High if it was greater than or equal to .90, Medium if it was greater than .79 but less than .90, and Low if it was below .80 (McGrew & Flanagan, 1998, p. 64). Subtest reliabilities were found in the DAS Introductory and Technical Handbook, Tables 8.1 and 8.2.
A subtest has three types of variance: common variance (that which is shared with other subtests in the battery); specific variance (that portion of the subtest’s variance that is reliable and unique
to that subtest); and error variance (equal to 1 minus the reliability coefficient). We cannot interpret an ability supposedly measured by an individual subtest, unless that subtest contains a
reasonable amount of reliable specific variance (specificity) and this specificity exceeds the error variance. We computed the specificity for each subtest at each age by the following procedure. The
shared or common variance was first estimated by the squared multiple correlation between the specified subtest and all other subtests in the battery. Subtracting this common or shared variance from the reliability coefficient provided the estimate of specific variance for each subtest. Specificity was considered Ample if the value was equal to or above 25% of the total test variance and it
exceeded the error variance, Adequate if it met only one of the two criteria noted for Ample, and Inadequate if it did not meet either of the two criteria noted for Ample. Again we followed the
criteria listed in McGrew & Flanagan (1998, pp. 64-66).
The Gf-Gc classifications used here are those of the McGrew, Flanagan, and Ortiz integrated Carroll/Cattell-Horn Gf-Gc cross-battery approach (McGrew & Flanagan, 1998). See also Carroll (1993); Flanagan, Genshaft, & Harrison (1997); Flanagan, McGrew, & Ortiz (2000); McGrew (1997); and Woodcock (1990).
CORE SUBTESTS
Word Definitions, Similarities, Matrices, Sequential & Quantitative Reasoning, Recall of Designs, Pattern Construction, Block Building, Verbal Comprehension, Picture Similarities, Early Number Concepts, Naming Vocabulary, Copying
DIAGNOSTIC SUBTESTS
Recall of Objects, Recall of Digits, Recognition of Pictures, Speed of Information Processing, Matching Letter-Like Forms
ACHIEVEMENT TESTS
Basic Number Skills, Spelling, Word Reading
Math Forum Discussions
Date Subject Author
6/18/13 Joel David Hamkins on definable real numbers in analysis fom
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis David Petry
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis fom
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis Peter Percival
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/21/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/21/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/21/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/24/13 Re: WM screws up the notion of a limit! Virgil
6/25/13 Re: WM screws up the notion of a limit! Virgil
6/25/13 Re: WM screws up the notion of a limit! Virgil
6/26/13 Re: WM screws up the notion of a limit! Virgil
6/26/13 Re: WM screws up the notion of a limit! fom
6/26/13 Re: WM screws up the notion of a limit! Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/22/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/23/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/24/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/25/13 Re: Joel David Hamkins on definable real numbers in analysis Ralf Bader
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis David C. Ullrich
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/27/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/27/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/26/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/27/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/27/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Tucsondrew@me.com
6/29/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/29/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/30/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/29/13 Re: Joel David Hamkins on definable real numbers in analysis mueckenh@rz.fh-augsburg.de
6/29/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/28/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
6/19/13 Re: Joel David Hamkins on definable real numbers in analysis Virgil
325 f to celcius
You asked:
325 f to celcius
I can't be sure, since you don't remember the function, but it is possibly because you didn't check the second derivative of the function. You see, the first derivative equaling zero indicates a critical point, which may be a local extremum. However, if the second derivative also equals zero at that point, the second-derivative test is inconclusive and you need further analysis. The case may also be that the function has no real value at zero; this is what some call a discontinuity. There may even be a limit as the function approaches zero, but still no value at that point. Again, looking beyond the first derivative lets you make these distinctions.
Also, if the function had no values for x less than zero, it looks like there would be a relative maximum at zero.
Sumner, WA ACT Tutor
Find a Sumner, WA ACT Tutor
...My primary programming language is currently Java. Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover. I have taken 2 quarters of Discrete Structures (Mathematics) at University of Washington, Tacoma.
16 Subjects: including ACT Math, chemistry, calculus, algebra 2
...I've taught in classrooms, over the kitchen table, and I have to say that the online experience is by far the best. We cover more material faster, it's much more convenient for our schedules,
and I can email you PDFs of all of the problems that we did. You can also record our session so you can watch them again and again for free.
16 Subjects: including ACT Math, geometry, Chinese, algebra 1
...For seven years I have been doing this in the Buckley/Bonney Lake area where I live and in the White River School District where I am fairly well known. Although I do not hold a teaching
certificate (working on that), I am a certified Substitute teacher, I taught Algebra 2 at Choice HS, and I wa...
11 Subjects: including ACT Math, calculus, ASVAB, geometry
...I have had roles in many productions, both musical (such as Stephen Sondheim's "A Little Night Music" ) and non-musical (John Patrick's "Teahouse of the August Moon"), and I also directed the
play WASP by Steve Martin. My approach to acting is method based (notable examples of method actors inc...
22 Subjects: including ACT Math, reading, English, writing
...One sentence out of place and the whole meaning of the paper or paragraph could be lost to the reader's comprehension. Proper sentence structure helps provide a road map toward the final
meaning of the piece. Many people associate geography with knowing the locations of countries and capitals around the world.
50 Subjects: including ACT Math, chemistry, reading, English
[C Tutorial] Recursion
Hello guys ,
I decided to do an example about recursion because it's a topic I had difficulty understanding at first. So what is recursion in programming?
Recursion is a programming technique that allows the programmer to express operations in terms of themselves. In C, this takes the form of a function that calls itself. A useful way to think of
recursive functions is to imagine them as a process being performed where one of the instructions is to "repeat the process". This makes it sound very similar to a loop because it repeats the same
code, and in some ways it is similar to looping. On the other hand, recursion makes it easier to express ideas in which the result of the recursive call is necessary to complete the task. Of course,
it must be possible for the "process" to sometimes be completed without the recursive call. One simple example is the idea of building a wall that is ten feet high; if I want to build a ten foot high
wall, then I will first build a 9 foot high wall, and then add an extra foot of bricks. Conceptually, this is like saying the "build wall" function takes a height and if that height is greater than
one, first calls itself to build a lower wall, and then adds one a foot of bricks.
so here is an example I wrote for recursion so you can understand it better(I will post more examples as I code them)
/* rafy recursion example 1 */
#include <stdio.h>

void recurse(int count)
{
    if (count < 9) /* We need a limit, otherwise the program would recurse until it crashes */
    {
        printf("%d\n", count); /* prints the count on-screen */
        recurse(count + 1);    /* calls recurse again, adding 1 to the count */
    }
}

int main(void)
{
    recurse(1); /* the first time I call the function I put count = 1 */
    return 0;
}
So here it goes for now. I know tail recursion is missing; when I learn it, I will add a second part to the tutorial.
Now would this be considered AAS?
We call this RHS,
because it is concerned with the 90-degree angle.
my choices are AAS, SAS, SSS, or insufficient informationt to prove congruence.
The Angle-Angle-Side postulate (often abbreviated as AAS) states that if two angles and the non-included side of one triangle are congruent to two angles and the non-included side of another triangle, then these two triangles are congruent. In other words, yes.
thank you.
G. Berhuy, Z. Reichstein, On the notion of canonical dimension for algebraic groups, Advances in Mathematics, 198 (2005), 128--171.
Abstract: We define and study a new numerical invariant of an algebraic group action; we call it the canonical dimension. We then apply the resulting theory to the problem of computing the minimal
number of parameters required to define a generic hypersurface of degree d in P^{n-1}.
Comparison Theorems for the Position-Dependent Mass Schrödinger Equation
ISRN Mathematical Physics
Volume 2012 (2012), Article ID 461452, 11 pages
Research Article
Comparison Theorems for the Position-Dependent Mass Schrödinger Equation
Theoretical Physics Department, FFEKS, Dniepropetrovsk National University, 72 Gagarin Avenue, Dniepropetrovsk 49010, Ukraine
Received 13 August 2011; Accepted 13 September 2011
Academic Editor: E. Yomba
Copyright © 2012 D. A. Kulikov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The following comparison rules for the discrete spectrum of the position-dependent mass (PDM) Schrödinger equation are established. (i) If a constant mass m_0 and a PDM M(r) are ordered everywhere, that is, either m_0 ≤ M(r) or m_0 ≥ M(r), then the corresponding eigenvalues of the constant-mass Hamiltonian and of the PDM Hamiltonian with the same potential and the BenDaniel-Duke ambiguity parameters are ordered. (ii) The corresponding eigenvalues of PDM Hamiltonians with the different sets of ambiguity parameters are ordered if ∇²(1/M) has a definite sign. We prove these statements by using the Hellmann-Feynman theorem and offer examples of their application.
1. Introduction
In the last few decades, quantum mechanical systems with position-dependent mass (PDM) have received considerable attention. The interest stems mainly from the relevance of the PDM background for describing the physics of compositionally graded crystals [1, 2] and semiconductor nanodevices [3–5]. These applications have stimulated the study of various theoretical aspects of the PDM Schrödinger equation; in particular, its exact solvability [6–8], shape invariance [9], supersymmetry and intertwining properties [10–12], point canonical transformation [13, 14], iterative solution [15], and relation to theories in curved spaces [16] have been examined.
However, it is known that the PDM Schrödinger equation suffers from an ambiguity in operator ordering, caused by the non-vanishing commutator of the momentum operator and the PDM. PDM Hamiltonians with different ambiguity parameters have been proposed [17–20], but none of them can be preferred according to the existing reliability tests [21–23]. Therefore, attempts are made to settle the issue by fitting the calculated binding energies to the experimental data [24, 25].
To generalize such findings and obtain additional information, one needs tools to compare the energy eigenvalues predicted by the different PDM Hamiltonians. Within the constant-mass framework, a convenient tool is provided by the so-called comparison theorems [26–28]. For example, the elementary comparison theorem [26, 28] states that if two real potentials are ordered, V_1(r) ≤ V_2(r), then each corresponding pair of eigenvalues is ordered, E_n^(1) ≤ E_n^(2).
The purpose of this paper is to establish the comparison theorems that confront the energy eigenvalues of the constant-mass and PDM Schrödinger equations, as well as the energy eigenvalues of the PDM problems with different ambiguity parameters. Our presentation is based on the Hellmann-Feynman theorem [29, 30] and makes use of the ideas developed for the constant-mass case [28, 31].
The plan of the paper is as follows. In Section 2, we introduce the PDM Hamiltonians and recall the Hellmann-Feynman theorem. In Section 3, the comparison theorems on the PDM background are
formulated and proved. In Section 4, we apply these theorems to two PDM problems of current interest. Finally, our conclusions are summarized in Section 5.
2. Preliminaries
For the PDM Schrödinger equation, the most general form of the Hamiltonian is given by [17]

H = (1/4) [ M^α(r) p M^β(r) p M^γ(r) + M^γ(r) p M^β(r) p M^α(r) ] + V(r), (2.1)

where α, β, γ are the ambiguity parameters (α + β + γ = −1) and the units with ħ = 1 are used. In this paper, we will adopt the sets of the ambiguity-parameter values suggested by BenDaniel and Duke [18] (α = 0, γ = 0), Li and Kuhn [19] (β = γ = −1/2), and Gora and Williams [20] (α = −1, γ = 0). Although there are infinitely many alternative values of α, β, γ constrained by α + β + γ = −1, our derivation of the comparison theorems requires some quantum mean values to have definite signs, which imposes further restrictions. As we will see, these amount to choosing α = 0 or γ = 0, still capturing most of the ambiguity-parameter sets known in the literature.
The methods we are going to apply are valid for arbitrary dimension D. We suppose that the Hamiltonian operators have suitable domains; they are bounded below, essentially self-adjoint, and have at least one discrete eigenvalue at the bottom of the spectrum.
To derive our main results, we need the Hellmann-Feynman theorem [29, 30]. This theorem states that if the Hamiltonian of a system is H(λ), where λ is a parameter, and the eigenvalue equation for a bound state is H(λ) ψ(λ) = E(λ) ψ(λ), where E(λ) is the energy and ψ(λ) the normalized associated eigenstate, then

dE(λ)/dλ = ⟨ψ(λ)| ∂H(λ)/∂λ |ψ(λ)⟩. (2.2)

Note that the proof relies on the self-adjointness of H(λ) and does not change for PDM Hamiltonians.
3. Comparison Theorems
First, let us formulate the theorem that confronts the energy eigenvalues of the constant mass and BenDaniel-Duke PDM Hamiltonians with the same potentials.
Theorem 3.1. Suppose that the Hamiltonian H_0 = −(1/(2m_0)) ∇² + V(r), with a real potential V(r) and a constant mass m_0, has discrete eigenvalues E_n characterized by a set of quantum numbers n. Then, the corresponding eigenvalues E_n^{BDD} of the BenDaniel-Duke PDM Hamiltonian H_{BDD} = −(1/2) ∇·(1/M(r))∇ + V(r) satisfy E_n^{BDD} ≥ E_n if M(r) ≤ m_0 everywhere, and E_n^{BDD} ≤ E_n if M(r) ≥ m_0 everywhere, provided that these eigenvalues exist.
Proof. Define the Hamiltonian H(t) = −(1/2) ∇·[1/m_0 + t(1/M(r) − 1/m_0)]∇ + V(r), 0 ≤ t ≤ 1, which turns into H_0 and H_{BDD} when t = 0 and t = 1, respectively. Assume that H(t) possesses well-defined eigenvalues E_n(t), for 0 ≤ t ≤ 1, and that the normalized associated eigenfunctions in the coordinate representation are ψ_n(r, t).
Applying the Hellmann-Feynman theorem (2.2), we get

dE_n(t)/dt = ∫ ψ_n* [∂H(t)/∂t] ψ_n d^D r,

where the integration is performed over the whole space and the asterisk denotes complex conjugation.
Integrating by parts and taking into account that ψ_n and ∇ψ_n must vanish at infinity, we obtain

dE_n(t)/dt = (1/2) ∫ (1/M(r) − 1/m_0) |∇ψ_n|² d^D r.

It is a positive (negative) number if M(r) < m_0 (M(r) > m_0) for all r, so that E_n(t) is an increasing (decreasing) function of t. For definiteness, let M(r) ≤ m_0. Then, it follows immediately that E_n^{BDD} = E_n(1) ≥ E_n(0) = E_n, which completes the proof. Note that an alternative proof can be given by applying the variational characterization [32] of the discrete part of the Schrödinger spectrum.
It is now tempting to compare the eigenvalues of the constant-mass Hamiltonian with those of PDM Hamiltonians other than the BenDaniel-Duke one. However, in that case at least one of the ambiguity parameters α and γ in (2.1) must be nonzero, and we encounter an obstacle that becomes clear if we first find out how the eigenvalues of different PDM Hamiltonians are ordered. This is done in the following theorem.
Theorem 3.2. The discrete eigenvalues E_n^{BDD}, E_n^{LK}, and E_n^{GW} of the BenDaniel-Duke, Li-Kuhn, and Gora-Williams PDM Hamiltonians,

H_{BDD} = −(1/2) ∇·(1/M)∇ + V,
H_{LK} = H_{BDD} − (1/8) ∇²(1/M),
H_{GW} = H_{BDD} − (1/4) ∇²(1/M),

satisfy E_n^{GW} ≤ E_n^{LK} ≤ E_n^{BDD} if ∇²(1/M(r)) ≥ 0 everywhere (with the inequalities reversed if ∇²(1/M(r)) ≤ 0 everywhere), provided that these eigenvalues exist.
Proof. Let us prove the inequality for E_n^{BDD} and E_n^{LK}. We define the parameter-dependent Hamiltonian H(t) = H_{BDD} + t (H_{LK} − H_{BDD}), 0 ≤ t ≤ 1, and make use of the Hellmann-Feynman theorem (2.2), to obtain

dE_n(t)/dt = ∫ ψ_n* (H_{LK} − H_{BDD}) ψ_n d^D r.

Integration by parts yields

dE_n(t)/dt = −(1/8) ∫ ∇²(1/M) |ψ_n|² d^D r.

Let ∇²(1/M) ≤ 0 for all r; then E_n(t) is an increasing function and we get E_n^{BDD} = E_n(0) ≤ E_n(1) = E_n^{LK}, which completes the proof. For the case of E_n^{LK} and E_n^{GW}, the proof is identical since the factor ∇²(1/M) arises in this case as well. However, it is hardly possible to extend the theorem to the situations when both the ambiguity parameters α and γ are nonzero. The reason is that integrals like (3.15) then contain extra terms (proportional to the product αγ), so that the ∇²(1/M) term cannot determine the sign by itself.
Moreover, it is now evident from (3.6) and (3.15) that if we try to compare E_n^{LK} with the constant-mass energy E_n, then the sign of the integral will be determined by the signs of both 1/M − 1/m_0 and ∇²(1/M). Unfortunately, this leads to inconsistent conditions. For example, in order to get the inequality E_n^{LK} ≤ E_n, we have to put 1/M ≤ 1/m_0 and ∇²(1/M) ≥ 0; that is, 1/M must be bounded from above and convex, which is impossible. The same obstacle is encountered when dealing with E_n^{GW}.
4. Applications
In this section, we consider two specific PDM problems, which are discussed in literature and show how the comparison theorems explain the peculiarities of their energy spectra.
Case 1. The three-dimensional mass distribution of the form (4.1), with a nonnegative deformation parameter λ, has been shown [16] to give rise to an exactly solvable extension of the Coulomb problem. This extension is useful as it enables one to trace the link between the PDM background and theories with deformations in the quantum canonical relations or with curvature of the underlying space.
For this case, the discrete energy eigenvalues of the PDM Hamiltonian (2.1) are written as in (4.2) [16], where l and n are the orbital and principal quantum numbers, respectively. In contrast to the constant-mass Coulomb problem, the system has only a finite number of discrete levels, so that the allowed values of l and n are restricted by (4.3). Such a restriction implies that in the presence of the PDM the energy eigenvalues may be closer to the continuum and thus larger than the ordinary Coulomb eigenenergies calculated with the mass m_0.
It is Theorems 3.1 and 3.2 that permit us to determine how the energy eigenvalues are ordered. Since in (4.1) we have M(r) ≤ m_0, the eigenvalues of the BenDaniel-Duke PDM Hamiltonian must obey E_{nl}^{BDD} ≥ E_{nl}, by Theorem 3.1. Since ∇²(1/M) ≥ 0, it follows from Theorem 3.2 that the eigenvalues of the PDM Hamiltonians with different ambiguity parameters are ordered as E_{nl}^{GW} ≤ E_{nl}^{LK} ≤ E_{nl}^{BDD}.
In order to illustrate these inequalities, we present Figure 1, where we plot the energy for the ground state (n = 1, l = 0) and the first radially excited state (n = 2, l = 0) as a function of the deforming parameter λ. In Figure 1, the solid lines correspond to the constant-mass case, whereas the broken curves represent the PDM cases with different ambiguity parameters. The circles indicate the points at which the bound states disappear according to (4.3). From Figure 1, we see that, for all allowed λ, it holds that E^{BDD} ≥ E, as it was proved; for the comparison of E^{LK} and E^{GW} with the constant-mass energies we observe regions of both signs. Furthermore, we can see that the second proved inequality, E^{GW} ≤ E^{LK} ≤ E^{BDD}, is indeed fulfilled.
Case 2. Now, let us consider the one-dimensional mass distribution which is found to be useful for studying quantum wells [3]. Applying Theorem 3.1 to this PDM profile, we get the inequality that
justifies the shift of electron and hole binding energies to lower values which was observed in [3] when the spatial dependence of mass was included. On the other hand, Theorem 3.2 does not apply
since the quantity has an indefinite sign.
It is worth examining how this sign indefiniteness affects the energy spectrum. To that end, we choose the harmonic-oscillator potential, , for which an accurate numerical solution of the PDM
Schrödinger equation with the mass distribution (4.4) is available [15]. In Figure 2, we plot the corresponding energy of the ground and the fifth excited states, as a function of , for the three
PDM Hamiltonians with different ambiguity parameters. The energies have been calculated with , by using the shooting method, and are in agreement with those computed in [15], where results
obtained with the same parameter values are reported.
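Since this excerpt elides the paper's formulas and the profile (4.4), here is an illustrative sketch of the kind of shooting computation described. The mass profile, potential, and units below are invented for demonstration, not taken from the paper: the constant-mass run recovers the oscillator ground energy 0.5, and a profile with m(x) <= m0 everywhere shifts it upward, as the variational argument for pointwise mass ordering predicts.

```python
import math

def ground_energy(mass, V, L=6.0, n=600, e_lo=0.0, e_hi=5.0):
    """Ground-state energy of the 1-D BenDaniel-Duke Hamiltonian
    H = -(1/2) d/dx [1/m(x)] d/dx + V(x)   (hbar = 1),
    with psi(-L) = psi(L) = 0, found by shooting plus bisection on E."""
    h = 2.0 * L / n
    x = [-L + i * h for i in range(n + 1)]
    # 1/m at the midpoints x_{i+1/2}, as the symmetric discretization requires
    a = [1.0 / mass(-L + (i + 0.5) * h) for i in range(n)]

    def nodes(E):
        """Sign changes of the shooting solution: 0 below E0, >= 1 above it."""
        psi_prev, psi, count = 0.0, 1e-6, 0
        for i in range(1, n):
            # a_{i+1/2}(psi_{i+1}-psi_i) = a_{i-1/2}(psi_i-psi_{i-1})
            #                              - 2 h^2 (E - V_i) psi_i
            psi_next = psi + (a[i - 1] * (psi - psi_prev)
                              - 2.0 * h * h * (E - V(x[i])) * psi) / a[i]
            if psi_next * psi < 0.0:
                count += 1
            psi_prev, psi = psi, psi_next
        return count

    for _ in range(60):          # bisect on the node count
        e_mid = 0.5 * (e_lo + e_hi)
        if nodes(e_mid) == 0:
            e_lo = e_mid         # still below the ground state
        else:
            e_hi = e_mid         # at least one node: above the ground state
    return 0.5 * (e_lo + e_hi)

# Constant mass recovers the oscillator ground energy E0 = 0.5; an
# illustrative m(x) <= m0 raises it, consistent with mass ordering.
e_const = ground_energy(lambda x: 1.0, lambda x: 0.5 * x * x)
e_pdm = ground_energy(lambda x: 1.0 / (1.0 + 0.1 * x * x),
                      lambda x: 0.5 * x * x)
```

The node-counting step is what makes the bisection robust: below the ground energy the shooting solution never changes sign, above it the sign flip enters from the right boundary.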
From Figure 2, it is evident that for the excited state the discrepancy among the energies evaluated using the different PDM Hamiltonians is less pronounced. However, we call attention to a serious
difference between the ground and excited states. As seen in Figure 2, the ground-state energies are ordered as whereas the energies of the fifth excited state (and of the states with ) are in
inverse order. This inversion can be easily understood in conjunction with Theorem 3.2. It is known that the wave functions of highly excited states are spread to larger distances. Consequently, with
increasing , the mean value of grows and eventually reaches the point where the sign of in Theorem 3.2 reverses, thus inverting the order of energies.
5. Summary
In this paper, we have established the comparison theorems for the PDM Schrödinger equation. Our first theorem states that the corresponding eigenvalues of a constant-mass Hamiltonian and of a
BenDaniel-Duke PDM Hamiltonian with the same potential are ordered if the constant and position-dependent masses are ordered everywhere. The second theorem concerns PDM Hamiltonians with the
different sets of ambiguity parameters: the BenDaniel-Duke, Li-Kuhn, and Gora-Williams Hamiltonians. It is proved that their corresponding eigenvalues are ordered if the Laplacian of the inverse mass
distribution has a definite sign.
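One way to see why pointwise mass ordering forces this eigenvalue ordering for the BenDaniel-Duke operator (a back-of-envelope min-max argument added here for illustration; it is not claimed to be the paper's proof):

```latex
% Courant-Fischer characterization of the n-th eigenvalue (\hbar = 1):
E_n[m] \;=\; \min_{\dim S = n+1}\; \max_{\substack{\psi \in S \\ \|\psi\| = 1}}
\int \left[ \frac{1}{2\,m(\mathbf{r})}\,\lvert\nabla\psi\rvert^{2}
      + V(\mathbf{r})\,\lvert\psi\rvert^{2} \right] d\mathbf{r}.
% If m(\mathbf{r}) \le m_0 everywhere, then 1/(2m(\mathbf{r})) \ge 1/(2m_0)
% pointwise, so the bracketed functional is at least as large for every
% trial \psi, and therefore E_n[m] \ge E_n[m_0] for every n.
```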
We have applied these theorems to the PDM Coulomb and harmonic-oscillator problems and have been led to the following conclusions. First, the eigenvalues of PDM Hamiltonians other than the
BenDaniel-Duke one do not have to be in strict order with respect to the eigenvalues of the constant-mass Hamiltonian. For instance, from both Figures 1 and 2, it is seen that the order of the
Gora-Williams and constant-mass ground-state energies does vary, depending on the value of the deforming parameter . Second, if the quantity has no definite sign and thus Theorem 3.2 does not apply,
then the order of the energies calculated using different PDM Hamiltonians may alternate, as seen by comparing parts (a) and (b) of Figure 2. We therefore think that for establishing further
comparison rules within the PDM framework one should restrict the potential profile to, for example, a spherically symmetric case, the way the generalized comparison theorems for the ordinary
Schrödinger equation have been obtained [27].
The comparison rules we have found out can be employed for analyzing the energy spectra in semiconductor nanodevices; an example of application to the quantum well system was sketched in the previous
section. In this connection, it is worthwhile to extend the present approach to periodic heterostructures, which allow the direct fit of PDM binding energies to experiment [25]. Then, we will have to
abandon the requirement of vanishing of the wave function at infinity, on which the proof of our theorems relies. What comparison rules might be formulated in that case is an interesting open question.
Acknowledgments
The author thanks Dr. O. Yu. Orlyansky for discussions and a careful reading of the paper. The research was supported by Grant N0109U000124 from the Ministry of Education and Science of Ukraine, which
is gratefully acknowledged.
References
1. G. Bastard, Wave Mechanics Applied to Semiconductor Heterostructures, Editions de Physique, Les Ulis, France, 1988.
2. K. Young, "Position-dependent effective mass for inhomogeneous semiconductors," Physical Review B, vol. 39, no. 18, pp. 13434–13441, 1989.
3. G. L. Herling and M. L. Rustgi, "Spatially dependent effective mass and optical properties in finite parabolic quantum wells," Journal of Applied Physics, vol. 71, no. 2, pp. 796–799, 1992.
4. A. J. Peter and K. Navaneethakrishnan, "Effects of position-dependent effective mass and dielectric function of a hydrogenic donor in a quantum dot," Physica E, vol. 40, no. 8, pp. 2747–2751, 2008.
5. R. Khordad, "Effects of position-dependent effective mass of a hydrogenic donor impurity in a ridge quantum wire," Physica E, vol. 42, no. 5, pp. 1503–1508, 2010.
6. L. Dekar, L. Chetouani, and T. F. Hammann, "Wave function for smooth potential and mass step," Physical Review A, vol. 59, no. 1, pp. 107–112, 1999.
7. A. D. Alhaidari, "Solutions of the nonrelativistic wave equation with position-dependent effective mass," Physical Review A, vol. 66, article 042116, 7 pages, 2002.
8. S.-H. Dong and M. Lozada-Cassou, "Exact solutions of the Schrödinger equation with the position-dependent mass for a hard-core potential," Physics Letters A, vol. 337, no. 4–6, pp. 313–320, 2005.
9. B. Bagchi, A. Banerjee, C. Quesne, and V. M. Tkachuk, "Deformed shape invariance and exactly solvable Hamiltonians with position-dependent effective mass," Journal of Physics A: Mathematical and General, vol. 38, no. 13, pp. 2929–2945, 2005.
10. T. Tanaka, "N-fold supersymmetry in quantum systems with position-dependent mass," Journal of Physics A: Mathematical and General, vol. 39, no. 1, pp. 219–234, 2006.
11. A. Ganguly and L. M. Nieto, "Shape-invariant quantum Hamiltonian with position-dependent effective mass through second-order supersymmetry," Journal of Physics A: Mathematical and Theoretical, vol. 40, no. 26, pp. 7265–7281, 2007.
12. B. Midya, B. Roy, and R. Roychoudhury, "Position dependent mass Schrödinger equation and isospectral potentials: intertwining operator approach," Journal of Mathematical Physics, vol. 51, no. 2, article 022109, 2010.
13. C. Tezcan and R. Sever, "Exact solutions of the Schrödinger equation with position-dependent effective mass via general point canonical transformation," Journal of Mathematical Chemistry, vol. 42, no. 3, pp. 387–395, 2007.
14. R. A. Kraenkel and M. Senthilvelan, "On the solutions of the position-dependent effective mass Schrödinger equation of a nonlinear oscillator related with the isotonic oscillator," Journal of Physics A: Mathematical and Theoretical, vol. 42, no. 41, article 415303, 10 pages, 2009.
15. R. Koç and S. Sayın, "Remarks on the solution of the position-dependent mass Schrödinger equation," Journal of Physics A: Mathematical and Theoretical, vol. 43, no. 45, article 455203, 2010.
16. C. Quesne and V. M. Tkachuk, "Deformed algebras, position-dependent effective masses and curved spaces: an exactly solvable Coulomb problem," Journal of Physics A: Mathematical and General, vol. 37, no. 14, pp. 4267–4281, 2004.
17. O. von Roos, "Position-dependent effective masses in semiconductor theory," Physical Review B, vol. 27, no. 12, pp. 7547–7552, 1983.
18. D. J. BenDaniel and C. B. Duke, "Space-charge effects on electron tunneling," Physical Review, vol. 152, no. 2, pp. 683–692, 1966.
19. T. L. Li and K. J. Kuhn, "Band-offset ratio dependence on the effective-mass Hamiltonian based on a modified profile of the GaAs-AlxGa1-xAs quantum well," Physical Review B, vol. 47, no. 19, pp. 12760–12770, 1993.
20. T. Gora and F. Williams, "Theory of electronic states and transport in graded mixed semiconductors," Physical Review, vol. 177, no. 3, pp. 1179–1182, 1969.
21. R. A. Morrow and K. R. Brownstein, "Model effective-mass Hamiltonians for abrupt heterojunctions and the associated wave-function-matching conditions," Physical Review B, vol. 30, no. 2, pp. 678–680, 1984.
22. O. Mustafa and S. H. Mazharimousavi, "A singular position-dependent mass particle in an infinite potential well," Physics Letters A, vol. 373, no. 3, pp. 325–327, 2009.
23. A. de Souza Dutra and C. A. S. Almeida, "Exact solvability of potentials with spatially dependent effective masses," Physics Letters A, vol. 275, no. 1-2, pp. 25–30, 2000.
24. F. S. A. Cavalcante, R. N. Costa Filho, J. Ribeiro Filho, C. A. S. de Almeida, and V. N. Freire, "Form of the quantum kinetic-energy operator with spatially varying effective mass," Physical Review B, vol. 55, no. 3, pp. 1326–1328, 1997.
25. V. A. Smagley, M. Mojahedi, and M. Osinski, "Operator ordering of a position-dependent effective-mass Hamiltonian in semiconductor superlattices and quantum wells," in Proceedings of the SPIE, P. Blood, M. Osinski, and Y. Arakawa, Eds., vol. 4646, pp. 258–269, Academic, San Jose, Calif, USA, 2002.
26. R. L. Hall, "Refining the comparison theorem of quantum mechanics," Journal of Physics A: Mathematical and General, vol. 25, pp. 4459–4469, 1992.
27. R. L. Hall and Q. D. Katatbeh, "Generalized comparison theorems in quantum mechanics," Journal of Physics A: Mathematical and General, vol. 35, no. 41, pp. 8727–8742, 2002.
28. C. Semay, "General comparison theorem for eigenvalues of a certain class of Hamiltonians," Physical Review A, vol. 83, no. 2, article 024101, 2 pages, 2011.
29. R. P. Feynman, "Forces in molecules," Physical Review, vol. 56, no. 4, pp. 340–343, 1939.
30. H. Hellmann, Einführung in die Quantenchemie, Franz Deuticke, Leipzig, Germany, 1937.
31. R. L. Hall, "Relativistic comparison theorems," Physical Review A, vol. 81, no. 5, article 052101, 2010.
32. M. Reed and B. Simon, Methods of Modern Mathematical Physics IV: Analysis of Operators, Academic Press, New York, NY, USA, 1978.
Getting an answer despite exiting recursion
I wrote this Python code, which, according to Wolfram, is supposed to return the factorial of any positive value (I probably messed up somewhere), integer or not:
The problem is, say I input pi to 6 decimal places. 2*n will not become a float with 0 as its decimals any time soon, so the equation turns out to be
How would I stop the recursion and still get the answer?
I've had suggestions to add an index to the definitions or something, but the problem is, if the code stops when it reaches an index, there is still no answer to put back into the previous "nests" or
whatever you call them.
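One way to "stop the recursion and still get the answer" (an illustrative sketch, not the original poster's code, which isn't shown in this post): give the recursion a base case that fires for any n below 1, and seed the unwinding with the exact closed-form value from math.gamma, using n! = Γ(n + 1).

```python
import math

def fact(n):
    """Factorial of any positive real n via the recurrence n! = n * (n-1)!.

    For non-integer n the recurrence never lands exactly on 0, so instead
    of testing n == 0 we stop as soon as n drops below 1 and supply the
    'answer to put back into the previous nests' from the closed form
    Gamma(n + 1) for the remaining fractional part.
    """
    if n < 1:
        return math.gamma(n + 1)  # base case covers 0 <= n < 1 exactly
    return n * fact(n - 1)

print(fact(5))  # 120.0, as for the integer factorial
```

Because Γ satisfies the same recurrence Γ(n + 1) = n Γ(n), the partial product the recursion builds times the base-case value equals Γ(n + 1) for any positive input.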
See Also: "Analytic Functions" for information on syntax, semantics, and restrictions, including valid forms of expr.
NTILE is an analytic function. It divides an ordered data set into a number of buckets indicated by expr and assigns the appropriate bucket number to each row. The buckets are numbered 1 through expr
. The expr value must resolve to a positive constant for each partition. Oracle Database expects an integer, and if expr is a noninteger constant, then Oracle truncates the value to an integer. The
return value is NUMBER.
The number of rows in the buckets can differ by at most 1. The remainder values (the remainder of number of rows divided by buckets) are distributed one for each bucket, starting with bucket 1.
If expr is greater than the number of rows, then a number of buckets equal to the number of rows will be filled, and the remaining buckets will be empty.
You cannot nest analytic functions by using NTILE or any other analytic function for expr. However, you can use other built-in function expressions for expr.
The following example divides into 4 buckets the values in the salary column of the oe.employees table from Department 100. The salary column has 6 values in this department, so the two extra values
(the remainder of 6 / 4) are allocated to buckets 1 and 2, which therefore have one more value than buckets 3 or 4.
SELECT last_name, salary, NTILE(4) OVER (ORDER BY salary DESC) AS quartile
FROM employees
WHERE department_id = 100
ORDER BY last_name, salary, quartile;
LAST_NAME SALARY QUARTILE
------------------------- ---------- ----------
Chen 8200 2
Faviet 9000 1
Greenberg 12008 1
Popp 6900 4
Sciarra 7700 3
Urman 7800 2
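The distribution rule can also be written out procedurally. This is an illustrative Python sketch of the rule (not Oracle code): with 6 rows and 4 buckets it reproduces the bucket sizes of the example, where buckets 1 and 2 each receive two rows and buckets 3 and 4 receive one.

```python
def ntile(num_rows, buckets):
    """Assign bucket numbers (1..buckets) to num_rows ordered rows, the way
    NTILE does: bucket sizes differ by at most 1, and the num_rows % buckets
    leftover rows go to the lowest-numbered buckets. If buckets exceeds
    num_rows, the trailing buckets are simply left empty."""
    base, extra = divmod(num_rows, buckets)
    assignment = []
    for b in range(1, buckets + 1):
        size = base + (1 if b <= extra else 0)
        assignment.extend([b] * size)
    return assignment

print(ntile(6, 4))  # [1, 1, 2, 2, 3, 4]
```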
May 2nd 2010, 02:25 AM #1
Apr 2010
I've got a maths exam next week and I need to know how to work this question out for my revision, so please could you show me:
(a) In a set of dolls, the height of the middle doll is 9 cm.
What are the heights of the other dolls?
........... cm 9 cm .............. cm
Smallest Middle Tallest
(b) In another set of dolls, the height of the tallest doll is 9 cm.
What are the heights of the other dolls?
Show your working and give your answer to one decimal place.
............ cm ................ cm 9 cm
smallest middle tallest
Please could you show your working for all the questions so I understand it.
May 2nd 2010, 03:05 AM #2
Unless you actually include the ratio your questions cannot be answered.
5 posts tagged with numbertheory.
Displaying 1 through 5 of 5.
For a work assignment, I need to come by a conceptual understanding of modular forms that's light on jargon and, ahem, actual math. If such a thing is possible.
posted by Zerowensboring on Jan 18, 2013 - 12 answers
Is there a closed-form method for finding the unique subsets from the power set of prime factors of an integer?
[more inside] posted by Blazecock Pileon on Mar 27, 2010 - 9 answers
I'm an algebraist, and I need help with a difficult number theory problem involving Mersenne Numbers. Difficult Number Theorists, please help me!
[more inside] posted by CrunchyFrog on Sep 21, 2005 - 9 answers
This page
mentions, among other things, that "100 is the smallest square which is also the sum of 4 consecutive cubes." Obviously, this refers to the sum of 1^3 (1), 2^3 (8), 3^3 (27) and 4^3 (64). But it
seems to me that the sequence can be pushed back to start with 0^3 (0), in which case you can get four consecutive cubes adding up to 36, which is a square. Is zero then not considered a cube?
[more inside] posted by ubernostrum on Mar 2, 2005 - 9 answers
Maths Challenge
Challenge #04 - Training at the Academy
Your Answers ----------------------------------------
(25/10) Gekido Tokumei : the answer is 0
Shika : please refer to previous posting.
(14/10) Rekka Hanabishi : Shika, the number of nins left over is 1. First of all, for the tables to be similar, the number of opponents that each individual fights is the same for rounds A and B. If
we let the number of nins be n, the number of nins that each nin has to fight in each round is (n-1)/2. Since they have to be equal, n-1 must be an even number. Hence, the only possibilities are 1
and 3. Say A chooses to fight B; B will have no choice but to fight A. As a result, when you tabulate the tables, the number of cells will always be even. Considering the table, the number of
rows (number of nins) it has is n, and the number of columns (opponents) is (n-1)/2. Thus, the total number of cells is n(n-1)/2. Since it has to be even, (n-1)/2 has to be even (n-1 is even, n is odd,
odd x odd = odd, even x odd = even). Since (n-1)/2 is even, n cannot be 7, 11, 15...; hence, the remainder cannot be 3. So the number of nins left over is 1.
Shika : Your first assertion is questionable, that is why you didn't get it
(13/10) Spray M727 : I think that the answer is 0.
Shika : I would prefer some form of reasoning, but anyway your answer is wrong.
(10/10) E.S.M. : I've taken the liberty of pasting the original question so that this would be easier to comprehend. In your question you seem to have added lots of unnecessary information. Your real
question never states the original number of students. The only answerable question was: if an unknown number of students were divided into groups of four, then what are the possible numbers of
students left over? If you divide a number by four, the only remainders possible are zero, one, two, or three. If you had more than four you would be able to make another set, if you had exactly
four you would have another set, and since the number of students was not given, the only possible answers are 0, 1, 2, or 3.
- From the Irish Samurai, and the evil shadow monster who hides in the corner of people's rooms (or you could just call me E.S.M. for short)
Shika : Why would I ruin my own reputation by asking such an idiotic question.
(09/10) Tim Watson : If you're referring to Iruka's actual class, then there must be 21 students. Divide that by 4 and you get one student left out. My guess is 1. I have an equation, but it's too
troublesome to type out.
Shika : Maybe this is sometime in the future and maybe more people have enrolled, or maybe the students have kids who become students... anyway, wrong answer. Next!
(04/10) Atsushi : You can't figure out the answer. You aren't given the number of students in the first place. Unless you are assuming we know how many students are in Iruka's class, we can't answer
the problem.
Shika : Refer to the previous try.
(04/10) Jeff : Shikamaru-sama, I noticed one key component missing from your problem. You never give the size of Iruka Sensei's class, so we can't determine how many students would be left over,
because we don't know how many students there are to begin with. I will, however, continue to work on the problem, and I can't wait until you post the answer and a new question. Thanks, and YOU
TOTALLY ROCK! Shikamaru forever!!!
Shika : If I told you the number of students, then this would be a problem on long division which is beneath Shikamaru :P Thanks for your overwhelming support though :)
(03/10) Nara Shikamaru : Hi, this is my answer.
Suppose the number of students in the class is x. If Iruka can just relabel the students for the second round, it means that each student fights the same number of students in both the 1st and 2nd rounds.
Therefore a student fights (x-1)/2 students in each round.
Since (x-1) is divisible by 2, (x-1) is an even number.
Therefore x is an odd number.
If x is an odd number and is divided by 4, then the possible number of
students left over will be either 1 or 3.
Shika : you are wrong, but your approach is similar to the other guy's.
(02/10) Konoha Lau : the answer can be 1, 2, or 3, depending on your explanation. My explanation is that 0 is not included, because if it's 0 the total will be even, and if the total is even, the two
tables won't be identical.
If total = 41:
1 (a student) fights 20 (other students); next round he can fight 20 more students; the remainder when divided by 4 is 1.
If the remainder is 2, same concept: the total will be 42, 1 fights 21,
next round another 21 (identical to round A),
and 3 is the same... so... 1, 2, and 3 can all be correct, right?
Shika : any number divided by 4 would give you 1, 2, or 3
Mathematics and the environment
Public release date: 12-Oct-2010
Contact: Mike Breen/Annette Emerson
American Mathematical Society
Mathematics and the environment
Providence, RI---It was a mathematician, Joseph Fourier (1768-1830), who coined the term "greenhouse effect". That this term, so commonly used today to describe human effects on the global climate,
originated with a mathematician points to the insights that mathematics can offer into environmental problems. Three articles in the November 2010 issue of the Notices of the American Mathematical
Society examine ways in which mathematics can contribute to understanding environmental and ecological issues.
"Earthquakes and Weatherquakes: Mathematics and Climate Change", by Martin E. Walter (University of Colorado)
Data about earthquakes indicates that there are thousands of small earthquakes that do no damage, and there are just a few very strong earthquakes that do a great deal of damage. A striking fact
emerges from the data: Over a sufficiently long period of time, the sum of the "intensity" of all earthquakes of a given Richter scale magnitude is the same for any point on the Richter scale. So for
example the total intensity of the 100,000 magnitude-3 quakes that occur over the course of a year is the same as the intensity of a single magnitude-8 trembler. Put another way, there is no
preferred size or scale of earthquakes. This is an empirical fact that can be easily translated into mathematical terms, by noting that the data for earthquakes follows what is known as a power law.
The author uses the example of earthquakes to formulate a hypothesis about "weatherquakes"---extreme weather events like hurricanes and tornadoes. As in the case of earthquakes, he suggests, there is
no preferred size or scale for the intensity of weatherquakes. That is, weatherquake phenomena also follow a power law. Taking the mathematics a few steps further, the author examines what would
happen to the distribution of extreme weather events if the global climate heated up. The finding is worrisome: As temperatures rise, the most intense weatherquakes would increase in number.
"Environmental Problems, Uncertainty, and Mathematical Modeling", by John W. Boland, Jerzy A. Filar, and Phil G. Howlett (all three authors affiliated with the Institute for Sustainable Systems and
Technologies at the University of South Australia)
This article examines some special characteristics shared by many models of environmental phenomena: 1) the relevant variables (e.g., levels of persistent contamination in a lake) are not known
precisely but evolve over time with some degree of randomness; 2) both the short-term behavior (day-by-day interaction of toxins in the lake) and longer-term behavior (cumulative effects of repeated
winter freezes) are important; and 3) the system is subject to outside influences from human behavior, such as industrial pollution and environmental regulations. Concerning the latter
characteristic, the article discusses ideas from a branch of mathematics called control theory, which studies how systems are affected when they are strategically influenced from the outside.
Interventions for environmental problems can influence ecological systems dramatically but are often neglected in development planning. Control theory offers methods for determining an appropriate
level of intervention and for evaluating its effects. One example from the article looks at the use of solar panels to run a desalination plant. A model using ideas from control theory can guide
optimal use of the plant in the sense of maximizing the expected volume of fresh water produced.
"The Mathematics of Animal Behavior: An Interdisciplinary Dialogue", by Shandelle M. Henson and James L. Hayward (both authors at Andrews University, Michigan)
The two authors, one an applied mathematician and the other a biologist, teamed up to model aspects of gull behavior in a wildlife preserve in Washington state. The article is structured in an
unusual way, as a sort of conversation between the two researchers describing their work together. Before the two began collaborating, the biologist collected reams of data on gull behavior; his
biology colleagues teased him, "Don't you know how to sample?" But the applied mathematician was delighted to have such complete data. She and the biologist constructed a model representing a group
of gulls as they "loaf". For gulls the term "loafing" refers to a collection of behaviors---such as sleeping, sitting, standing, resting, preening, and defecating---during which the birds are
immobile. Loafing is of practical importance because it often conflicts with human interests. The model constructed by Henson and Hayward fit beautifully with the data and also produced predictions
about how the number of birds loafing in a given location changed over time. For example, the loafing model correctly predicted that the lowest numbers of gulls would occur at high tide on days
corresponding to tidal nodes. This is contrary to previously published assertions, based on data averaging, that the lowest numbers occur near low tide. Their work also showed that it is not always
necessary to base models of animal group dynamics on behavior of the individual animals. As Henson puts it, "You wouldn't use quantum models to study the classical dynamics of a falling apple."
Similarly, you don't always need to use a collection of individual-based simulations to study the dynamics of a group behavior.
Advance copies of these articles are available to reporters at the following web sites:
On October 12, 2010, they will be publicly posted on the Notices web site, http://www.ams.org/notices.
For specific questions, please contact the authors of the articles. General inquiries may be directed to:
Mike Breen and Annette Emerson
AMS Public Awareness Office
Email: paoffice@ams.org
Telephone: 401-455-4000
Founded in 1888 to further mathematical research and scholarship, the more than 30,000-member American Mathematical Society fulfills its mission through programs and services that promote
mathematical research and its uses, strengthen mathematical education, and foster awareness and appreciation of mathematics and its connections to other disciplines and to everyday life.
American Mathematical Society
201 Charles Street
Providence, RI 02904
The Scientific World Journal
Volume 2013 (2013), Article ID 236404, 7 pages
Research Article
A Robust Variable Sampling Time BLDC Motor Control Design Based upon μ-Synthesis
^1Department of Electrical Engineering, National Yunlin University of Science and Technology, Yunlin, Douliou 64002, Taiwan
^2Department of Mechanical Engineering, National Taiwan University, Taipei 10617, Taiwan
Received 13 August 2013; Accepted 1 October 2013
Academic Editors: D. Dong and A. S. Elwakil
Copyright © 2013 Chung-Wen Hung and Jia-Yush Yen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
The variable sampling rate system is encountered in many applications. When the speed information is derived from the position marks along the trajectory, one would have a speed dependent sampling
rate system. The conventional fixed or multisampling rate system theory may not work in these cases because the system dynamics include the uncertainties which resulted from the variable sampling
rate. This paper derives a convenient expression for the speed-dependent sampling rate system. The varying sampling rate effect is then translated into multiplicative uncertainties on the system. The design then uses the popular μ-synthesis process to achieve a robust performance controller design. The implementation on a BLDC motor demonstrates the effectiveness of the design approach.
1. Introduction
Brushless DC (BLDC) motors are becoming popular due to their efficient operation and high power density. Because of their high efficiency and easy maintenance, their applications range from large systems such as electric vehicles to small systems such as computer peripherals. A BLDC motor usually uses hall-effect sensors to determine the commutation timing among the armature windings, and the same hall sensor signals are also used to measure motor speed. Many commercial industrial driver ICs adopt this kind of design.
BLDC motors may be a standard application, but engineers still encounter many difficulties when implementing precision speed control. As mentioned before, the typical BLDC motor feedback signal comes from the few hall-effect sensor signals originally designed for commutation purposes. The time interval between consecutive feedback signals depends on the motor speed. Notice that BLDC motors are used in many high performance industrial applications such as computer hard disk drive and optical drive servosystems, so a high performance controller design is necessary. The BLDC motor system maintains high resolution speed measurement through a high frequency clock. The traditional driver design uses a current transducer to achieve constant-rate current sampling for the acceleration loop; however, the velocity loop has to be based on the speed-dependent sampling, and some sort of robust variable sampling rate controller design is thus desirable.
Most previous research on multirate sampling systems has focused on multiple but fixed sampling rate problems [1–6]. If one examines the system at the slow sampling rate, multirate control offers improved performance. Some authors have also applied multirate control to achieve smoother system responses, since the control can be updated more frequently than the measurements [7, 8]. Moore et al. [9] summarized these control strategies into an N-delay input/output control, and successful implementations have been demonstrated [10]. Even though the intervals in this approach need not be uniform, the result still requires a fixed "N" step delay. These studies did not address variable sampling systems; the unpredictable sampling period imposes a barrier on the theoretical development. In 1993, Hori published an interesting result [11], in which he considered a pure integrator system and was able to reduce the speed-position relation to a (time) invariant system (thus guaranteeing error convergence). This result is only valid for a pure integrator, and there is still a long way to a more complete result. However, an accurate system model can be derived, and the controller should be able to draw information from it. The studies in [12, 13] confirm that such approaches may be beneficial. References [14, 15] discussed variable sampling caused by network-induced variable delay and developed controllers, but the delay is restricted to a bounded interval. Reference [16] implemented a neural network to compensate the delay caused by networked control systems. References [17, 18] used fuzzy control to handle variable sampling systems.
In preceding work [19], the authors derived the variable sampling frequency system model for observer design and used singular value assignment to guarantee convergence of the observation error. A model-based compensator is designed and then forms the control basis. The approach is basically systematic, and because the BLDC motor system is of low order in general, the singular value assignment procedure does not run into much difficulty. However, the singular value assignment process is inherently very conservative; in our experience it does not always provide robust observation, and it gives no access to the stability of the feedback system. References [20, 21] proposed a series of methods to simulate the variable sampling system and cheaply predict the results without hardware experiments. Reference [22] designed an adaptive antiwindup PID sliding mode controller for an EHA system; the experiments showed that the controller could reject hardware saturation, load disturbance, and lumped system uncertainties and nonlinearities, but it did not discuss variable sampling. In [23], the authors improved discrete-time variable structure control and replaced the fixed sampling time with a variable sampling time. It successfully provided stable and fast responses; however, the controller consumed too much MCU calculation resource and was finally implemented with a look-up table.
In this paper, the authors realized that the variable sampling rate system model derived for the observer can be reduced to describe the variable sampling rate system itself. Instead of treating the system with a prespecified sampling rate, one can look at the system only when the feedback is available and lump the effects of the variable sampling time into system uncertainties. It is then possible to describe the system with the standard linear fractional transformation, and μ-synthesis provides a necessary and sufficient condition for the controller design. The simulation and the experimental results demonstrate that the method is very satisfactory.
2. Problem Formulation
Consider two time scales: the underlying system sampling period, denoted by $T$, and the measurement-update instants, denoted by $t_i$. As mentioned above, the sampling system therefore has a variable sampling rate. Figure 1 illustrates the relation between the two time frames.
If the basic sampling time for the system is $T$, the system can be described as $x(k+1) = A x(k) + B u(k)$, where $A = e^{A_c T}$ and $B = \int_0^T e^{A_c \tau}\,d\tau\, B_c$; here $A_c$ is the continuous-time system matrix, $B_c$ is the continuous-time input matrix, $n$ is the system order, and the input $u$ is the same in the discrete-time system as in the continuous-time system.
For the design purpose, assume that the system model is precisely known, and the manipulated input is known at every instance. This is a valid assumption because the system model can be measured with
various system identification processes, and the control input is from the known controller.
Let $t_i$ stand for the instant when the $i$th measurement is available, and let $m_i$ stand for the number of fast samples from instant $t_{i-1}$ to instant $t_i$. The control law is updated only when the measurement is available; therefore the system description becomes $x(t_{i-1}+T) = A x(t_{i-1}) + B u(t_{i-1})$, $x(t_{i-1}+2T) = A^2 x(t_{i-1}) + (A + I) B u(t_{i-1})$, and so forth.
Thus, for every measurement instant one can derive $x(t_i) = A^{m_i} x(t_{i-1}) + (A^{m_i-1} + \cdots + A + I) B u(t_{i-1})$; considering the $(i+1)$th measurement instance gives the analogous expression with $m_{i+1}$. These equations describe the variable sampling rate system of interest. It is interesting to note that the input matrix takes the very neat form $(A^{m_i-1} + \cdots + A + I) B$. One can treat the powers of $A$ as an uncertainty term. The system block diagram representation is shown in Figure 2.
Note again that $m_i$ is the number of underlying samples between two consecutive feedback measurements. This is basically an unknown value depending on when the next feedback measurement becomes available. In the case of a BLDC motor driver, the number of samples between measurements depends on the speed of the motor. When the motor is still, $m_i$ may be infinite, but the number should be small when the motor is running at high speed.
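The lifted description above, with the state propagated by a power of the discrete system matrix and the input entering through a geometric-series term, can be sketched numerically. This is a minimal illustration with a made-up stable second-order system, not the identified motor model:

```python
import numpy as np

def lifted_matrices(A, B, m):
    """Propagation over m fast samples with the input held constant:
    x_next = A^m x + (A^(m-1) + ... + A + I) B u."""
    Am = np.linalg.matrix_power(A, m)
    S = sum(np.linalg.matrix_power(A, j) for j in range(m))  # I + A + ... + A^(m-1)
    return Am, S @ B

# Illustrative second-order system (assumed values)
A = np.array([[1.0, 0.01],
              [0.0, 0.98]])
B = np.array([[0.0],
              [0.01]])

Am, Bm = lifted_matrices(A, B, 5)
```

Stepping the one-sample recursion five times with a constant input reproduces `Am @ x + Bm * u`, which is one way to check the lifted form.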
3. Controller Design
The system in Figure 2 is treated as a nominal system under the influence of plant uncertainties, with one uncertainty term influencing $A^{m_i}$ and the other influencing the lifted input matrix. As discussed before, the system matrix $A$ is completely known; however, $m_i$ cannot be determined before the next sample measurement is available. Notice that the uncertainty matrix at this stage is still arbitrary; the authors have not yet discovered any way to further decouple the powers of $A$. Therefore, for the moment it seems best to treat this variation as a lumped uncertainty. The uncertainty setup is still clearly structured, for the two terms each affect a different part of the system. The structured singular value control synthesis is therefore the best suited design approach to arrive at a less conservative design.
To facilitate the μ-synthesis framework, the system uncertainties are represented by their magnitudes:
The control system can be represented with the familiar linear fractional transformation (LFT) as in Figure 3.
The two uncertainty blocks affect the constant matrices $A$ and $B$, respectively; therefore, there is no need to consider the number of right half plane poles in these cases. Two scalar constants represent the magnitudes of the corresponding maximum singular values. Even though the hypothesis allows complete access to $A$ and its powers, the maximum singular values still need to be calculated for each admissible $m_i$ to determine the worst case; the same is true for the lifted input matrix, and the calculation has to be carried out for every $m_i$. Notice that the structure and the variation of the uncertainties are quite certain in this case: one needs only step by step multiplication to enumerate all the powers of $A$, the possible uncertainties, and the worst case uncertainty. The performance weighting in Figure 3 allows a reasonable convergence criterion for the synthesis procedure, and it also provides access to the system performance specifications. The setup can then be programmed with the MATLAB toolbox for control synthesis.
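The worst-case search described in this section, stepping through every admissible sample count and recording the largest singular value of the deviation from a nominal lifted model, can be sketched as follows. The matrices, the nominal count, and the sample-count range are illustrative assumptions, not the identified motor data:

```python
import numpy as np

# Illustrative system (assumed values, not the identified motor model)
A = np.array([[1.0, 0.01],
              [0.0, 0.98]])
B = np.array([[0.0],
              [0.01]])

def sum_powers(A, m):
    """I + A + ... + A^(m-1), the lifted input-matrix factor."""
    return sum(np.linalg.matrix_power(A, j) for j in range(m))

m_nominal = 30  # assumed nominal sample count between measurements
A_nom = np.linalg.matrix_power(A, m_nominal)
B_nom = sum_powers(A, m_nominal) @ B

# Largest singular value (matrix 2-norm) of the deviation over all admissible m
m_range = range(11, 67)
worst_dA = max(np.linalg.norm(np.linalg.matrix_power(A, m) - A_nom, 2) for m in m_range)
worst_dB = max(np.linalg.norm(sum_powers(A, m) @ B - B_nom, 2) for m in m_range)
```

`worst_dA` and `worst_dB` would then bound the magnitudes of the two uncertainty blocks in the LFT.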
4. System Description
This experiment uses a BLDC motor from Troy Co., Taiwan, and the TMS320F243 from Texas Instruments for the controller implementation. The driver uses a simple protection circuit to read the hall sensor signals and determine the rotor angle and the rotor speed. Another protection circuit is used to drive the six MOSFET switches for the three-phase winding. The controller determines the proper on-off sequence and the controlled PWM duty cycle for the MOSFETs to achieve the control purpose. The schematics of the control circuit and the physical setup are shown in Figure 4.
The capture unit built into the TMS320F243 is used to detect the hall sensor signals and determine the rotor angle. The proposed driver also uses these signals to determine the proper MOSFET to turn on. There are shunt resistors for constant rate sampling of the phase currents, but the motor speed measurement still depends on the hall sensor signals. The time interval between two consecutive hall sensor signals is used to determine the rotor speed, so the sampling rate of the feedback system is speed dependent. In this experiment, the underlying sampling frequency for the TMS320F243 is set at 4 kHz, and the PWM module built into the TMS320F243 generates a PWM waveform whose carrier is also at 4 kHz. In addition, there are 12 hall sensor signal updates per revolution for a 4-pole permanent magnet arrangement; therefore there would be about 66 samples between measurements when the motor runs at 300 rpm.
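The samples-per-measurement figure follows from simple arithmetic: 12 hall updates per revolution give a measurement frequency of 12·rpm/60 Hz, and dividing the 4 kHz underlying rate by that yields the sample count. A quick sketch (function and parameter names are illustrative):

```python
def samples_between_measurements(rpm, updates_per_rev=12, f_fast_hz=4000.0):
    """Underlying sampling periods elapsed between consecutive hall updates."""
    f_meas_hz = updates_per_rev * rpm / 60.0  # measurement frequency in Hz
    return f_fast_hz / f_meas_hz

m_slow = samples_between_measurements(300)  # roughly 66 samples at the slow end
```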
Figure 5 shows the frequency spectrum of the open-loop system response. Because the measurement timing is speed dependent, the horizontal axis of the figure is a modified frequency unit based on the samples. The signal contains high frequency harmonics at multiples of twelve. The noise comes from the mechanical tolerance of the three hall sensors' positions: the misalignment of the poles in the permanent magnet rotor and the hall sensors induces measurement harmonics with a period of 12 samples. Figure 5 also shows the spectrum response (dotted line) of a filter that eliminates signals with a period of 12 samples. The filtered signal spectrum (dashed line) indicates that the filter is effective in attenuating the periodic noise.
The filtered signal can serve as the basis for system identification and for the control feedback. The identified system transfer function is
Again, there are 12 measurement updates per revolution. For the motor operating at 300~3000 rpm, the sampling frequency would range over 60~600 Hz. The sampling frequency for the underlying control is 4 kHz. Therefore the number of sampling periods, $m_i$, between feedback measurements would range from 11 to 66. It is then necessary to calculate the singular values of the powers of $A$ and the corresponding lifted input matrices for $m_i$ = 11~66 to obtain the uncertainty magnitude expressions.
The experiment then uses the MATLAB μ-synthesis toolbox for the control synthesis. The simulation also uses the bilinear transformation to convert the discrete-time system to a continuous-time simulation environment. The performance bound is set to the transfer function in Figure 6 for small control error in the low frequency region over the range of sampling rates, to ensure similar performance over a definite range of operating frequencies. It is also interesting to note that the performance bound helps the control synthesis process to converge: the authors had a hard time making the μ value come below 1 before the bound was imposed, but the iteration procedure becomes fairly easy with the bound in place.
The synthesis resulted in a 6th order controller with the following form:
Figure 7 shows the theoretical responses of the synthesized controller under speed commands from 300 rpm to 3000 rpm. From the performance setting, the system should have similar responses over the specified speed range. Notice that this is a variable sampling rate system; therefore the numbers on the time axis are translated from the underlying sampling frequency. One can observe similar responses from the system even though the different speed settings actually change the system behavior.
In the actual system the controller calculates a PWM duty cycle, so inherently there is no negative command. There is separate logic for reversed rotation, but it will not be discussed here. The lack of a negative command results in slower recovery from overshoot. The MOSFET switches also impose limits on the output current. Figure 8 shows the simulation responses: the traces show the responses to 300, 1000, 2000, and 3000 rpm commands. Due to saturation and the nonnegative control output, the responses are slower and the recovery from overshoot is comparatively slow. However, the tracking performances become similar once the responses come close to the set points.
5. Controller Implementation
The TMS320F243 is a 16-bit fixed-point digital signal processor, so there are special considerations when implementing a high order control algorithm. The compensator is first decomposed into the partial fraction expansion shown in Figure 13, where one of the second order terms has been omitted because it has a gain close to zero.
The first two terms in Figure 13 use Q15 and Q14 formats, respectively, to avoid truncation error. The direct gain term is represented in the less detailed Q13 format because there is no recursive calculation. The inverter determines the new set of MOSFET switches to turn off and turn on, and the controller computes the speed measurement and updates the control law. The new control duty cycle takes effect when the counter enters the counting for the next sampling period. Figure 9 illustrates the working process of the firmware.
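The Q-format arithmetic mentioned above can be illustrated with a minimal Q15 sketch: a value in [-1, 1) is scaled by 2^15 and stored as a 16-bit integer. The helper names are illustrative, not the firmware's actual routines:

```python
def to_q15(x):
    """Encode a real value in [-1, 1) as a Q15 integer (scale 2**15)."""
    return max(-32768, min(32767, int(round(x * 32768))))

def from_q15(q):
    """Decode a Q15 integer back to a real value."""
    return q / 32768.0

def q15_mul(a, b):
    """Multiply two Q15 numbers; shift right by 15 to renormalize."""
    return (a * b) >> 15

half = to_q15(0.5)             # 16384
quarter = q15_mul(half, half)  # 8192, i.e. 0.25 in Q15
```

Each multiply discards the low product bits, which is why recursive filter sections are kept in the finer Q15/Q14 formats and only the non-recursive direct gain uses the coarser Q13 format.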
6. Experimental Results
Figure 10 shows the results of the control when the reference speed varies from 1000 to 3000 rpm. The experiments only carry the responses down to 1000 rpm because negative control efforts start appearing, and the PWM setup has difficulty representing negative efforts. From Figure 10, one observes that the controller achieves similar system responses when the speed commands are around the high speed range. The slow rise times in the initial transient result from driver saturation; this effect is included in the simulation but not in the controller synthesis. The responses grow oscillatory as the speed command is reduced to close to 1000 rpm (near the limit of the uncertainty assumptions). A closer look at the responses (Figure 11) shows that the controller achieves steady state error within 1% over the range of speed commands. The speed variation, which is also very important in precision applications, remains within ±0.3%; this is considered very accurate servo performance in precision engineering applications. As mentioned before, the PWM drive signal does not reflect negative control efforts. Figure 12 illustrates the effect of nonnegative control efforts and also the effect of control saturation. It is easy to see that the control saturation significantly prolongs the system response time. The nonnegative control output not only prolongs settling time but also adds to the oscillation. As mentioned before, this effect reduced the performance of the proposed controller in low speed operation. However, the experiments still demonstrate the effectiveness of the control over the specified speed range.
7. Conclusion
This paper proposed a controller design for systems with variable sampling rate; by variable sampling rate, the authors mean a system with an undetermined or speed dependent sampling frequency. The paper first presented the system modeling for the variable sampling rate system. The changing sampling rate was then translated into system uncertainties. The uncertainties appear in different locations in the system with a fixed structure; therefore, the popular μ-synthesis procedure is introduced to calculate the controller. The experimental verification is based on the TMS320F243 digital signal processor. The experimental results over different speed settings agree with the original design specifications and demonstrate the effectiveness of the proposed method.
This project is sponsored in part by the National Science Council, Taiwan, under Contract no. NSC 102-2221-E-224-028.
1. H. M. Al-Rahmani and G. F. Franklin, "Multirate control: a new approach," Automatica, vol. 28, no. 1, pp. 35–44, 1992.
2. H. In and C. Zhang, "A multirate digital controller for model matching," Automatica, vol. 30, no. 6, pp. 1043–1050, 1994.
3. P. Colaneri and G. De Nicolao, "Multirate LQG control of continuous-time stochastic systems," Automatica, vol. 31, no. 4, pp. 591–596, 1995.
4. M. J. Er and B. D. O. Anderson, "Design of reduced-order multirate output linear functional observer-based compensators," Automatica, vol. 31, no. 2, pp. 237–242, 1995.
5. M. De La Sen, "The reachability and observability of hybrid multirate sampling linear systems," Computers and Mathematics with Applications, vol. 31, no. 1, pp. 109–122, 1996.
6. K. G. Arvanitis and G. Kalogeropoulos, "Stability robustness of LQ optimal regulators based on multirate sampling of plant output," Journal of Optimization Theory and Applications, vol. 97, no. 2, pp. 299–337, 1998.
7. T. Hara and M. Tomizuka, "Multi-rate controller for hard disk drive with redesign of state estimator," in Proceedings of the American Control Conference, pp. 3033–3037, Philadelphia, Pa, USA, 1998.
8. S.-E. Baek and S.-H. Lee, "Design of a multi-rate estimator and its application to a disk drive servo system," in Proceedings of the American Control Conference (ACC '99), pp. 3640–3644, June 1999.
9. K. L. Moore, S. P. Bhattacharyya, and M. Dahleh, "Capabilities and limitations of multirate control schemes," Automatica, vol. 29, no. 4, pp. 941–951, 1993.
10. H. Fujimoto, A. Kawamura, and M. Tomizuka, "Generalized digital redesign method for linear feedback system based on N-delay control," IEEE/ASME Transactions on Mechatronics, vol. 4, no. 2, pp. 101–109, 1999.
11. Y. Hori, "Robust and adaptive control of a servomotor using low precision shaft encoder," in Proceedings of the 19th International Conference on Industrial Electronics, Control and Instrumentation, pp. 73–78, November 1993.
12. A. M. Phillips, Multirate and variable-rate estimation and control of systems with limited measurements with applications to information storage devices [Ph.D. thesis], Department of Mechanical Engineering, University of California, Berkeley, Calif, USA, 1995.
13. A. M. Phillips and M. Tomizuka, "Multirate estimation and control under time-varying data sampling with applications to information storage devices," in Proceedings of the American Control Conference, pp. 4151–4155, June 1995.
14. Y. Xue and K. Liu, "Controller design for variable-sampling networked control systems with dynamic output feedback," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 6387–6390, June 2008.
15. Y. Xue and K. Liu, "Analysis of variable-sampling networked control system based on neural network prediction," in Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR '07), pp. 772–777, November 2007.
16. A. Antunes, F. M. Dias, and A. Mota, "A neural network delay compensator for networked control systems," in Proceedings of the 13th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA '08), pp. 1271–1276, September 2008.
17. H. Gao and T. Chen, "Stabilization of nonlinear systems under variable sampling: a fuzzy control approach," IEEE Transactions on Fuzzy Systems, vol. 15, no. 5, pp. 972–983, 2007.
18. D. W. Kim, H. J. Lee, and M. Tomizuka, "Fuzzy stabilization of nonlinear systems under sampled-data feedback: an exact discrete-time model approach," IEEE Transactions on Fuzzy Systems, vol. 18, no. 2, pp. 251–260, 2010.
19. J.-Y. Yen, Y.-L. Chen, and M. Tomizuka, "Variable sampling rate controller design for brushless DC motor," in Proceedings of the 41st IEEE Conference on Decision and Control, pp. 462–467, Las Vegas, Nev, USA, December 2002.
20. C.-W. Hung, C.-T. Lin, and C.-W. Liu, "An efficient simulation technique for the variable sampling effect of BLDC motor applications," in Proceedings of the 33rd Annual Conference of the IEEE Industrial Electronics Society (IECON '07), pp. 1175–1179, Taipei, Taiwan, November 2007.
21. C.-W. Hung, C.-T. Lin, and J.-H. Chen, "A variable sampling PI control with variable sampling torque compensation for BLDC motors," in Proceedings of the 5th IEEE Conference on Industrial Electronics and Applications (ICIEA '10), pp. 1234–1237, June 2010.
22. J. M. Lee, S. H. Park, and J. S. Kim, "Design and experimental evaluation of a robust position controller for an electro hydrostatic actuator using adaptive antiwindup sliding mode scheme," The Scientific World Journal, vol. 2013, Article ID 590708, 16 pages, 2013.
23. C.-W. Hung, C.-T. Lin, C.-W. Liu, and J.-Y. Yen, "A variable-sampling controller for brushless DC motor drives with low-resolution position sensors," IEEE Transactions on Industrial Electronics, vol. 54, no. 5, pp. 2846–2852, 2007.
[SOLVED] Revenue Equation help
February 10th 2008, 08:21 PM #1
Space Comet
Please help!!! Here is the problem:
The profit p (in thousands of dollars) on x thousand units of a specialty item is p = 0.8x - 15.5. The cost c of manufacturing x items is given by c = 0.6x + 15.5 .
Find an equation that gives the revenue r from selling x items.
How many items must be sold for the company to break even (i.e., for revenue to equal cost)?
Round to the nearest integer.
Profit is revenue minus the cost, so:
$p = r - c$
Therefore, to find the revenue we have:
$r = p + c$
So we add the two equations:
$(0.8x - 15.5) + (0.6x + 15.5) = 1.4x$
Therefore, $r = 1.4x$
To find when you will break even, simply set the two equations equal to each other:
$1.4x = 0.6x + 15.5$
To get:
$0.8x = 15.5$
Therefore, $x = 19.375$
Note that another way of finding when you will break even is to set the profit equal to 0:
$0.8x - 15.5 = 0$
Which gives you the same result:
$x = 19.375$
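A quick numeric check of the algebra above (Python used just to verify the arithmetic):

```python
def profit(x):   return 0.8 * x - 15.5        # thousands of dollars
def cost(x):     return 0.6 * x + 15.5
def revenue(x):  return profit(x) + cost(x)   # r = p + c = 1.4x

x_break = 15.5 / 0.8   # solve 0.8x - 15.5 = 0 for the break-even point
```

Revenue equals cost at x = 19.375, i.e. about 19 thousand items after rounding to the nearest integer.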
Meeting Details
For more information about this meeting, contact Leonid Berlyand, Mark Levi, Alexei Novikov.
Title: Equations with random coefficients: Convergence to deterministic or stochastic limits and theory of correctors.
Seminar: Applied Analysis Seminar
Speaker: Guillaume Bal, Columbia University
Equations with small scale structures abound in applied sciences. Such structures often cannot be modeled at the microscopic level and thus require that one understand their macroscopic influence. I
will consider the situation of partial differential equations with random, highly oscillatory, potentials. One is then interested in the behavior of the solutions to that equation as the frequency of
oscillations in the micro-structure tends to infinity. Depending on spatial dimension and the decorrelation properties of the random potential, I will show that the limit is the solution to either a
deterministic, homogenized (effective medium) equation or a stochastic equation with multiplicative noise. More precisely, there is a critical spatial dimension above which we observe convergence to
a deterministic solution and below which we observe convergence to a stochastic solution. In the former case, a theory of correctors to homogenization allows one to asymptotically capture the
randomness in the solution to the equation with the small scale structure. Once properly rescaled, this corrector is shown to solve a stochastic equation with additive noise.
Room Reservation Information
Room Number: MB106
Date: 09 / 14 / 2010
Time: 04:00pm - 04:55pm
Homework Help
Posted by micheal on Tuesday, June 16, 2009 at 12:18am.
Review example 2 How does the author determine what the?
Example 2. Translate. The first row of the table and the fourth sentence of the
problem tell us that a total of 63 pupae was received. Thus we have one
equation: p+q=63
Since each pupa of morpho granadensis costs $4.15 and p pupae were
received, 4.15p is the cost for the morpho granadensis species. Similarly,
1.50q is the cost of the battus polydamus species. From the third row of
the table and the information in the statement of the problem, we get a
second equation: 4.15p+1.50q=147.50
We can multiply by 100 on both sides of this equation in order to clear
the decimals. This gives us the following system of equations as a
translation: p+q=63, (1)
415p+150q=14,750. (2)
Solve. We decide to use the elimination method to solve the system.
We eliminate q by multiplying equation (1) by -150 and adding it to
equation (2): -150p - 150q = -9450 Multiplying equation (1) by -150
415p + 150q = 14,750
265p = 5,300 Adding
p = 20 Solving for p
To find q, we substitute 20 for p in equation (1) and solve for q:
p+q=63 Equation (1)
20+q=63 Substituting 20 for p
q=43. Solving for q
We obtain (20,43),or p=20, q=43
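The elimination above can be verified numerically (note that 415 − 150 = 265):

```python
# System: p + q = 63 ;  415p + 150q = 14750  (decimals cleared)
# Eliminate q by adding -150 * (equation 1) to equation 2.
coef_p = 415 - 150          # coefficient of p after adding: 265
rhs    = 14750 - 150 * 63   # 14750 - 9450 = 5300
p = rhs / coef_p            # pupae of morpho granadensis
q = 63 - p                  # pupae of battus polydamus
```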
12th grade
how do you solve limits that have infinities!?
Tuesday, September 2, 2008 at 8:58pm
12th grade???
What is your school subject for this question?
Thursday, August 28, 2008 at 7:49pm
12th grade
any of the six can be first, any of the remaining five can be second... 6!
Thursday, August 28, 2008 at 7:48pm
12th grade
determine the number of ways arranging the letters in the word handle if there are no restrictions?
Thursday, August 28, 2008 at 7:47pm
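The 6! count can be confirmed by brute force, since the six letters of "handle" are all distinct:

```python
from math import factorial
from itertools import permutations

n_formula = factorial(6)                     # 6 choices, then 5, then 4, ...
n_brute = len(set(permutations("handle")))   # enumerate all arrangements
```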
12th grade government/economics
Cristina -- we don't DO your homework, but we'll be glad to help you if you post YOUR ideas.
Sunday, August 24, 2008 at 10:30pm
12th grade government/economics
what margin is required to choose the president?
Sunday, August 24, 2008 at 10:24pm
12th grade government/economics
what are two ways that amendments to the constitution can be ratified?
Sunday, August 24, 2008 at 10:22pm
12th grade government/economics
what are two ways that amendments to the constitution can be proposed?
Sunday, August 24, 2008 at 10:21pm
12th grade government/economics
The Constitution specifies a three-fourths majority for just one process. What?
Sunday, August 24, 2008 at 10:07pm
10th grade math
Yah, so would you not even look at the 10 to the 12th part of it?
Tuesday, August 19, 2008 at 9:00pm
10th grade math
sry... 1.548937075 x 10 to the 12th. round the number to 3 significant figures.
Tuesday, August 19, 2008 at 8:38pm
10th grade math
ok. Like here in this example... 1.548937075 x 10 to the 12th. Would you do the problem 1.548937075 x 10 to the 12th which would be 1,548,937,075,000 first before you rounded. I think in the number
that I got (1,548,937,075,000) there are 10 significant numbers??
Tuesday, August 19, 2008 at 8:32pm
12th grade math help!!!!
I think Thomas Elva Eddison invented the light bulb
Thursday, April 3, 2008 at 12:11am
12th grade math help!!!!
Who invented the lightbulb? Thanks
Friday, March 28, 2008 at 12:16pm
Math - PreCalc (12th Grade)
How many different basketball teams of 5 players can be created from a group of 7 players?
Monday, March 24, 2014 at 10:14am
Math - PreCalc (12th Grade)
The function f(x) = 6x + 5 is defined over the interval [-1, 4]. If the interval is divided into n equal parts, what is the value of the right end point of the first rectangle?
Friday, March 21, 2014 at 11:42am
Math - PreCalc (12th Grade)
How many 4-digit numbers are possible if the hundreds digit is 8 and if repetition of digits is allowed? A) 100 B) 1,000 C) 9,000 D) 900 E) 1,800
Friday, March 21, 2014 at 11:08am
Math - PreCalc (12th Grade)
The function f(x) = x2 − 5 is defined over the interval [0, 5]. If the interval is divided into n equal parts, what is the area of the kth rectangle from the right? A) [(2+k(5/n))^2+5](5/n) B) [(k(3/
n))^2−5](5/n) C) [(k(5/n))^2+5](5/n) D) [(k(3/n))^2+3](5/n) E) [(k...
Friday, March 21, 2014 at 10:53am
Math - PreCalc (12th Grade)
Use the inverse matrix to solve this system of equations: 4x+3y=7.5 7x+9z=14 4y-z=8.3 4,3,0 7,0,9 0,4,-1
Friday, March 21, 2014 at 9:20am
Math - PreCalc (12th Grade)
I guess, though product is also involved lim(2x) = 2 * lim(x) But I guess C is closest.
Thursday, March 20, 2014 at 1:25pm
Math - PreCalc (12th Grade)
You are correct; you cannot divide by zero. So, f(x) is not defined for x=4. However, for any other value of x, f(x) = (x-4)(x+4)/(x-4) = (x+4) So, if you define f(4) = 8, then f(x) = x+4 for all
values of x, and is now continuous.
Thursday, March 20, 2014 at 12:43pm
Math - PreCalc (12th Grade)
start checking. How about showing some of your own ideas on further problems? I can do them - the goal is for you to work on them, eh?
Wednesday, March 19, 2014 at 12:07pm
Math - PreCalc (12th Grade)
What is the first natural number that causes the statement 1 + 3 + 5 +... + (2n − 1)<=4n − 1 to fail? A) 2 B) 3 C) 4 D) 5 E) 6
Wednesday, March 19, 2014 at 12:05pm
Math - PreCalc (12th Grade)
at x=5, f(x) = 0/0, or undefined. At any other value, though, f(x) = (x-5)(x+5)/(x-5) = x+5 So, f(x) is undefined only at x=5. By defining f(5) = 10, the discontinuity is removed. f(x) = x+5 for all
Wednesday, March 19, 2014 at 12:05pm
Math - PreCalc (12th Grade)
just start checking: 5^1 <= 4^1 + 3^1 5^2 <= 4^2 + 3^2 5^3 > 4^3 + 3^3
Wednesday, March 19, 2014 at 12:03pm
Math - PreCalc (12th Grade)
What does this function show at x = 5? f(x) = (x^2−25)/(x−5) A) removable discontinuity B) jump discontinuity C) infinite discontinuity D) continuity E) none of the above
Wednesday, March 19, 2014 at 12:00pm
Math - PreCalc (12th Grade)
What is the right-hand limit of this function at x=2? g(x) = (x^2−3)/(x−3) A) -1 B) 0 C) 1 D) 2 E) 3
Wednesday, March 19, 2014 at 11:54am
Math - PreCalc (12th Grade)
What is the left-hand limit of f(x) = |x−3|/(x−3) as x approaches 3? A) 1 B) 2 C) 0 D) -1 E) -2
Wednesday, March 19, 2014 at 11:14am
12th Grade Vocabulary Antonyms Practice
You need a swag calculator, but get one, and then calculate my swag (warning calculator might go off the charts and break)
Thursday, February 27, 2014 at 9:11pm
Hey, this is the exact question I am answering Now! are you in COVA? Here is a tip, answer the best you can, and if you are still having trouble and you get the question wrong, just ask your teacher
and he or she will help you gladly. I should know because I am a student. 12th...
Saturday, February 15, 2014 at 3:33pm
Geometry/Algebra 1/12th grade
Find the value of x. Round u'r answer to the nearest hundredth, if necessary. Area of triangle=108m^2 A=1/2 bh b=(x+6)m h=xm So, 108m^2=1/2 ((x+6)m)(xm) 108m^2=1/2 m^2+x^2+6???? If so then what? Sub
m^2 from the left side?? Then what? :/
Friday, August 30, 2013 at 2:19pm
Poetry/British Literature/12th grade
Which of the following poems is a dramatic monologue? (A) "My Last Duchess" (B) "Love Among the Ruins" (C) Childe Roland to the Dark Tower Came (D) all of these
Tuesday, August 27, 2013 at 4:13pm
I've referred your post to Writeacher who is much more experienced with 12th grade English than I am. She's busy now, but will try to help you within a few hours.
Saturday, July 20, 2013 at 3:46pm
12th Grade Life Orientation
In 10-15 lines critically discuss teenage pregnancy in 5 ways in which the human or environmental problems impacts on the community
Wednesday, April 17, 2013 at 2:22pm
12th Grade Life Orientation
In 10-15 lines critically discuss 5 ways in which the human or environmental problems impacts on teenage pregnancy on the community.
Thursday, April 11, 2013 at 10:49am
12th grade life orientation
i dnt hv an answer but i also need help plz i am begging cause i need to submit this by the end of this week
Tuesday, April 2, 2013 at 2:31pm
Adv Chem 12th grade
I'll give you the names. You figure out the rules. Rubidium fluoride copper(II) oxide (the old name is cupric oxide) ammonium oxalate.
Wednesday, December 19, 2012 at 8:57pm
Adv Chem 12th grade
4. Given the chemical formulas of the following compounds, name each compound and state the rules you used to determine each name. RbF CuO (NH4)2C2O4 (Note: C2O4 is called oxalate.)
Wednesday, December 19, 2012 at 7:34pm
Math - Need help please
5/18 of the total smokers are in 12th grade. You need to find how many smokers there are.
Tuesday, December 18, 2012 at 7:41pm
Math 12th grade
(5/14) ÷ (1/7) = 2.5 (25/28)÷(5/14) = 2.5 From the first three terms I will assume that this is a geometric series with r = 2.5 since r > 1, it will diverge, thus it will not have a sum that can be
calculated. The sum will be infinite.
Sunday, December 16, 2012 at 10:30pm
Math 12th grade
Find the sum of each infinite series, or state that the sum doesn't exist: 1/7 + 5/14 + 25/28 + ...
Sunday, December 16, 2012 at 8:38pm
LCM(4,6) = 12 so, on the 12th school day she will have to choose. Not knowing which day of the week was Sept 5, it's hard to say the date of the 12th school day afterwards. Especially if Labor Day is
in there somewhere!
Friday, November 30, 2012 at 4:43am
Brett it is for a 12th grade paper in which I have to summarize an article. The paragraph above is describing the purpose of the article. This is the article "Robotics." World of Computer Science.
Gale, 2002 WHat do you think??
Friday, August 17, 2012 at 3:14pm
Calculus 12th grade (double check my work please)
2 and 3 are right. the first one is a trick question. It is a circle, so the tangent line is always 90 degrees to the radius
Thursday, June 7, 2012 at 12:41am
Im in 11th im almost in 12th. I just need to finish fine arts, I have like 3 essays left, and I still have to do the art safaris and the research papers. I was looking up my questions and came across
yours for help lol. Im assuming your in 12th ?
Tuesday, May 22, 2012 at 1:59pm
12th Grade Life Orientation
in 10-15 lines critically discuss ways in which the human or environmental problem impacts on the community to support your claims
Wednesday, April 25, 2012 at 3:13pm
12th Grade Life Orientation
pollution :LEAD TO DESEASES AND THIS SUDDENLY CAUSES DEATH BECAUSE THE ATMOSPHERE WOULD HAVE ALREADY BEEN POLLUTED. hiv&aids:
Wednesday, April 18, 2012 at 5:13pm
Physics (12th Grade)
100 cal = 418.68 J. W = 418.68 − 322 = 96.68 J
Monday, April 9, 2012 at 10:33am
Physics (12th Grade)
500 cal = 2093.4 J, ΔU = 2093.4 − 100 = 1993.4 J.
Monday, April 9, 2012 at 8:57am
Physics(12th grade)
When the temperature of a gas in a vessel is increased by 1°C then the pressure is increased by 0.5%. What will be its initial temperature?
Monday, April 9, 2012 at 12:52am
Physics (12th Grade)
When an ideal diatomic gas is heated at constant pressure then what fraction of heat given is used to increase internal energy of gas?
Monday, April 9, 2012 at 12:49am
Physics (12th Grade)
Equal volumes of monoatomic and diatomic gases of same initial temperature and pressure are mixed. What will be the ratio of specific heats of the mixture (Cp/Cv)?
Monday, April 9, 2012 at 12:33am
Physics (12th Grade)
In a certain process 500 cal of heat is given to a system and the system does 100J of work. What will be the increase in internal energy of the system?
Monday, April 9, 2012 at 12:27am
Physics (12th Grade)
The isothermal bulk modulus of elasticity of a gas is 1.5 * 100000 N/m^2. What will be its adiabatic bulk modulus of elasticity? Given the ratio of Cp/Cv=1.4
Monday, April 9, 2012 at 12:20am
Physics (12th Grade)
W = νRT ln(V2/V1) = 10 × 8.31 × 273 × ln(20/1) ≈ 6.8 × 10^4 J.
Sunday, April 8, 2012 at 1:41pm
Physics (12th Grade)
What will be the amount of work done in increasing the volume of 10 mols of an ideal gas from one litre to 20 litre at 0°C?
Sunday, April 8, 2012 at 1:08pm
12th Grade Life Orientation
in 10-15 lines critically discuss 5 ways in which the human or environmental problem impacts on the community of teenage pregnancy
Monday, April 2, 2012 at 8:52am
12th Grade Life Orientation
in 10-15 lines critically discuss five ways in which the human or environmental impact causes within any community in South Africa.
Friday, March 23, 2012 at 10:52am
12th Grade Government
_____ believed that there should be no limits to the power of the government once the will of the people was determined.
Thursday, February 9, 2012 at 10:07am
12th Grade Calculus
how will you multiply y = ae^(3x) + be^(-2x) with dy/dx = 3ae^(3x) - 2be^(-2x)
Sunday, January 22, 2012 at 1:07am
On a map of downtown, 12th street is perpendicular to Avenue J. The equation y=-4x+3 represents 12th street. What is the equation representing Avenue J if it passes through the point (8,16)?
Thursday, January 5, 2012 at 11:29pm
12th grade physics
A person pulls a wagon through thick grass with a force of 800 N. The angle that the handle makes with the ground is 35°. How much force moves the wagon forward?
Monday, September 26, 2011 at 6:40pm
12th grade
yes because maths is life and you wont know how to calculate your change so maths its important in life and in future
Saturday, May 28, 2011 at 7:27am
12th grade life orientation
I also want help. Identify one environmental or human factor that causes ill health,accidents,crises and disasters within any community in south africa.
Tuesday, May 10, 2011 at 1:51pm
12th grade life orientation
Air pollution:many activities that human do,most of them pollute the environment.Activities such as burning of fossil fuels,deforestation and others,they all cause illness.
Sunday, May 8, 2011 at 10:22am
Spanish7th grade-Please check
I'm not sure about the last one-it is referring to 7-12th-that's why I wasn't sure whether it would be true or false
Tuesday, April 19, 2011 at 8:11pm
12th grade English
do the bib and note cards for elizabeth barret browning and robert browning
Wednesday, April 13, 2011 at 2:11pm
3rd Grade Math
A full circle has 360°. In one hour the "hour hand" moves 1/12th of that. What do you get if you divide 360 by 12?
Tuesday, March 8, 2011 at 8:27pm
12th grade
I am assuming you are talking about committees... Committees examine each bill and combine like bills to create a final bill that is presented to the Legislature.
Monday, March 7, 2011 at 8:43pm
12th grade Trigonometry
theta = 209.55192° theta = 76.70829°
Friday, March 4, 2011 at 6:57am
12th grade chemistry
How many molecules of water can be made from 9.21 x 10^22 molecules of hydrogen sulfate? The equation is 2Fe(OH)3 + 3H2SO4 -> Fe2(SO4)3 + 6H2O
Wednesday, March 2, 2011 at 12:04am
12th grade, Advance Algebra
How to press the value of log, to evaluate log, to find the unknown log using EL-531WH, advance D.A.L
Sunday, February 27, 2011 at 6:48am
12th grade
think of a business in your local area. describe its operation in terms of factor markets and product markets
Tuesday, February 8, 2011 at 5:52pm
12th grade
if you pay $160 for a camera after receiving a discount of 2%, what was the price of the camera before the discount?
Monday, January 31, 2011 at 10:51pm
12th grade calculus
I would integrate a dArea as ydx from x=1 to 4 area=INT (x^1/2 + 2) dx from x=1 to 4 area= 2/3 x^3/2 + 2x from 1 to 4 = 2/3 (8-1)+2(4-1) check that, I did it in my head.
Tuesday, January 25, 2011 at 3:10pm
12th grade
one type of probability used everyday is: a)experienced probability b)scientific probability c)frequency of occurence d)none of the above
Wednesday, December 15, 2010 at 9:20pm
12th grade
can anyone explain how to divide in algebra? I have this problem: (15x2-24+9)/(3x-3) I am not looking for someone to give me the answer straight out I just need to know how to get there.
Wednesday, December 15, 2010 at 10:42am
12th grade
The equation balances and I assume the starting triprotic acid is citric acid?
Wednesday, December 15, 2010 at 4:29am
12th grade international business
Change 125 mL of gas at 25 degrees Celsius to a volume at 36 degrees Celsius?
Monday, December 13, 2010 at 6:32pm
12th grade
Objects on the moon's surface have an acceleration due to gravity one-sixth that on the earth's. What would the 40 kg boy weigh on the moon? (Round answer to nearest tenth.)
Thursday, December 9, 2010 at 10:38am
12th grade math? physics?
Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question.
Tuesday, December 7, 2010 at 11:32pm
12th grade math needs help
Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question.
Tuesday, December 7, 2010 at 10:51pm
12th grade algebra
Indicate your subject in the "School Subject" box, so those with expertise in the area will respond to the question. We do not have your diagram, so we don't know if they have common fences.
Thursday, December 2, 2010 at 5:46pm
English(how is it?)
What is your question? This reminds me that 12th grade was added to my grandfather's rural Indiana high school about 1895 -- 115 years ago.
Wednesday, December 1, 2010 at 11:25pm
English(how is it?)
Should there be abolishment of the 12th grade? To some this may sound like a good idea in theory. But there could be too many complications that come along as well. Some feel as if the 12th grade
should be abolished in favor of job training. As well as trying to encouraging ...
Wednesday, December 1, 2010 at 11:17pm
12th grade English
I've never read that book. You might also check it out on www.sparknotes.com/lit and www.bookrags.com
Tuesday, November 30, 2010 at 9:23pm
12th grade English
I need help i have to have a literary term for ch. 1,2,3,5,12,13,14,16,and 26 im having trouble figuring out how to do that so can somone help me please
Tuesday, November 30, 2010 at 9:11pm
12th grade
zero. Kinetic Energy is the energy of motion. Object not moving so KE=0.
Tuesday, November 30, 2010 at 12:12am
Wittmann Math Tutor
Find a Wittmann Math Tutor
...I have a BA in Special Education and Elementary Education from William Paterson University. I also hold a M.Ed in Curriculum and Instruction from Arizona State University. I have taught in the
areas of Special Education and most subjects in grades K-8.
12 Subjects: including algebra 2, elementary (k-6th), special needs, discrete math
...Always wanting to be the best kept me up late at night, knowing that my hard work would pay off in a scholarship to the school of my choice. I graduated Valedictorian from my college prep high
school, and got accepted into every school to which I applied (MIT, Princeton, Notre Dame, and Tulane, ...
20 Subjects: including calculus, English, trigonometry, writing
...I have very little official tutoring history, but I have had many unpaid tutoring opportunities. I have helped students in Math in High School and assisted students in several classes in
college. My tutoring approach is very non-directive because I believe that it is imperative for a student to feel like they can conquer the subject matter.
11 Subjects: including algebra 1, trigonometry, statistics, prealgebra
I am an Instructional Math Assistant. While I was in the AF I completed my AS in Emergency Management Information Systems Technology. I am looking forward to attending ASU in the spring to
finish my BA.
19 Subjects: including algebra 2, calculus, elementary (k-6th), vocabulary
...I have tutored students in most math and science courses, and I am comfortable with all grade levels, from elementary school to college. My teaching style varies according to the individual
needs of the student. My main goal is to help the student become enthusiastic about the specific subject or activity, and confident in their ability to interpret what a question is actually
28 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2
List elements of the set in roster notation
May 29th 2012, 08:23 PM #1
{x | 2-x = 4 and x is a fraction}
The answer is -2, right? I'm only asking because the question before it asks the same thing and stipulates that x must be an integer. I have a feeling there is a trick somewhere.
Re: List elements of the set in roster notation
The integers are a subset of the rationals (fractions), so yes, the answer is x = -2.
Re: List elements of the set in roster notation
If we assume that "fraction" here is a synonym of a "rational number," then the answer is {-2}, i.e., a set. However, it seems that "fraction" here means "a rational number that is not an
integer." In this case, the answer is the empty set.
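Both readings of "fraction" can be checked mechanically. Here is a small Python sketch (the variable names are mine, and `Fraction` stands in for the rationals):

```python
from fractions import Fraction

# Solve 2 - x = 4 exactly over the rationals: x = 2 - 4 = -2.
x = Fraction(2) - Fraction(4)

# Reading 1: "fraction" means "rational number". Every integer is
# rational, so -2 qualifies and the roster is {-2}.
rational_reading = {x}

# Reading 2: "fraction" means "rational that is not an integer".
# -2 has denominator 1, so it is excluded and the roster is empty.
strict_reading = {x} if x.denominator != 1 else set()

print(rational_reading, strict_reading)
```

Under the first reading the roster is {-2}; under the second it is the empty set, matching the two answers above.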
Preemptive MythBusters: Giant Water Slide Jump
Rhett Allain on May 16, 2010
I am excited. This Wednesday, the MythBusters are doing the giant water slide jump. Maybe you are new to the internet and you haven’t seen this video. Here it is:
And since it is as old as the hills, of course I have already analyzed it – actually twice. First, the video is fake – but it is an excellent fake. Here is another site with details on how this was done.
What did I look at in my previous posts? Here is a summary.
• The video is difficult to analyze because of perspective changes.
• Even with these problems, nothing says it has to be fake. The vertical acceleration during the free fall is constant-ish.
• The horizontal velocity seems constant.
• To get to the speed that it appears the guy has, the slope would need to be around 40 degrees.
• The launch speed would be about 19 m/s at about a 32 degree launch angle.
• Landing in the pool should not be a problem. After all, Professor Splash jumped from 35 feet into a pool only 1 foot deep. So that is not a problem.
• The biggest problem would be variation in the jump – which it appears the MythBusters will look at. This should be great.
Worked out solution
It turns out that I also worked out this problem for a snow board jump (basically same idea). Here is the post on calculating the jump of a snow board ramp. I also made a spreadsheet for that case –
so you can put in your own numbers.
The MythBusters Episode
This looks great. You can check out some of the clips already. Here is a shot that shows their setup.
From this, you can see they plan to do a 165 foot slide at 24 degrees with a 30 degree jump ramp. If they use this particular camera angle, it will be great for video analysis. If I put that info in
the spreadsheet calculator (assuming a ramp length of 3 meters – which they did not specify) I get that the person will go 30 meters (98 feet).
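Roughly the same ballpark falls out of first-principles kinematics. The sketch below ignores friction and the ramp geometry entirely, so it should overshoot the spreadsheet's figure; all of the input numbers come from the setup described above:

```python
import math

g = 9.81                         # m/s^2
slide_len = 165 * 0.3048         # 165 ft slide, converted to meters
slide_angle = math.radians(24)   # slide slope
launch_angle = math.radians(30)  # jump ramp angle

# Speed at the bottom of a frictionless slide: v = sqrt(2*g*L*sin(theta))
v = math.sqrt(2 * g * slide_len * math.sin(slide_angle))

# Level-ground projectile range for launch speed v at the ramp angle
# (ignores the ramp's height above the landing zone)
rng = v**2 * math.sin(2 * launch_angle) / g

print(f"launch speed ~{v:.1f} m/s, range ~{rng:.1f} m")
```

This comes out around 20 m/s and 35 m, a few meters past the ~30 m spreadsheet figure; the gap presumably comes from the losses and geometry that the spreadsheet models and this sketch doesn't.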
I will make sure to record this episode.
Optimization of microchannel heat sinks using entropy generation minimization method
Conference Proceeding
Dept. of Mech. Eng., Waterloo Univ., Ont.
IEEE Transactions on Components and Packaging Technologies
(Impact Factor: 0.94). 04/2006; 32(2):78–86. DOI: 10.1109/STHERM.2006.1625210. ISBN: 1-4244-0153-4. In proceedings of: Twenty-Second Annual IEEE Semiconductor Thermal Measurement and Management Symposium, 2006.
ABSTRACT In this study, an entropy generation minimization (EGM) procedure is employed to optimize the overall performance of microchannel heat sinks. This allows the combined effects of thermal
resistance and pressure drop to be assessed simultaneously as the heat sink interacts with the surrounding flow field. New general expressions for the entropy generation rate are developed by
considering an appropriate control volume and applying mass, energy, and entropy balances. The effect of channel aspect ratio, fin spacing ratio, heat sink material, Knudsen numbers and accommodation
coefficients on the entropy generation rate is investigated in the slip flow region. Analytical/empirical correlations are used for heat transfer and friction coefficients, where the characteristic
length is used as the hydraulic diameter of the channel. A parametric study is also performed to show the effects of different design variables on the overall performance of microchannel heat sinks
ABSTRACT: Cooling systems take a significant portion of the total mass and/or volume of power electronic systems. In order to design a converter with high power density, it is necessary to
minimize the converter's cooling system volume for a given maximum tolerable thermal resistance. This paper theoretically investigates whether the cooling system volume can be significantly
reduced by employing new advanced composite materials like isotropic aluminum/diamond composites or anisotropic highly orientated pyrolytic graphite. Another strategy to improve the power density
of the cooling system is to increase the rotating speed and/or the diameter of the fan, which is limited by increasing power consumption of the fan. Fan scaling laws are employed in order to
describe volume and thermal resistance of an optimized cooling system (fan plus heat sink), resulting in a single compact equation dependent on just two design parameters. Based on this equation,
a deep insight into different design strategies and their general potentials is possible. The theory of the design process is verified experimentally for cooling a 10 kW converter. Further
experimental results showing the result of the operation of the optimized heat sink are also presented.
IEEE Transactions on Components, Packaging, and Manufacturing Technology 05/2011; · 1.26 Impact Factor
03/2012; ISBN: 978-953-51-0278-6
ABSTRACT: In this paper, the optimization of the cooling performance of a rectangular microchannel heat sink is investigated with four different gaseous coolants; air, ammonia gas,
dichlorodifluoromethane (R-12) and chlorofluoromethane (R-22). A systematic robust thermal resistance model together with a methodical pumping power calculation is used to formulate the objective
functions, the thermal resistance and pumping power. The non-dominated sorting genetic algorithm (NSGA-II), a multi-objective algorithm, is applied in the optimization procedure. The optimized
thermal resistances obtained are 0.178, 0.14, 0.08 and 0.133°K/W for the pumping powers of 6.4, 4, 22.4 and 16.5 W for air, ammonia gas, R-12 and R-22, respectively. These results show that among
all the gaseous coolants investigated in the current study, ammonia gas exhibited balanced thermal and hydrodynamic performances. Due to the Montreal Protocol, the coolant R-12 is no longer
produced while R-22 will eventually be phased out. The results from ammonia provide a strong motivation to conduct more investigations on the potential usage of this gaseous coolant in the
electronic cooling industry.
Procedia Engineering. 01/2013; 56:337–343.
Combinatorics Formulas
The number of permutations (arrangements) of n different things taken r at a time is
nPr = n!/(n - r)!
where n! = n(n-1)(n-2)(n-3)......4 x 3 x 2 x 1
Combinations
The number of combinations (selections) of n different things taken r at a time is
nCr = n!/(r!(n - r)!)
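As a quick check of both counting formulas, here is a short Python sketch using the standard library:

```python
import math

n, r = 6, 2  # e.g. arranging or choosing 2 of 6 different things

# Permutations: ordered arrangements, nPr = n!/(n-r)!
p = math.factorial(n) // math.factorial(n - r)

# Combinations: unordered selections, nCr = n!/(r!(n-r)!)
c = math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

# Python 3.8+ has both built in, so the formulas can be cross-checked:
assert p == math.perm(n, r)  # 30
assert c == math.comb(n, r)  # 15
```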
Character is who you are when no one is looking.
Schamplifier - Heat Sink Design
Schamplifier – Heat Sink Design
[Edit: Whoa! I was way off with my calculation of the heat sink surface area. 12"x3" is 0.023 m^2, not 0.21 m^2. This changes a lot: it makes $R_{\theta(hs-a)-conv} \approx 8.6 ^{\circ}C / W$, which is way too much. If I up the heat sink size to 12"x6", I get $R_{\theta(hs-a)-conv} \approx 4.3 ^{\circ}C / W$, which is not as awesome, but is probably doable. I've updated the analysis to reflect all of this.]
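For what it's worth, both corrected figures fall out of the simple flat-plate convection model R = 1/(hA). The coefficient h = 5 W/(m²·K) in this sketch is my assumption (a typical still-air, natural-convection value), not a number from the post:

```python
h = 5.0        # W/(m^2*K), assumed natural-convection coefficient
in2m = 0.0254  # inches to meters

r_conv = {}
for w_in, l_in in [(12, 3), (12, 6)]:
    area = (w_in * in2m) * (l_in * in2m)   # one-sided plate area, m^2
    r_conv[(w_in, l_in)] = 1 / (h * area)  # degC per watt

print(r_conv)  # 12"x3" -> ~8.6 degC/W, 12"x6" -> ~4.3 degC/W
```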
Like the Squelette, I aim to design an enclosure that can also serve as the heat sink for the amplifier ICs (and any other power electronics that end up inside, like voltage regulators for control
circuits, and so forth). The heat-sink portion of the enclosure will be made of sheet and angle aluminum.
Since I’m don’t yet have a list of everything I want to include inside the enclosure, I’d like to provide a little more breathing room than the Squelette enclosure would offer. Further, the two
amplifier ICs will probably not be attached to the same panel of the enclosure. Finally, I’d like to hook some kind of heat sink up to the ICs before I construct the enclosure so I can test out the
circuit without burning everything up. I could just use the dimensions specified for the Squelette enclosure, but that would be unwieldy for my test and wasteful of raw material for my enclosure, so
I’d like to understand thermal design enough to figure out what kind of heat sink this device requires and determine with some level of confidence whether a particular enclosure design will function
to dissipate heat adequately for the amplifier.
I had a helpful exchange with the author of the Squelette, Ross Herschberger, in the “Heatsink Design” thread on the Squelette main page. This exchange led me, ultimately, to this analysis.
Knowing zilch about heat dissipation and transfer, I had to start learning from scratch in support of this project. I found this EEVblog episode on thermal design the most helpful. This short page
and calculator on convective heat transfer helped me to understand how convection factors in. This page on heat sink design provides a lot more general background information. Here is a calculator
for heat sink temperature rise that made it quick and easy to check my results as I was fiddling around with numbers. The Wikipedia page on Thermal resistance in electronics helped me understand how
the heat travels through heat sink materials. Finally, I found this calculator that was somewhat helpful in finding the thermal resistance of an aluminum slab.
Maximum Power Dissipation
Before we can determine what kind of heat sink capability we require, we must first determine how much power the amplifier will be dissipating. From the LM1875T datasheet, given the power supply
voltage and average load, we can compute the maximum power dissipation of the device. The formula is
$P_{D(max)} \approx \frac{V_S^2}{2\pi^2 R_L} + P_Q \ \ \ (1)$
• $P_{D(max)}$ is the maximum power dissipation
• $V_s$ is the total power supply voltage (in this case, for the Squelette, ±18V = 36V)
• $R_L$ is the load resistance (in this case, 8Ω impedance speakers)
• $P_Q$ is the quiescent power dissipation of the device
The datasheet gives 100mA as the maximum quiescent current, and claims for example that a 60V system would have 6W of quiescent power, but it is not clear to me whether we can just use Joule’s law (P = IE) to determine the power here (since it’s not just a plain DC system). If we did cheat and use Joule’s law, we’d get at most 3.6W for $P_Q$. Plugging everything else in to solve for $P_{D(max)}$
, we get roughly 11.8W for the power dissipation (not far off from the 11W given in the Squelette instructions). This also looks pretty close to the value shown on the datasheet graph for power
dissipation for a ±18V supply, which is further confirmation that we’re pretty close.(1)
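To sanity-check the arithmetic, Eq. 1 is easy to script. This is just a sketch with this project's values; the 3.6W quiescent figure is my Joule's-law guess from above, not a datasheet spec:

```python
import math

def p_d_max(v_s, r_l, p_q):
    # Eq. 1: worst-case dissipation of a class-AB output stage plus
    # the quiescent power of the device itself.
    return v_s**2 / (2 * math.pi**2 * r_l) + p_q

# +/-18V supply (36V total), 8-ohm load, ~3.6W quiescent
print(round(p_d_max(36.0, 8.0, 3.6), 1))  # → 11.8
```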
The datasheet lists some example calculations for determining the maximum allowable heat sink to ambient thermal resistance. The relevant formula is
$R_\theta = \frac{T_{J(max)} - T_{A(max)}} {P_{D(max)}} \ \ \ (2)$
• $R_{\theta}$ is the total thermal resistance from the junction to ambient, the sum of
□ $R_{\theta(j-c)}$ (junction to case) (listed in the datasheet as 2 ^oC/W)
□ $R_{\theta(c-hs)}$ (case to heat-sink, e.g., the thermal resistance of your insulating pad or thermal compound layer) (in my case, the insulating pads are claimed at 0.33 ^oC/W)
□ $R_{\theta(hs-a)}$ (heat sink to ambient) (this is what we’re trying to determine)
• $T_{J(max)}$ is the maximum junction temperature from the datasheet (listed as 150 ^oC)
• $T_{A(max)}$ is the maximum ambient temperature at which we are planning to operate the device (for example, 70 ^oC)
• $P_{D(max)}$ is the maximum power dissipation necessary (computed above as ~12W)
For our example, if we want to maintain a die temperature below the maximum rated 150 ^oC for ambient temperatures up to 70 ^oC, with our power dissipation at 12W, we would need a total thermal
resistance from junction to ambient of less than (150 – 70)/12 = 6.67 ^oC/W.
Since the other components of $R_{\theta}$ are known,
$R_{\theta(hs-a)} \leq 6.67 - (2+0.33)$
$R_{\theta(hs-a)} \leq 4.33 ^{\circ}C/W \ \ \ (2b)$.
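The same budget arithmetic as Eq. 2 and 2b, in a couple of lines (the 2 and 0.33 ^oC/W figures are the datasheet and pad values quoted above):

```python
def hs_to_ambient_budget(t_j_max, t_a_max, p_d, r_jc, r_c_hs):
    # Eq. 2: total junction-to-ambient allowance, minus the fixed
    # junction-to-case and case-to-heat-sink resistances (Eq. 2b).
    return (t_j_max - t_a_max) / p_d - (r_jc + r_c_hs)

budget = hs_to_ambient_budget(150, 70, 12, 2.0, 0.33)
print(round(budget, 1))  # → 4.3, i.e. the ~4.33 deg C/W limit derived above
```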
So, given a chunk of aluminum of arbitrary size, how can we determine whether its thermal resistance is less than 4.33 ^oC/W? There are two factors at play here:
1. How quickly does the heat sink material transfer heat from the area where the heat source is connected (assuming it is smaller than the size of the heat sink itself) throughout the body of the
heat sink? We’ll call this value “heat sink thermal resistance due to conductance”, or $R_{\theta(hs-a)_{cond}}$
2. How quickly will the heat be dissipated from the heatsink due to convection and thermal emission? We’ll call this value “heat sink thermal resistance due to convection” or $R_{\theta(hs-a)_{conv}}$.
They both contribute to the total thermal resistance of the heat sink. Imagine a heat sink made of a perfectly conductive material that instantly changed temperature across its entire mass — its
thermal resistance would still be bound by the amount of heat that could be dissipated by convection. Likewise, imagine a forced cooling system so effective that any heat on the surface of the heat
sink would be instantly whisked away to the frigid depths of space — no matter how quickly the heat could be drawn off the surface of the heat sink material, it must still travel from the source
through the material to the surface, and so presents some thermal resistance. Such a cooling system would be of little use if the heat sink had a thermal conductance so poor (and therefore, that the
heat took so long to travel from the source to the surface) that the IC had been fried by the time the heat could be dissipated.
Since I don’t know how these two factors affect one another, we will take the naive, conservative approach and model the thermal resistance of the heat sink as a whole as the sum of the two factors:
$R_{\theta(hs-a)} = R_{\theta(hs-a)_{cond}} + R_{\theta(hs-a)_{conv}} \ \ \ (3)$
Thermal Resistance of the Heat Sink
The rate at which the body of the heat sink can conduct heat away from a warmer place (where the IC is attached) to a cooler place (presumably, the edge of the heat sink) is determined by the heat
conductivity of the heat sink material and its shape. For a simplified model, consider Fourier’s Law for heat conduction (from Wikipedia, Thermal Resistance in Electronics):
From Fourier’s Law for heat conduction, the following equation can be derived, and is valid as long as all of the parameters (x, A, and k) are constant throughout the sample:
$R_{\theta} = \frac{x}{k A}$
□ $R_{\theta}$ is the thermal resistance (across the length of the material) (K/W)
□ x is the length of the material (measured on a path parallel to the heat flow) (m)
□ k is the thermal conductivity of the material ( W/(K·m) )
□ A is the total cross-sectional area of the material (measured perpendicular to the heat flow) (m^2)
This gives us the general idea (that is, thermal resistance depends on how much material through which the heat must flow, and how fast the material carries heat). Heat will be transferred faster
through a shorter distance and through a wider cross section. This equation might work for us if our heat sink were the shape of the IC package (TO-220, in this case) and extruded out for some
length. Since we want to incorporate the heat sink into the enclosure as a broad plate, rather than an extruded solid mass, however, it’s not so simple. The heat will be spreading along the (very
narrow) cross-section of the aluminum from some point (presumably, near the middle) to the edges.
I could not find a good example set of equations with which to model this. However, I did come across this online calculator for “Slab Thermal Resistance With Constriction”, which models thermal
resistance from a smaller source (like our IC) to a larger slab of material (like our heat sink). It does not take into account convection, but that’s okay, we’re handling that elsewhere anyway.
If we plug in the following values:
• heat source width (TO-220 package dimensions, roughly 10mm)
• heat source length (TO-220 package width, roughly 15mm)
• slab width (heat sink width, 6″ or 0.1524m)
• slab length (heat sink length, 12″ or 0.3048m)
• slab height (heat sink thickness, 1/8″ or 3.175mm)
• thermal conductivity (of aluminum, 200 W/m^oC)
• film coefficient (of air, 5 W/m^2oC), though this value doesn’t seem to matter much
We get a result of $R_{\theta(hs-a)_{cond}} \approx 0.52 ^{\circ}C/W$ (see note (2))
Thermal Resistance Due to Convection
Ultimately, all heat dissipated by the heat sink must be dissipated by free convection (since I don’t want to use a fan or other active cooling system). Since we know the amount of power we want to
dissipate (12 W) and the size of the heat sink we are trying to use, we can use Newton’s Law of Cooling to determine the temperature at which the heat sink will dissipate that power. According to
this handy explanation of convective heat transfer,
The equation for convection can be expressed as:
$q = k A dT\ \ \ (4)$
$q$ is the heat transferred per unit time (W)
$A$ is the heat transfer area of the surface (m^2)
$k$ is the convective heat transfer coefficient of the process (W/m^2K or W/m^2oC)
$dT$ is the temperature difference between the surface and the bulk fluid (K or ^oC)
In our case, we know:
• $q$ (the power to dissipate, 12 W)
• $A$ (the area of our heat sink, for a 12″x6″ plate, roughly 0.046 m^2)
• $k$ (for air, 5 W/m^2 oC. Various places I looked had widely differing values for the convective heat transfer coefficient of air, but 5 W/m^2 oC seemed generally to be the worst value, so we’ll
be conservative and start with that)
We want to solve for dT (the temperature rise above ambient), which works out to
$dT = \frac{q}{kA} \ \ \ (5)$
or, substituting our values, ~52 ^oC. This means that, with a 12″x6″ heat sink attached to a device dissipating 12W of power, we can expect the heat sink to rise ~52 ^oC above the ambient
temperature. To compare, by the same formula, if we had a 3″x3″ heat sink, we would expect a rise of ~413 ^oC above ambient at the same power dissipation.
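Eq. 5 substituted directly (k = 5, single-face area); the results are ballpark values that shift a little with the exact k and A used:

```python
def convection_rise(q, k, area_m2):
    # Eq. 5: steady-state rise above ambient, dT = q / (k * A)
    return q / (k * area_m2)

INCH = 0.0254
a_12x6 = (12 * INCH) * (6 * INCH)   # one face of the plate, ~0.046 m^2
a_3x3 = (3 * INCH) * (3 * INCH)
print(round(convection_rise(12, 5.0, a_12x6)))  # → 52 (deg C above ambient)
print(round(convection_rise(12, 5.0, a_3x3)))   # → 413
```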
Now that we know the temperature our heat sink will be when dissipating a given amount of power, it is easy to compute the effective thermal resistance. In this case, it will be:
$R_{\theta(hs-a)conv} = \frac{dT}{q} \ \ \ (6)$
Substituting Eq. 5 into Eq. 6 yields:
$R_{\theta(hs-a)conv} = \frac{\frac{q}{kA}}{q} \ \ \ (7)$
which simplifies to:
$R_{\theta(hs-a)conv} = \frac{1}{kA} \ \ \ (8)$
So, for our 12″x6″ chunk of 1/16″ aluminum plate, substituting 5 W/m^2 oC for k and 0.046 m^2 for A, we get an $R_{\theta(hs-a)conv}$ of 4.3 ^oC/W.
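Eq. 8 makes it quick to compare plate sizes; this also reproduces the 8.6 ^oC/W figure for the original 12″x3″ plate from the correction note at the top:

```python
def r_conv(k, width_m, length_m):
    # Eq. 8: convective resistance of one face of a flat plate, 1 / (k * A)
    return 1.0 / (k * width_m * length_m)

INCH = 0.0254
print(round(r_conv(5.0, 6 * INCH, 12 * INCH), 1))  # 12"x6" → 4.3
print(round(r_conv(5.0, 3 * INCH, 12 * INCH), 1))  # 12"x3" → 8.6
```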
Equation 8 seems to be a severe simplification, but perhaps we can determine experimentally whether it is an oversimplification. The question remains whether it is useful as a rule-of-thumb in the
absence of better information.
Working with our conservative, naive approach, we can go back to equation (3) to get what should be an upper bound for the thermal resistance of the heat sink to ambient, using our example values:
$R_{\theta(hs-a)} = R_{\theta(hs-a)_{cond}} + R_{\theta(hs-a)_{conv}}$
$R_{\theta(hs-a)} = 0.52^{\circ}C/W + 4.3^{\circ}C/W$
$R_{\theta(hs-a)} = 4.8^{\circ}C/W$
Since we have a max $R_{\theta(hs-a)}$ limit of 4.33 ^oC/W (from (2b)), it looks like we won’t be able to run quite at 70 ^oC ambient temperature. If our total thermal resistance with this heatsink
is 2 + 0.33 + 4.8 ≈ 7.1 ^oC/W, we can see that at an ambient temperature of 20 ^oC, dissipating 12 W of power, our junction temperature should be:
$T_J = 20 ^{\circ}C + 12 W * 7.1 ^{\circ}C/W$
$T_J = 20 ^{\circ}C + 85 ^{\circ}C$
$T_J = 105 ^{\circ}C$
This is within the max operating temperature, but is quite hot. If we recompute our maximum allowable ambient temperature, we get 64 ^oC. At the very least, this means we must be careful to make
sure the ambient temperature stays within reason (for example, we shouldn’t put the amplifier, enclosure and all, into some other sealed container).
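Chaining everything together reproduces the junction-temperature numbers above (using the summed resistances before rounding, hence a fraction of a degree of drift):

```python
def junction_temp(t_ambient, p_d, r_total):
    # T_J = T_A + P_D * R_theta(junction to ambient)
    return t_ambient + p_d * r_total

r_total = 2.0 + 0.33 + (0.52 + 4.3)   # j-c, c-hs, and the Eq. 3 estimate
t_j = junction_temp(20, 12, r_total)
print(round(t_j, 1))                  # → 105.8, the ~105 deg C figure above

t_a_max = 150 - 12 * r_total          # hottest allowable ambient
print(round(t_a_max, 1))              # → 64.2, the ~64 deg C ceiling above
```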
An oft-repeated quotation, commonly attributed to Albert Einstein, goes something like
Everything should be made as simple as possible, but not simpler.
Now that we have made the heat sink design for this amp simple, we need to determine whether we have made it too simple. There were certainly a lot of assumptions and fudging going on in these
calculations, and there’s no better way to confirm them (or prove them wrong) than with some experimentation.
Let’s revisit our assumptions:
• Our computation of $P_Q$ using Joule’s law and the eye-balling of the graphs in the datasheet is pretty close to the right value
• The value of $R_{\theta(c-hs)}$ from the insulating pad datasheet is accurate
• My model of isolating the convection and conduction factors and summing them for an approximation of the total thermal resistance is not too far off
• The value of the conduction thermal resistance factor, which came from an online calculator (the correctness of which I cannot examine and verify), is pretty close to the right value
• It is sufficient to use in that calculator the total package dimensions instead of the dimensions of the die itself (see note 2)
• The value of 5 W/m^2K for the convective heat transfer coefficient of air is a good lower bound, and is not made even worse by size, shape, and orientation factors
• I didn’t screw up the algebra and arithmetic anywhere
To put these assumptions to the test, I will actually construct a heat sink, re-do this analysis (based on the final size and shape of it), and then run some tests to see if the measured case and
heat sink temperatures line up with the expectations. That at least should give me some sense of how close this approach will get me, or if it’s so far off as to burn up my amplifier chip.
1. The datasheet also provides a graph for supply current vs. supply voltage. If this is to mean quiescent current, then it looks like the chip draws ~60mA (instead of ~100mA) at ±18V, which
(again, cheating with Joule’s law) would give us 2.16W instead of 3.6W for $P_Q$ (quiescent power dissipation) in our total equation, which would bring $P_{D(max)}$ to just under 11W instead of
just under 12W. Working on the upper side of these values will provide us with a bit more headroom. We can always go back and recompute with more precision if the ballpark answers are too
difficult to work with.
2. In email discussions, a technical associate of Novel Concepts, Inc. (the organization which produced the calculator) advised me to use not the dimensions of the TO-220 package as the source but
rather the dimensions of the die inside the chip (which he gave as ~4.27 mm). I am not so sure about this approach, mainly because we already have from the datasheets the $R_{\theta(j-c)}$, but
also because most datasheets I have seen do not provide the dimensions of the die itself. Using the reduced dimensions results in a value of ~0.8 ^oC / W, rather than ~0.5 ^oC / W. That seems
relatively insignificant to me, but I’m certainly open to other feedback on the matter.
2 Responses to Schamplifier – Heat Sink Design
1. I n00bed the heat sink surface area conversion to meters, so the original thermal resistance due to convection was about an order of magnitude too low. I upped the heat sink size, and corrected
the figures, and arrived at a doable, though not awesome, new result.
2. Funny, I’ve been obsessed with dealing with thermal issues correctly; just yesterday I started the layout of a new power distribution board and while researching ran across this article talking
about the use of thermal vias for bottom-side PCB cooling. It was particularly interesting as I happen to use D2PAK package voltage regulators a lot.
Thanks for the post!
Re: Mathematics skills for writing a compiler?
Rafael 'Dido' Sevilla <dido@imperium.ph>
13 Sep 2004 12:27:03 -0400
From comp.compilers
From: Rafael 'Dido' Sevilla <dido@imperium.ph>
Newsgroups: comp.compilers
Date: 13 Sep 2004 12:27:03 -0400
Organization: Compilers Central
References: 04-09-063
Keywords: practice
Posted-Date: 13 Sep 2004 12:27:03 EDT
On Wed, Sep 08, 2004 at 12:04:30PM -0400, Jack wrote:
> What mathematical skills do I need in order to build an "average" compiler?
> such as numerical methods, CFG, DFS.... etc
To write a compiler, the main thing you need is some background in
formal language and automata theory. All three levels of the Chomsky
Hierarchy are used in the construction of a compiler: most lexical
analyzers are based on regular languages, most parsing is done using
deterministic subsets of context-free languages, and well, all target
architectures are essentially linear bounded automata that can be
idealized as Turing machines. Naturally algorithms and data structures
are a necessity, and graph theory is certainly useful, in order to
swallow some of the techniques for code generation and optimization.
Numerical methods are not needed unless you're planning on designing the
successor to FORTRAN, a dialect of Matlab, or some other language that
specifically has numerical analysis as its problem domain.
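To make the "lexing is regular languages" point concrete, here is a hypothetical toy lexer (my illustration, not the poster's): each token class is a regular expression, and the combined pattern plays the role of the recognizing finite automaton:

```python
import re

# Token classes as regular expressions; names are illustrative only.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def lex(src):
    # Scan left to right, emitting (class, text) pairs and dropping whitespace.
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(lex("x = 42 + y")))
# → [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```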
Te capiam, cuniculus sceleste!
MathGroup Archive: August 2004 [00489]
Re: Re: Technical Publishing Made Easy with New Wolfram Publicon Software
• To: mathgroup at smc.vnet.net
• Subject: [mg50292] Re: [mg50281] Re: Technical Publishing Made Easy with New Wolfram Publicon Software
• From: DrBob <drbob at bigfoot.com>
• Date: Wed, 25 Aug 2004 03:36:00 -0400 (EDT)
• References: <cg20f3$od7$1@smc.vnet.net> <cgcicp$eo7$1@smc.vnet.net> <200408241022.GAA06691@smc.vnet.net>
• Reply-to: drbob at bigfoot.com
• Sender: owner-wri-mathgroup at wolfram.com
Thanks for doing all that research for us!
However, footnotes don't go at the end of a document--they go at the foot of each page.
Endnotes are getting to be the new standard, but they're far less useful to a reader.
On Tue, 24 Aug 2004 06:22:23 -0400 (EDT), Steve Luttrell <steve_usenet at _removemefirst_luttrell.org.uk> wrote:
> I have already used Publicon to write several papers (I had a busy
> weekend!). It fills in a much needed gap that Mathematica itself doesn't
> cover, at least not without a great deal of additional effort on my part. As
> I see it, Publicon aims to do what Scientific Word does but in a way that is
> preferable to Mathematica users.
> The several papers I wrote in Publicon were translations from papers I had
> already authored in Mathematica, but which I wanted to convert to a form
> from which I could easily generate LaTeX (I wanted to submit them to
> arXiv.org) I found that I could NOT simply read a Mathematica notebook into
> Publicon and have it behave in the same way as a notebook I had created
> directly in Publicon (e.g. Save As LaTeX did NOT work cleanly). However, I
> did find that copying material across from a Mathematica notebook (using
> Copy As Cell Expression) worked very well, but I had to do recreate the
> hyperlinks (cross references) afresh within Publicon in order for them to
> work correctly there. If I didn't do this then "Gather Backmatter" (which
> appears to rely on the special way that Publicon creates its cross
> references) did not work correctly.
> It would save me a great deal of time if I could automatically generate a
> Publicon notebook from a previously generated Mathematica notebook, so that
> it behaves as if it had been generated within Publicon in the first place.
> Maybe it is possible to design a filter to do this conversion automatically;
> this should be possible because there were only a few fairly well-defined
> conversion problems I encountered, and which I fixed manually.
> I have found NO problems at all in reading a Publicon notebook using
> Mathematica. However, it seems that a notebook created using Publicon knows
> that it originated there, so that double-clicking on it (in Windows) fires
> up Publicon rather than Mathematica (and vice versa for a notebook created
> in Mathematica).
> Publicon DOES support footnotes. You do "Insert Note" followed by "Gather
> Backmatter". The various footnotes (and references) are collected at the end
> of the document as backmatter. If you then "Save As LaTeX" you get a TeX
> file that compiles to give you the expected footnotes.
> To balance out the above positive comments I do have some criticisms. There
> are some Publicon message windows that sit on top of all other windows
> whatever you do to hide them. There are some characters that don't translate
> to LaTeX - e.g. I had to replace \[And] by \[Intersection] to make the
> exported LaTeX work correctly. I found that bold font in equations does not
> survive in the exported LaTeX, so now my vectors look like scalars. My
> habitual use of \[AlignmentMarker] has come home to haunt me because it is
> not translated to the (obvious) box form in LaTeX, so the exported LaTeX
> does not compile correctly. However, all of these problems are either benign
> or else manually fixable.
> Anyway, my overall impression of Publicon is very positive. It has a way to
> go to equal Scientific Word (which has been around for a while now), but the
> basic framework is already there in Publicon, and is very extensible via
> custom style sheets to define your own ways of generating LaTeX for
> instance; this sort of customisation is easy for someone who is already
> familiar with Mathematica's style sheets. I have already used this to create
> custom bibliography styles in the exported LaTeX; it works exactly as
> advertised.
> I hope that Publicon is subsumed into a future release of Mathematica, so
> that Mathematica (Publicon) is analogous to a souped up version of
> Scientific WorkPlace (Scientific Word) - check out
> http://www.sciword.demon.co.uk/ to see what I mean. This would avoid the
> time taken to convert from a Mathematica-authored notebook to something that
> works correctly in Publicon.
> Steve Luttrell
> "Bobby R. Treat" <drbob at bigfoot.com> wrote in message
> news:cgcicp$eo7$1 at smc.vnet.net...
>> This appears to be an elaborate waste of binary bits.
>> Rather than make Mathematica do pagination right (and a few other
>> simple things), they made a new stand-alone LaTex derivative with no
>> computational capability.
>> MUCH of the content I'd likely put into Publicon, if I used it, would
>> originate in Mathematica. But conversion is a one-way street.
>> Note that Publicon doesn't support footnotes; something every word
>> processor does do, and something every technical document needs.
>> On the PLUS side, it's cheap--except in terms of the learning curve.
>> The online tour makes using it look very involved.
>> Bobby
>> newsdesk at wolfram.com (Wolfram Research) wrote in message
> news:<cg20f3$od7$1 at smc.vnet.net>...
>> > Technical Publishing Made Easy with New Wolfram Publicon
>> > Software
>> >
>> > Wolfram Publicon, a powerful new publishing tool based on the
>> > underlying document technology of Mathematica, is now available
>> > to purchase as a download for Windows and Mac OS X.
>> >
>> > Created for the growing number of academic researchers,
>> > students, and industry professionals who need to create
>> > precisely formatted technical documents in XML and other
>> > structured data formats, Publicon incorporates many exciting
>> > features including inline math and chemistry typesetting,
>> > publisher-specific style sheets, and a scrolling WYSIWYG
>> > interface ideal for online presentation.
>> >
>> > With Publicon, users can compose more engaging technical
>> > documents that intuitively incorporate complex scientific
>> > research. Mathematica users will especially appreciate
>> > Publicon's unique ability to understand and identify math. All
>> > Mathematica work, including dynamic 2D and 3D plots, can be
>> > pasted directly into Publicon documents. Publicon will preserve
>> > the mathematical content so the work may be evaluated at any
>> > time in Mathematica.
>> >
>> > Heralded as a "major advance" by Open Access publisher BioMed
>> > Central, Publicon was built to take the guesswork and hassle out
>> > of formatting technical documents for publication. Combining
>> > ease of use with cutting-edge technology, Publicon is the first
>> > choice for composing structured technical documents for
>> > electronic or print publication.
>> >
>> > For more information, please visit:
>> > http://www.wolfram.com/publicon
DrBob at bigfoot.com
Huygens' wavelets
Waves move generally perpendicular to their wavefronts (think of the crest of a water wave), but for wavefronts striking barriers, the resulting motion is not immediately obvious.
A simple non-mathematical way of viewing and understanding wave propagation was introduced by Huygens. Each spot on the wavefront can be considered a source of a semicircular "wavelet". Where all the
wavelets merge is the new wavefront and the source of the next set of wavelets.
The reason goes to the fundamental notion of what a wave is. Our compact expression for a wave is a "self-propagating disturbance." Each point on a wave contains the makings of the future version of
the wave.
Each spot on a wave can be thought of as the source of a new semicircular wave moving forward. The
(possibly curved) line defined by the leading edge of all these semicircular wavelets is the new
wavefront. See the figure below.
If we have a plane wave, and if each point sends out a little semicircle, then moments later their combined
fronts create a new plane wave, just moved slightly forward.
However, if something blocks a portion of the plane wave, then many of the wavelets are blocked, and the surviving wavelets spread into the shadow region behind the barrier.
This is the basis of diffraction.
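The envelope construction is easy to check numerically. In this sketch (my addition, not part of the tutorial), sources sampled along a straight wavefront each emit a wavelet of radius r, and away from the edges the envelope comes out as a straight front advanced by r:

```python
import math

r = 1.0                                      # wavelet radius after one step
sources = [0.05 * i for i in range(201)]     # wavefront sampled along y = 0

def envelope(x):
    # Highest point reachable at horizontal position x by any wavelet.
    best = 0.0
    for s in sources:
        reach = r * r - (x - s) ** 2
        if reach > 0.0:
            best = max(best, math.sqrt(reach))
    return best

# Sample the interior, away from the ends of the finite wavefront:
interior = [envelope(2.025 + 0.1 * i) for i in range(60)]
print(min(interior), max(interior))  # both ≈ 1.0: a plane front moved up by r
```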
News from the Mathematics Department
Pedro Morales, Ph.D. student, coaches IMO medalist
Aug. 25, 2008
Pedro Morales, a Ph.D. student in the department, spent part of his summer teaching back home in Guatemala. He also helped to prepare the International Mathematical Olympiad team from Guatemala
before they departed for the IMO in Madrid, Spain this summer. Esteban Arreaga, one of Guatemala's team members, won a bronze medal in Madrid; this is Guatemala's first medal in the IMO. Way to go
Esteban and Pedro!
right to left vs unary op
sab, 11/06/2011 - 08:03
Deep in another thread you stated in lack of parenthesis assume right to left evaluation
a = -1 + 2 + 3;
Right to left evaluation is equivalent to -(1 + (2 + 3)) = -6
as opposed to (-1) + (2 + 3) = 4
then there is
a = 1 + -2 + 3; ? -4 or 2 ?
IMHO unary operators +and - have to be specified (presumably with higher precedence than arithmetic operators)
Jim Dempsey
sab, 11/06/2011 - 10:39
I think they actually meant Left to Right evaluation (right to left is just too weird) G
sab, 11/06/2011 - 11:59
sab, 11/06/2011 - 13:21
Quoting akki I believe unary operators have been disallowed. See here
I think the same, but many participants include it to their grammars through and through.
sab, 11/06/2011 - 17:04
Quoting dweeberlyloom I think they actually meant Left to Right evaluation (right to left is just too weird) G
What you are assuming they meant and what they (Rama) say seem to be at odds with one another.
Rama stated "Please evaluate expressions from right to left in the RHS of the mathematical expression."
Jim Dempsey
sab, 11/06/2011 - 17:10
Quoting akki I believe unary operators have been disallowed. See here
So it seems we cannot use
var m1 = -1;
and must resort to something like
var m1 = 0 - 1;
I am not objecting to this requirement (or shortcomming). I am objecting to not addressing this in the rules.
The rules could have stated:
No unary operators are supported. To enter in a negative number use an expression without an unary operator.
var x = 0-1;
Jim Dempsey
sab, 11/06/2011 - 18:07
The rules I learned from other threads; they are kind of the rules of rules which cover all the rule instances we asked
dom, 12/06/2011 - 09:13
When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean - neither more nor less.
dom, 12/06/2011 - 21:19
Although general negation via a unary op isn't supported, it would make sense that negative values are, so that integral and floating point literals can have an optional + or - in front of the
number, with no space in between.
seg, 13/06/2011 - 07:27
Quoting mdma Although general negation via a unary op isn't supported, it would make sense that negative values are, so that integral and floating point literals can have an optional + or - in front
of the number, with no space in between.
So then they are supported, at least as a first character in an expression. This would mean -(a+b) would be supported too. As far as I can determine from discussions on this forum, from Intel employees
which I assume are on the rules committee, unary operators are not supported.
If the rules committee wants to say "unary operators not covered in the rules", then it would be fair for them to also say "None of the test files will contain unary operators. Not even as a test for
error in syntax." _and_ to also assert that none exist in their test data. If they do this then anyone can choose to implement or not implement unary operators without risking points.
Jim Dempsey
seg, 13/06/2011 - 13:40
Unary operators are not supported - and I'm not saying otherwise - let's be clear about that. My point was that even though unary operators are disallowed, the minus sign in front of a literal
doesn't have to be a unary operator, but part of the literal definition. In the same way C allows L and d to be appended, or 0x to be prepended to a literal - these are not operators but part of the
literal definition. Whether negative literals are allowed, and any ensuing complexities this brings (such as ambiguity if spaces are not significant) is of course a matter for the judges to decide.
seg, 13/06/2011 - 16:55
Quoting mdma Unary operators are not supported - and I'm not saying otherwise - let's be clear about that. My point was that even though unary operators are disallowed, the minus sign in front of a
literal doesn't have to be a unary operator, but part of the literal definition. In the same way C allows L and d to be appended, or 0x to be prepended to a literal - these are not operators but part
of the literal definition. Whether negative literals are allowed, and any ensuing complexities this brings (such as ambiguity if spaces are not significant) is of course a matter for the judges to
What I'd like to be able to say is that we support at least negative numeric constants, and if that naturally results in the support of unary negation, no big deal so long as the inputs won't
actually test for error or correctness of more general unary negation.
@MDMA: I think that you are recommending something contrary to how languages are built. For example, check out the C syntax (samples here): http://www.csci.csusb.edu/dick/samples/c.syntax.html and http://www.externsoft.ch/download/cpp-iso.html. Languages don't include '-' as part of the numeric literal because it is nearly impossible to process correctly, since the lexer is upstream of the parser and has no idea about its syntax. Consider: var x = -1-2; If the "negative sign is part of number" rule is followed, this will produce the tokens: VAR IDENTIFIER EQUAL NUMBER NUMBER SEMICOLON, which does NOT match any rule -- the middle '-' was absorbed into the second NUMBER. On the other hand, if the '-' is not part of the number, you get: VAR IDENTIFIER EQUAL MINUS NUMBER MINUS NUMBER SEMICOLON. The easiest way to parse this is to simply say that you support unary negation expressions, although you could write a special case for constants instead.
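To make the ambiguity concrete, here is a toy lexer sketch (not from the thread; the token names and the minus_in_literal policy flag are invented for illustration) showing what each policy produces for the input var x = -1-2;:

```python
import re

def lex(src, minus_in_literal=False):
    """Tokenize a tiny C-like snippet. If minus_in_literal is True,
    a '-' directly before digits is absorbed into the NUMBER token,
    as a context-free lexer would do."""
    number = r"-?\d+" if minus_in_literal else r"\d+"
    spec = [("VAR", r"var\b"), ("NUMBER", number),
            ("IDENT", r"[A-Za-z_]\w*"), ("EQUAL", r"="),
            ("MINUS", r"-"), ("SEMI", r";"), ("WS", r"\s+")]
    regex = "|".join(f"(?P<{name}>{pat})" for name, pat in spec)
    return [m.lastgroup for m in re.finditer(regex, src)
            if m.lastgroup != "WS"]

# '-' as its own token: the parser sees two MINUS tokens and can treat
# the first as unary negation and the second as subtraction.
print(lex("var x = -1-2;"))
# '-' absorbed into the literal: the middle '-' vanishes into the second
# NUMBER, producing NUMBER NUMBER, which matches no grammar rule.
print(lex("var x = -1-2;", minus_in_literal=True))
```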
Mon, 13/06/2011 - 17:37
var x = -1-2;
Left to right evaluation: (-1)-2;
Right to left evaluation: -(1-2); (unless unary ops are supported and the unary op has precedence)
Rama stated right to left evaluation (with possibility for ()'s) and no unary support;
We will have to assume the first non-blank character of a numeric value or preceding variable is never -.
SR derived solely from one postulate
I can only use inertial observers as stated in the postulates in order to measure the speed of light the same in all inertial reference frames and then to perform the mathematics accordingly.
But "inertial observers" doesn't necessarily imply the Lorentz transformation unless you assume
postulates of SR. Inertial observers as defined in Newtonian physics all observe the same laws of physics (first postulate satisfied), and all see each other traveling at constant velocity, but
there's no invariant speed postulate and their coordinates transform according to the Galilei transformation. Likewise, I showed that if you just wanted to satisfy the second postulate but not the
first, you could have a family of coordinate systems that all see light moving at c, and that all see each other traveling at constant velocity, but where the coordinates transform according to a
different transformation. If you're going to go mucking about with the postulates, you can't start out assuming that the phrase "inertial observer" will mean exactly the same thing as it does in SR, with different frames related by the Lorentz transformation; that'd just be circular reasoning rather than an actual "derivation".
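A quick numerical illustration of the distinction (not from the thread; these are the standard Galilean and relativistic velocity-transformation formulas, with illustrative numbers): under the Galilean rule a light signal does not keep speed c in a moving frame, while under the Lorentz rule it does:

```python
c = 1.0  # work in units where the speed of light is 1

def galilei(u, v):
    """Velocity u as seen from a frame moving at v (Galilean rule)."""
    return u - v

def lorentz(u, v):
    """Velocity u as seen from a frame moving at v (relativistic rule)."""
    return (u - v) / (1 - u * v / c**2)

v = 0.5  # relative frame velocity, half the speed of light
print(galilei(c, v))  # 0.5 -- light no longer moves at c
print(lorentz(c, v))  # 1.0 -- light still moves at c
```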
Okay, well if one messes with the distance coordinization in order to make the ticking working out the same
It's the time coordinate that determines the rate of ticking, not the distance coordinate.
in the reality of the non-inertial observer
Again, "the reality of the non-inertial observer" is meaningless since there is no single way to construct a coordinate system where a non-inertial observer is at rest. You have to talk about
coordinate systems, not "observers".
but then one would have to design a different coordinate system for accelerating away from the rest frame as accelerating back, otherwise how would something like the twin paradox work out?
No, you'd have a single non-inertial coordinate system, not two different ones for different parts of the trip. Since time dilation doesn't work the same way in non-inertial coordinate systems as it
does in inertial ones, there's no problem getting the twin paradox to work out, at some point the inertial twin would just have his clock ticking faster relative to coordinate time than the
non-inertial one.
This section of the twin paradox FAQ features a diagram showing what lines of simultaneity could look like in a single non-inertial coordinate system (drawn relative to the space and time axis of the inertial frame where the inertial twin is at rest):
You can see that during the phase where the non-inertial twin "Stella" is accelerating, the clock of the inertial twin "Terence" will elapse much more time than hers. Lines of constant position in
this non-inertial system aren't drawn in, you could draw them any way you like (including curved lines so that Stella could be at a constant position throughout her trip) and have a valid
non-inertial system.
In any case, I am only considering inertial observers with each having the same coordinate system to keep things simple.
Right, I edited my post since my last reply, sorry. The trailing observer having a greater acceleration must be what I was thinking. Anyway, I can now see that what is occurring with the light catching up to the accelerating observer can all be worked out from the frame of an inertial observer, as you said.
And you understand how in Rindler coordinates, any given clock in that family of accelerating clocks can be ticking at a constant rate relative to coordinate time, and occupying a fixed coordinate
Surface Area of Hemisphere - Mathematics
Surface Area of Hemisphere
You must know that a hemisphere is a half of a sphere. Below are the formulas to find the curved surface area of a hemisphere (without the base circle area) and the total surface area of a hemisphere
(with the base circle area).
Curved Surface Area of Hemisphere:
You must know that the surface area of a sphere is 4πr². Since a hemisphere is half of a sphere, the curved surface area of a hemisphere is the surface area of a sphere divided by 2. That is, 4πr²/2, which gives the following expression:

Curved surface area of hemisphere = 2πr²

The above is the surface area of a hemisphere without taking into consideration the base circle below it.
Total Surface Area of Hemisphere
If you’re to take into consideration the base circle below the curve, meaning the total surface area, the surface area would be a combination of the above area and the area of the circle. As simple as that! The area of the circle is πr². This means the total surface area is 2πr² + πr². Simplifying this gives 3πr², as expressed below:

Total surface area of hemisphere = 3πr²
Remember there is a difference between the curved surface area and the total surface area. The total surface area includes the base circle below the hemisphere and the curved surface area does not; it’s just the curve of the hemisphere.
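Both formulas are easy to check with a short script (a sketch, not part of the original article; the radius value is arbitrary):

```python
import math

def curved_surface_area(r):
    """Curved surface of a hemisphere: half of a sphere's 4*pi*r^2."""
    return 2 * math.pi * r**2

def total_surface_area(r):
    """Curved surface plus the base circle pi*r^2, i.e. 3*pi*r^2."""
    return curved_surface_area(r) + math.pi * r**2

r = 5.0
print(curved_surface_area(r))  # 2*pi*25, about 157.08
print(total_surface_area(r))   # 3*pi*25, about 235.62
```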
2 Responses
1. Donald Olukoya says:
Really helpful and nice work.
2. natasha says:
how do you find a curved surface area when height is given in the question…?????
Summary: INT94JV: The Beauty of Mathematics Freshman seminar
Thursdays, 2-2:50pm
Santa Rosa
Instructor: Daryl Cooper
Office Hours: MWF 12-1 or by appointment.
This is a 1 unit class which is graded Pass/No-Pass.
To get a Pass grade you should attend regularly. If you miss a class email me to explain
why. There is no homework. There are no exams. There is reading assigned which you may
choose to do or not. You may if you wish write a paper. You are strongly encouraged to ask questions.
Some topics we will cover:
various sizes of infinity
fourth dimension
curved space
how can we know some things are impossible, now and for ever?
what is mathematical proof and why is it important?
irrational numbers and music
how many prime numbers are there?
and many other things.
I can lend you a copy of books 1 or 2, but you must pay a deposit which will be returned in
Irving, TX Prealgebra Tutor
Find a Irving, TX Prealgebra Tutor
...During my years as a graduate student at Duke University, I tutored undergraduates and students in the MBA program. I am certified in Secondary Math in the State of Texas and have taught
Algebra I in the Irving ISD. Topics usually covered in Algebra I include number systems, properties of real ...
82 Subjects: including prealgebra, English, chemistry, calculus
...I have a BA in Mathematics from Rutgers University, and completed my certification through Rider University. I currently teach 8th grade mathematics at a Charter school. I love teaching and
helping others learn and fall in love with math.
5 Subjects: including prealgebra, algebra 1, elementary (k-6th), study skills
...I've done well in all of my college courses, including straight A's in 2 years of math. I also want to point out that as a Philosophy major I spent many hours studying, understanding and analyzing how people think. Matched with a Psychology minor, I really have a lot of knowledge about how the brain functions as well as how that affects people as learners.
17 Subjects: including prealgebra, reading, chemistry, geometry
...I was valedictorian of my 1300-person high school class. I will share my secrets for success and my love of math so your child will succeed. I enjoy teaching math to young people and am
certified in Texas for mathematics (grades 4-8). In college I tutored gifted third graders, but I am most com...
15 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...Drew specializes in tutoring math and science. He is, however, equally as helpful in language studies. He is fluent in Spanish, and conversational in French.
37 Subjects: including prealgebra, Spanish, reading, chemistry
Related Irving, TX Tutors
Irving, TX Accounting Tutors
Irving, TX ACT Tutors
Irving, TX Algebra Tutors
Irving, TX Algebra 2 Tutors
Irving, TX Calculus Tutors
Irving, TX Geometry Tutors
Irving, TX Math Tutors
Irving, TX Prealgebra Tutors
Irving, TX Precalculus Tutors
Irving, TX SAT Tutors
Irving, TX SAT Math Tutors
Irving, TX Science Tutors
Irving, TX Statistics Tutors
Irving, TX Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Arlington, TX prealgebra Tutors
Carrollton, TX prealgebra Tutors
Dallas prealgebra Tutors
Euless prealgebra Tutors
Farmers Branch, TX prealgebra Tutors
Grand Prairie prealgebra Tutors
Grapevine, TX prealgebra Tutors
Highland Park, TX prealgebra Tutors
Keller, TX prealgebra Tutors
Lewisville, TX prealgebra Tutors
N Richland Hills, TX prealgebra Tutors
N Richlnd Hls, TX prealgebra Tutors
North Richland Hills prealgebra Tutors
Plano, TX prealgebra Tutors
Richardson prealgebra Tutors
Reply to comment
Submitted by Anonymous on May 20, 2011.
To find the root of the complex number a+ib, you find the arccotangent below 90 degrees of the complex ratio a/b, and then divide it by the root required. The cotangent of the result gives one of the, say, 3 roots of the complex ratio. For the other two roots you add 360 degrees and then 720 degrees on to the arccotangent. This procedure applies for all integer roots of complex numbers, so that there are always n nth roots of a complex number.
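The closing fact, that there are always n nth roots, can be sketched in code. This example (mine, not the commenter's; it uses the standard modulus/argument form of de Moivre's theorem rather than the cotangent procedure above) computes all n roots:

```python
import cmath

def nth_roots(z, n):
    """All n nth roots of a complex number z, via de Moivre:
    take the real nth root of the modulus, and divide the argument
    (plus k full turns, k = 0..n-1) by n."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(complex(8, 0), 3)  # the three cube roots of 8
for w in roots:
    print(w, w ** 3)  # each root cubed gives back (approximately) 8
```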
The Draw & Questions - Stupid Questions League
Hull FC vs Bradford BullsThe Draw
This took place at Hullgal Towers, with Hullgal's #1 child picking the home teams & Hullgal's #3 child picking the away teams, this was watched by an independent witness of Captain Jack the Cat
Ladies & Gentleman I give you the draw:
Chairman M v Der Kaiser
Millward is a Gurner v Its Pie Time
Billy Bunter v Futtocks
The Alchemist v Mark from Yick
Exiled Townie v Kol
St Ash v Amber Avenger
Lee v Paulspud
Yorkshire Pie v Glos Exile
Jill Halfpenny Fan v Zoe
and the questions
1. What will the first penalty be given for:
2. How many goals will be kicked in the first half (by both teams):
3. Who will be the first player to score (either try or goal):
4. How many points will have been scored after 20 mins:
5. Which team will kick off in the second half:
6. Will there be a sin binning:
7. What colour will the referees shirt be:
8. What will be the total number of points scored (by both teams) at the end:
9. Who will be the Man of the Match:
10. What will the attendance be:
• 5 Points if you get the exact attendance
• 4 Points if you are 100 out
• 3 Points if you are 200 out
• 2 Points if you are 300 out
• 1 point if you are 400 out
Tie Breaker: Adding together all the squad numbers of the players that are on the field at the start of the first half, what will the total be:
Virtual Math Girl
Working with Coordinate Planes
Mathematics topics explored include four quadrant grids, coordinate planes, the distance formula for calculating distance on a grid, equivalent fractions, decimals, percentages, common denominators in addition and subtraction of fractions, improper fractions and mixed numbers.

The student videos introduce the mathematics academic content standards and underlying ideas in practical applications of the concepts specifically for 5th grade students.

The Teacher’s Edition videos provide instructions for presenting the mathematical concepts, activities, assessments, and recommended topics for further discussion with the students. Printable activity sheets and printable assessments are available for the Teacher’s Edition of each video.
Working with Coordinate Planes
The purpose of Virtual Math Girl Module 1 (Student) and 2 (Teacher) is to familiarize students with the four quadrant grid, or coordinate plane. A discussion of the distance formula for calculating
length on a grid is included as an activity for extended learning.
Student Video- 4:32 min.
Virtual Math Girl guides a student through understanding graphs by introducing single quadrant grids and four quadrant grids, plotting ordered pairs and calculating distance.
Teacher Video - 6:25 min.
The instructor in this teacher-centered video outlines directions for presenting the unit to students and makes suggestions for further reinforcement activities. Information given includes
identifying ordered pairs, graphing ordered pairs, and calculating distances using the Distance Formula.
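As a companion to the Distance Formula activity mentioned above, here is a short sketch (not part of the module; the example points are chosen to form a 3-4-5 right triangle) that computes the distance between two plotted points:

```python
import math

def distance(p, q):
    """Distance between points p=(x1,y1) and q=(x2,y2) on a grid:
    sqrt((x2-x1)^2 + (y2-y1)^2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# legs of length 3 and 4, so the distance is 5.0
print(distance((1, 2), (4, 6)))
```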
FOM: axioms of infinity
V. Yu. Shavrukov vys1 at mcs.le.ac.uk
Thu Jul 20 04:13:58 EDT 2000
This is to point out that the conjecture
>any finitely axiomatizable theory with only infinite models must
>interpret one of a small finite number of such theories
appearing in a posting of Stephen Simpson is untrue under the interpretation
of "to interpret" as the usual relative interpretability as found in e.g.
Shoenfield's book, as well as under a few (but probably not all)
generalizations thereof.
A quote from Simpson:
>Define an *axiom of infinity* to be a consistent sentence of
>first-order predicate calculus which has no finite model. Let AxInf
>be the set of axioms of infinity. It follows from Trakhtenbrot's
>Theorem (or perhaps, a refinement of it) that AxInf is productive in
>the sense of Post. [...]
(1) Any sentence interpreting an axiom of infinity is either inconsistent
or is itself an axiom of infinity.
(2) Suppose the conjecture were true with S the finite small set
of interpretability-minimal axioms of infinity.
Then in view of (1)
{sentences interpreting some element of S} =
= {axioms of infinity plus inconsistent sentences}.
(3) By Trakhtenbrot, the r.h.s. of (2) cannot be r.e. as it
complements the set {sentences true in some finite model},
which is a member of an inseparable r.e. pair.
(4) The l.h.s. of (2) on the other hand is r.e. because
all you need to interpret a single sentence is a translation
and a finite number of proofs.
The Math behind DSF Synthesis
Discrete Summation Formulae (DSF) Synthesis, which has been used in all Moppelsynths so far to achieve maximum signal quality, is a synthesis technique that computes a bandlimited, harmonic signal.
The generated timbres are quite similar to those of FM synthesis, yet DSF Synthesis doesn't require oversampling and filtering since it can compute directly only the desired harmonics below the
Nyquist frequency. Owing to this, the generated signal is of high quality, free of aliasing artefacts and phase distortion.
However, the quality comes at the price of a bit more CPU-intensity and not-so-trivial theory and implementation. Seeing how people get all orgasmic about VST synths with squashed sound with a reverb
on top, you may want to ponder if it's worth the effort.
This article describes how to derive the formulas of DSF Synthesis. At the same time, it explains a DSF variant that has been used in the Tetra and Sonitarium synths and that is more general than the
one proposed by Moorer in that it generates an additional series of cosines to obtain a quadrature signal, a stereo signal whose harmonics are pairwise out of phase by 90 degrees. In between, a
simple introduction to sine and cosine calculation by complex numbers is provided.
The Objective
To generate harmonic sound, a synthesis technique must generate a series of sinusoids such that the frequency difference between two neighboring sines is everywhere the same. In the digital sound synthesis domain, DSF Synthesis attempts to achieve that in a straightforward manner by fast calculations of formulas of the form

s(t) = ∑[k=0..N] w^k sin(2π(f[c] + k f[m]) t / sampletime),
• s(t) is the sample to be output at sample step t
• f[c] is the fundamental frequency
• f[m] is the "distance frequency" between the sine waves (hence called harmonics)
• w^k is the magnitude of the k-th harmonic (ie. for w<1, the magnitudes of the higher harmonics are getting smaller and smaller)
• t is the number of the sample step (with." t / sampletime" being the time in seconds at which sample t is output)
• N+1 is the total number of partials (the fundamental plus N harmonics)
To get a more visual impression of what the signal s(t) is like, let's describe its spectrum:
There are N+1 partials, the fundamental is f[c], the distance between two subsequent harmonics is f[m], and the harmonics' magnitudes fall off by factor w. On a decibel scale, the partials fall off linearly.
Spectrum of a DSF-generated waveform with N = 8, f[c] = 200, f[m] = 50, and w = 0.7.
In order to avoid aliasing, the maximum frequency of the sine waves must be smaller than the Nyquist frequency, which is half the sample rate (eg. 22050Hz when you're sampling at 44100Hz). So N should be chosen small enough such that f[c] + N f[m] is smaller than the Nyquist frequency. Once that is provided, you could calculate samples by that formula to get a harmonic signal that is free of aliasing artefacts.
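For instance, the largest admissible N follows directly from the constraint f[c] + N·f[m] < samplerate/2. A small sketch (mine, not the article's; the numbers are arbitrary):

```python
import math

def max_harmonics(fc, fm, sample_rate):
    """Largest N with fc + N*fm strictly below the Nyquist frequency."""
    nyquist = sample_rate / 2.0
    if fc >= nyquist:
        return -1  # even the fundamental would alias
    n = int(math.floor((nyquist - fc) / fm))
    # step back if we landed exactly on (or past) Nyquist
    while fc + n * fm >= nyquist:
        n -= 1
    return n

# with fc=200 and fm=50 at 44100 Hz, fc + N*fm must stay below 22050 Hz
print(max_harmonics(200.0, 50.0, 44100))
```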
Computing all those sines would be quite slow, though, and DSF Synthesis fixes that by computing an equivalent formula that doesn't make it necessary to sum up all the sines. This means it aims for
nothing less than generating a bandlimited signal without adding all sines separately (unlike Additive Synthesis), and without having to oversample and filtering out the harmonics above the Nyquist
frequency (unlike Frequency- or Phase-Modulation Synthesis have to do).
So, how does DSF Synthesis do that?
Sines and Cosines by Phasor Rotations
In order to understand the math behind DSF synthesis, we first have to examine what sines and cosines are all about. Suppose we have an arrow rotating counterclockwise around the center of the Cartesian coordinate system in a plane. If that so-called phasor has length A and rotates by an angle of w per second, then after t seconds the x-coordinate of the tip of the phasor is A*cos(w*t) and the y-coordinate is A*sin(w*t):
Not really spectacular, right? But this already gives us a hint that we can compute sines and cosines quickly once we have such a phasor. And this is where complex numbers come into play.
Sines and Cosines by Complex Number Multiplications
In fact, such phasors can be described conveniently by complex numbers such that a rotation can be performed with a couple of simple arithmetic operations. Complex numbers can be viewed as ordered
pairs [a,b] or as points in the plane for which addition, subtraction, multiplication and division are defined so we can calculate with them just like with real numbers. The x-coordinate of a complex
number is commonly called its real part and its y-coordinate is called its imaginary part. To denote the real and imaginary parts of a complex number, we define:
RE([a,b]) = a and IM([a,b]) = b
Addition and subtraction of two complex numbers are simply defined like this:
[a1,b1] + [a2,b2] = [a1+a2, b1+b2]
[a1,b1] - [a2,b2] = [a1-a2, b1-b2]
The multiplication is defined like this:
[a1,b1] * [a2,b2] = [a1*a2 - b1*b2, a1*b2 + b1*a2]
The division is somewhat more complicated and defined like this:
[a1,b1] / [a2,b2] = [(a1*a2 + b1*b2) / (a2*a2 + b2*b2), (b1*a2 - a1*b2) / (a2*a2 + b2*b2)]
Notice that the real numbers can be viewed as the subset of the complex numbers in that they can be represented by complex numbers with imaginary component 0:
a1 + a2 = [a1,0] + [a2,0] = [a1+a2,0]
a1 * a2 = [a1,0] * [a2,0] = [a1*a2 - 0*0, a1*0 + 0*a2] = [a1*a2,0]
x * [a,b] = [x,0] * [a,b] = [x*a, x*b].
Also notice that every complex number can be viewed as an arrow and vice versa. That is, for each complex number [a1,b1] there exists an A and an angle w such that:
[a1,b1] = A * [cos(w),sin(w)],
where A is the length of the arrow described by [a1,b1].
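These pairwise rules are exactly what built-in complex types implement; a quick sketch (mine, not the article's) checks the multiplication and division definitions against Python's complex type:

```python
def cmul(a1, b1, a2, b2):
    """[a1,b1] * [a2,b2] per the definition above."""
    return (a1*a2 - b1*b2, a1*b2 + b1*a2)

def cdiv(a1, b1, a2, b2):
    """[a1,b1] / [a2,b2] per the definition above."""
    d = a2*a2 + b2*b2
    return ((a1*a2 + b1*b2) / d, (b1*a2 - a1*b2) / d)

# compare against Python's built-in complex arithmetic
z1, z2 = complex(3, 4), complex(1, -2)
print(cmul(3, 4, 1, -2), z1 * z2)
print(cdiv(3, 4, 1, -2), z1 / z2)
```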
One feature of complex numbers that is particularly valuable for sound synthesis is that you can use them easily to implement rotating phasors by complex number multiplications, because it can be
shown that
(A*[cos(w),sin(w)]) * (B* [cos(g),sin(g)])
= A*B * [cos(w+g),sin(w+g)]
That means, we can rotate a complex number by the angle w by multiplying it with the complex number [cos(w),sin(w)].
From this observation, we can already derive a simple oscillator that quickly generates both a sine and a cosine wave. For each sample, we just have to perform a complex multiplication of a phasor with a suitable complex number of the form [cos(w),sin(w)]. Every now and then, we have to normalize the phasor, ie. reset it to length 1, because otherwise, due to the roundoff errors introduced by the arithmetic operations, it will not keep the length 1 but degenerate. When using double precision floating point variables, this normalization needs to be done only occasionally, eg. every few thousands of samples.
#include <math.h>

// f is the frequency; sampleRate, out1 and out2 (the output buffers)
// are assumed to be declared elsewhere.
double angle = 2.0 * M_PI * f / sampleRate;
double phaseAdd_re = cos(angle);
double phaseAdd_im = sin(angle);
double phase_re = 1.0;
double phase_im = 0.0;

void cmul(double a1, double b1, double a2, double b2, double* a3, double* b3)
{
    double t_re = a1*a2 - b1*b2;
    double t_im = a1*b2 + b1*a2;
    *a3 = t_re;
    *b3 = t_im;
}

void subprocess(int samples)
{
    int i;
    for (i = 0; i < samples; i++)
    {
        out1[i] = phase_re; // cos wave
        out2[i] = phase_im; // sin wave
        // rotate the phasor by the per-sample angle
        cmul(phase_re, phase_im, phaseAdd_re, phaseAdd_im, &phase_re, &phase_im);
    }
    // normalize the phasor to length 1 to counter roundoff drift
    double t = 1.0 / sqrt(phase_re*phase_re + phase_im*phase_im);
    phase_re = t * phase_re;
    phase_im = t * phase_im;
}
This is just to get a rough idea how complex number arithmetic can be used for synthesis. But complex numbers can do much more: they can help us compute the DSF formula.
Deriving the Classic DSF Formula
We will now examine how to rewrite the target formula
s(t) = ∑[k=0..N] w^k sin(2π(f[c] + k f[m]) t / sampletime)
with the help of complex numbers such that we get a closed-form expression, ie. the sum will vanish (and by that, there will be no need to sum up all the sines when computing the samples).
In fact, we can calculate s(t) by calculating the following complex formula:
c(t) = ∑[k=0..N ]a * b^k,
• a = [cos(2π f[c] t / sampletime), sin(2π f[c] t / sampletime)]
• b = [w*cos(2π f[m] t / sampletime), w*sin(2π f[m] t / sampletime)]
The samples specified by c(t) are complex numbers. Or, to put it another way, c(t) doesn't specify a series of floating point numbers like s(t) but rather a series of ordered pairs of floating point numbers.
Anyway, s(t) can be easily obtained from c(t) since it is true that
s(t) = IM(c(t)),
which means that we can obtain s(t) by picking the second number from each sample couple generated by c(t). To see that s(t) = IM(c(t)), please observe that
c(t) = ∑[k=0..N] a * b^k
= ∑[k=0..N] [cos(2π f[c] t / sampletime), sin(2π f[c] t / sampletime)] * [w*cos(2π f[m] t / sampletime), w*sin(2π f[m] t / sampletime)]^k
= ∑[k=0..N] [cos(2π f[c] t / sampletime), sin(2π f[c] t / sampletime)] * [w^k *cos(k * 2π f[m] t / sampletime), w^k *sin(k * 2π f[m] t / sampletime)]
= ∑[k=0..N] [w^k cos(2π(f[c] + k f[m]) t / sampletime), w^k sin(2π(f[c] + k f[m]) t / sampletime)]
= [∑[k=0..N] w^k cos(2π(f[c] + k f[m]) t / sampletime), ∑[k=0..N] w^k sin(2π(f[c] + k f[m]) t / sampletime)]
So we can get s(t) by computing c(t) and picking its imaginary part.
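This identity is easy to check numerically. The following sketch (not from the article; parameter values are arbitrary) compares the direct weighted-sine sum with the imaginary part of the complex sum ∑ a·b^k:

```python
import cmath
import math

# arbitrary illustrative values for u, v, the weight w, and N
u, v, w, N = 0.37, 0.21, 0.7, 8

a = cmath.exp(1j * u)      # the complex number [cos(u), sin(u)]
b = w * cmath.exp(1j * v)  # the complex number [w*cos(v), w*sin(v)]

direct = sum(w**k * math.sin(u + k * v) for k in range(N + 1))
via_complex = sum(a * b**k for k in range(N + 1)).imag

print(direct, via_complex)  # both print the same value up to rounding
```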
c(t), again, resembles a prominent geometric series and can be simplified on complex numbers just like it's done on real numbers:
c(t) = ∑[k=0..N] a * b^k = a * (1 - b^(N+1)) / (1 - b)
Woosh, the sum is gone! So we can simplify s(t) like this:
= IM( c(t) )
= IM( a * (1 - b^(N+1)) / (1 - b) )
which is, since IM(x*y) = RE(x)*IM(y)+IM(x)*RE(y) (this follows from the definition of complex multiplication)
= RE(a)*IM((1 - b^(N+1)) / (1 - b)) + IM(a)*RE((1 - b^(N+1)) / (1 - b))
which is, since RE(x/y) = (RE(x)*RE(y)+IM(x)*IM(y)) / (RE(y)*RE(y)+IM(y)*IM(y)) and IM(x/y) = (IM(x)*RE(y)-RE(x)*IM(y)) / (RE(y)*RE(y)+IM(y)*IM(y))
= ( RE(a)*(IM(1-b^(N+1))*RE(1-b) - RE(1-b^(N+1))*IM(1-b))
+ IM(a)*(RE(1-b^(N+1))*RE(1-b) + IM(1-b^(N+1))*IM(1-b)) )
/ (RE(1-b)*RE(1-b) + IM(1-b)*IM(1-b))
= ( cos(u)*((-w^(N+1)*sin((N+1)*v))*(1-w*cos(v)) + (1-w^(N+1)*cos((N+1)*v))*w*sin(v))
+ sin(u)*((1-w^(N+1)*cos((N+1)*v))*(1-w*cos(v)) + w^(N+1)*sin((N+1)*v)*w*sin(v)) )
/ ((1-w*cos(v))^2 + (w*sin(v))^2)
where u = 2π f[c] t / sampletime and v = 2π f[m] t / sampletime. Applying further transformations by trigonometric formulas, this becomes after a while:
= { ( w*sin(v-u) + sin(u) ) + w^(N+1) * (w*sin(u + N*v) - sin(u + (N+1)*v)) } / ( 1 + w^2 - 2*w*cos(v) )
That's it! In summary, we have just derived the classic DSF formula proposed by Moorer, which is
│ Classic DSF Synthesis │
│ │
│ s(t) = ∑[k=0..N] w^k sin(u + k*v) │
│ │
│ = { ( w*sin(v-u) + sin(u) ) + w^(N+1) * (w*sin(u + N*v) - sin(u + (N+1)*v)) } / ( 1 + w^2 - 2*w*cos(v) ) │
│ │
│ where u = 2π f[c] t / sampletime and v = 2π f[m] t / sampletime. │
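The boxed result is easy to verify numerically. This sketch (mine, not the article's; parameter values are arbitrary) compares the brute-force sum of sines with the closed form:

```python
import math

def dsf_sum(u, v, w, N):
    """Brute-force sum of the weighted sines."""
    return sum(w**k * math.sin(u + k * v) for k in range(N + 1))

def dsf_closed(u, v, w, N):
    """Moorer's closed-form expression from the box above."""
    num = (w * math.sin(v - u) + math.sin(u)
           + w**(N + 1) * (w * math.sin(u + N * v)
                           - math.sin(u + (N + 1) * v)))
    den = 1 + w * w - 2 * w * math.cos(v)
    return num / den

u, v, w, N = 1.1, 0.4, 0.6, 12
print(dsf_sum(u, v, w, N), dsf_closed(u, v, w, N))  # identical up to rounding
```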
Classic DSF Synthesis vs. Complex DSF Synthesis
Actually, we have just derived two methods to perform DSF Synthesis. On the one hand, we can use the classic DSF formula to compute
s(t) = ∑[k=0..N] w^k sin(2π(f[c] + k f[m]) t / sampletime).
On the other hand, why not compute the complex formula c(t) instead of s(t), using complex arithmetic? After all, computing the complex c(t) samples will give us two samples with each sample step,
which can be used for stereo output. As shown before, we have
c(t) = ∑[k=0..N] [cos(u), sin(u)] * [w*cos(v), w*sin(v)]^k
= [∑[k=0..N] w^k cos(u + k*v), ∑[k=0..N] w^k sin(u + k*v)]
where u = 2π f[c] t / sampletime and v = 2π f[m] t / sampletime.
So computing c(t) gives us a complex signal, the imaginary part of which is the sum of sines like it is obtained by classic DSF Synthesis, and the real part of which is a sum of cosines. Sending one
signal to the left stereo channel and the other to the right channel, we get a quadrature stereo signal, ie. the harmonics of the signal are pairwise out of phase by 90 degrees (since cos(x) = sin
(90° - x)).
│ Complex DSF Synthesis │
│ │
│ c(t) = [∑[k=0..N] w^k cos(u + k*v), ∑[k=0..N] w^k sin(u + k*v)] │
│ │
│ = [cos(u), sin(u)] * (1 - [w*cos(v), w*sin(v)]^(N+1)) / (1 - [w*cos(v), w*sin(v)]) │
│ │
│ where u = 2π f[c] t / sampletime and v = 2π f[m] t / sampletime. │
This is the synthesis technique used in Tetra and Sonitarium. The computation can be made much faster if the expensive sin and cos functions are not called during each sample step but only when a new key is pressed. At that time, phasors can be initialized by using the standard sin and cos functions, and after that, rotating phasors can be used for fast sine and cosine calculation, as has been described before for the simple oscillator example. This optimization is used in Sonitarium.
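A rough Python sketch of that optimization (my reconstruction of the idea, not Sonitarium's actual code; the class and parameter names are invented): sin and cos are called only at note-on, and each subsequent sample costs a few complex multiplications plus one complex division:

```python
import cmath

class ComplexDSFOsc:
    """Complex DSF oscillator driven by rotating phasors.
    tick() returns one (cos-channel, sin-channel) quadrature pair.
    Assumes 0 < w < 1."""

    def __init__(self, fc, fm, w, N, sample_rate):
        self.w, self.N = w, N
        self.a = complex(1.0, 0.0)  # carrier phasor, length 1
        self.b = complex(w, 0.0)    # modulator phasor, length w
        # sin/cos are evaluated only here, at note-on:
        self.rot_a = cmath.exp(2j * cmath.pi * fc / sample_rate)
        self.rot_b = cmath.exp(2j * cmath.pi * fm / sample_rate)
        # normalization: divide by the sum of the harmonic weights
        self.gain = (1.0 - w) / (1.0 - w ** (N + 1))

    def tick(self):
        # closed-form complex DSF: a * (1 - b^(N+1)) / (1 - b)
        c = self.a * (1 - self.b ** (self.N + 1)) / (1 - self.b)
        # advance both phasors by one sample step
        self.a *= self.rot_a
        self.b *= self.rot_b
        return (c.real * self.gain, c.imag * self.gain)

    def renormalize(self):
        # call every few thousand samples to undo roundoff drift
        self.a /= abs(self.a)
        self.b *= self.w / abs(self.b)

osc = ComplexDSFOsc(fc=200.0, fm=50.0, w=0.7, N=8, sample_rate=44100)
left, right = osc.tick()  # first sample pair of the quadrature signal
```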
By the way, when using the formulas presented so far, the harmonics will fall off to the right side of the fundamental. If you want them to fall off to the left side, just let the complex result of the expression "(1 - [w*cos(v), w*sin(v)]^(N+1)) / (1 - [w*cos(v), w*sin(v)])" be computed as usual, then flip the sign of its imaginary part. This way you will get another complex signal by a simple sign flip. You can then mix it with the first signal or whatever.
One issue not mentioned yet is the normalization of the signal. After all, the signal would get much too loud with all those sines and cosines summed up. A simple estimation that has proven useful for both classic and complex DSF synthesis is that the volume of the computed signal is at most
∑[k=0..N] w^k, which is equal to (1 - w^(N+1))/(1 - w), where w is the weight factor of the harmonics. Divide each sample by that value, and you're done.
Thanks for reading,
Burkhard (moppelsynths@verklagekasper.de)
James A. Moorer, "The Synthesis of Complex Audio Spectra by Means of Discrete Summation Formulas", http://www.jamminpower.com/main/articles.html, 1976.
Tim Stilson, Julius Smith, "Alias-Free Digital Synthesis of Classic Analog Waveforms", http://ccrma.stanford.edu/~stilti/papers, 1996. | {"url":"http://www.verklagekasper.de/synths/dsfsynthesis/dsfsynthesis.html","timestamp":"2014-04-16T16:23:07Z","content_type":null,"content_length":"24620","record_id":"<urn:uuid:3d5ee736-d9c8-4a4e-a406-0f50748ebe49>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help with statistics please?
Steve Asked: Need help with statistics please?
1.) A study of college football games shows that the number of holding penalties assessed has a mean of 2.3 penalties per game and a standard deviation of 0.8 penalties per game. What is the
probability that, for a sample of 40 college games to be played next week, the mean number of holding penalties will be 2.5 penalties per game or more?
Carry your intermediate computations to at least four decimal places. Round your answer to at least three decimal places.
2.)Polychlorinated biphenyl (PCB) is among a group of organic pollutants found in a variety of products, such as coolants, insulating materials, and lubricants in electrical equipment. Disposal of
items containing less than 50 parts per million (ppm) PCB is generally not regulated. A certain kind of small capacitor contains PCB with a mean of 47.8 ppm and a standard deviation of 8 ppm. The
Environmental Protection Agency takes a random sample of 39 of these small capacitors, planning to regulate the disposal of such capacitors if the sample mean amount of PCB is 49.5 ppm or more. Find
the probability that the disposal of such capacitors will be regulated.
Carry your intermediate computations to at least four decimal places. Round your answer to at least three decimal places.
3.)The mean salary offered to students who are graduating from Coastal State University this year is $24,275, with a standard deviation of $3,678. A random sample of 85 Coastal State students
graduating this year has been selected. What is the probability that the mean salary offer for these 85 students is $20,000 or more?
Carry your intermediate computations to at least four decimal places. Round your answer to at least three decimal places.
4.)The producer of a weight-loss pill advertises that people who use the pill lose, after one week, an average (mean) of 1.75 pounds with a standard deviation of 1.05 pounds. In a recent study, a
group of 45 people who used this pill were interviewed. The study revealed that these people lost a mean of 1.7 pounds after one week. If the producer's claim is correct, what is the probability that
the mean weight loss after one week on this pill for a random sample of 45 individuals will be 1.7 pounds or more?
5.)The lifetime of a certain brand of electric light bulb is known to have a standard deviation of 47 hours. Suppose that a random sample of 80 bulbs of this brand has a mean lifetime of 491 hours.
Find a 90% confidence interval for the true mean lifetime of all light bulbs of this brand. What is the lower limit of the 90% confidence interval? & What is the upper limit of the 90% confidence interval?
6.) A union of restaurant and foodservice workers would like to estimate the mean hourly wage (the population mean) of foodservice workers in the U.S. The union will choose a random sample of wages and
then estimate the population mean using the mean of the sample. What is the minimum sample size needed in order for the union to be 95% confident that its estimate is within $0.40 of the population mean?
Suppose that the standard deviation of wages of foodservice workers in the U.S. is about $2.15.
Carry your intermediate computations to at least three decimal places. Write your answer as a whole number (and make sure that it is the minimum whole number that satisfies the requirements).
7.) Many college graduates who are employed full-time have longer than 40-hour work weeks. Suppose that we wish to estimate the mean number of hours (the population mean) worked per week by college
graduates employed full-time. We'll choose a random sample of college graduates employed full-time and use the mean of this sample to estimate population mean. Assuming that the standard deviation of
the number of hours worked by college graduates is 6.00 hours per week, what is the minimum sample size needed in order for us to be 95% confident that our estimate is within 1.5 hours per week of
population mean?
Carry your intermediate computations to at least three decimal places. Write your answer as a whole number (and make sure that it is the minimum whole number that satisfies the requirements).
8.)A corporation that maintains a large fleet of company cars for the use of its sales staff is interested in the mean distance driven monthly per sales person. The following table gives the monthly
distances in miles driven by a random sample of 12 sales persons: 2595, 2201, 1888, 2384, 2626, 2546, 2449, 1941, 2458, 1956, 2658, 2398
Based on this sample, find a 90% confidence interval for the mean number of miles driven monthly by members of the sales staff, assuming that monthly driving distances are normally distributed.
Calculate the upper and lower limits of the confidence interval.
Math Answered:
No one will help you if you just put up all your homework and then leave. Also, the work to point ratio is way off.
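For anyone landing here later: questions 1-4 are all the same central-limit-theorem computation, z = (x̄ − μ) / (σ/√n), and questions 6-7 invert it to get a sample size. A rough sketch in Python (function names are mine; z = 1.96 is the usual 95% table value — check your own course's tables):

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_sample_mean_at_least(x, mu, sigma, n):
    """P(sample mean >= x) when the sample mean is approximately
    normal with mean mu and standard error sigma / sqrt(n)."""
    z = (x - mu) / (sigma / math.sqrt(n))
    return 1.0 - phi(z)

def min_sample_size(sigma, margin, z=1.96):
    """Smallest n with z * sigma / sqrt(n) <= margin."""
    return math.ceil((z * sigma / margin) ** 2)

# Question 1: mu = 2.3, sigma = 0.8, n = 40, threshold 2.5
print(round(prob_sample_mean_at_least(2.5, 2.3, 0.8, 40), 3))  # about 0.057
# Question 6: sigma = 2.15, margin = $0.40
print(min_sample_size(2.15, 0.40))  # 111
```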
Leave a Reply Cancel reply | {"url":"http://helplosingweightatlast.com/need-help-with-statistics-please/","timestamp":"2014-04-21T07:03:52Z","content_type":null,"content_length":"28533","record_id":"<urn:uuid:dbb561af-bdf0-4d60-849a-75fa8b6281a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate - the etymology comes from the Latin word for pebble (calculus). In Roman times, man-powered taxis would have a bingo-like device that rolled with the axle of the wheel, depositing a pebble for each
rotation into a basket below. At the end of the ride, the driver/runner would calculate the pebbles that got dropped into the basket, thus figuring out the distance, and the cost, of the ride. | {"url":"http://everything2.com/title/Calculate","timestamp":"2014-04-17T07:17:10Z","content_type":null,"content_length":"24919","record_id":"<urn:uuid:ca4f6ca9-41f8-4a0f-b256-8ebbbcd062be>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast linearized Bregman iteration for compressive sensing and sparse denoising
Results 1 - 10 of 46
, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex
relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Cited by 192 (12 self)
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex
relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem).
Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and
easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X k, Y k}
and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y k. There are two remarkable features making this attractive for low-rank matrix completion
problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X k} is empirically nondecreasing. Both these facts allow the
algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On
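For readers skimming this listing: the soft-thresholding step the abstract refers to is compact enough to sketch in NumPy (this is only the per-iteration shrinkage operator, not the full SVT algorithm):

```python
import numpy as np

def svd_soft_threshold(Y, tau):
    """Shrink the singular values of Y by tau: the proximal operator of the
    nuclear norm, which is the core step of singular value thresholding (SVT)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold each singular value
    return U @ np.diag(s_shrunk) @ Vt

Y = np.diag([5.0, 1.0, 0.2])
X = svd_soft_threshold(Y, 1.0)
# singular values 5, 1, 0.2 become 4, 0, 0 -> the result has rank 1
```

Thresholding zeroes out small singular values, which is why the iterates in the algorithm tend to stay low-rank.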
, 2009
"... Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed
sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order ..."
Cited by 71 (1 self)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed
sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper
introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of
sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving
large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to
many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters.
Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply
the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and
- SIAM J. Imaging Sci , 2008
"... Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ Rⁿ}, which is used in compressed sensing. Our methods are based on Bregman
iterative regularization, and they give a very accurate solution after solving only a very small number o ..."
Cited by 59 (13 self)
Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ Rⁿ}, which is used in compressed sensing. Our methods are based on Bregman
iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈Rⁿ} μ‖u‖₁ + (1/2)‖Au − f^k‖₂² for given matrix A
and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six
iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^⊤ can be computed by fast
transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of
compressed sensing problems on a standard PC.
, 2010
"... In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the
dimension of the model to be estimated. However in many practical situations of interest, models are constr ..."
Cited by 38 (10 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the
dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient
dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse
problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse
vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others including sums of a few permutations matrices (e.g., ranked elections,
multiobject tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming
formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial
- In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles , 2009
"... Abstract. This paper studies algorithms for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust
version of classical PCA, and arises in a number of application domains, including image processing, web data r ..."
Cited by 33 (6 self)
Abstract. This paper studies algorithms for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of
classical PCA, and arises in a number of application domains, including image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad
conditions, it can be exactly solved via a convex programming surrogate that combines nuclear norm minimization and ℓ1-norm minimization. This paper develops and compares two complementary approaches
for solving this convex program. The first is an accelerated proximal gradient algorithm directly applied to the primal; while the second is a gradient algorithm applied to the dual problem. Both are
several orders of magnitude faster than the previous state-of-the-art algorithm for this problem, which was based on iterative thresholding. Simulations demonstrate the performance improvement that
can be obtained via these two algorithms, and clarify their relative merits.
, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach
works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Cited by 31 (2 self)
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as
follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is
its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖W x‖1 where W is
arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size,
and some new results showing that the smooth and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient
algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that
one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied with a software release. This software is
not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Keywords. Optimal first-order methods,
Nesterov’s accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclearnorm minimization.
, 2009
"... Abstract. In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing,
including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constr ..."
Cited by 23 (2 self)
Abstract. In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing,
including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived
from either the primal or the dual forms of the ℓ1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an ℓ1-problem into one having partially separable
objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms
can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or
restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and
robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to
appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the ℓ1-norm fidelity should be the fidelity of choice in compressive sensing. Key words. Sparse solution
recovery, compressive sensing, ℓ1-minimization, primal, dual, alternating direction method
, 2008
"... Abstract. One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u∈Rⁿ} {‖u‖₁ : Au = f}. Bregman iteration was very successfully used to solve this problem in [40].
Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which ..."
Cited by 21 (7 self)
Abstract. One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u∈Rⁿ} {‖u‖₁ : Au = f}. Bregman iteration was very successfully used to solve this problem in [40]. Also, a
simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which is described in detail with numerical simulations in [35]. A convergence analysis of the smoothed
version of this algorithm was given in [11]. The purpose of this paper is to prove that the linearized Bregman iteration proposed in [40] for the basis pursuit problem indeed converges.
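A minimal sketch of the linearized Bregman iteration discussed above (the matrix, data and parameters are toy values chosen so the iteration converges quickly; `shrink` is the usual soft-thresholding operator):

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding: the proximal operator of mu * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, f, mu=1.0, delta=1.0, iters=200):
    """v^{k+1} = v^k + A^T (f - A u^k),  u^{k+1} = delta * shrink(v^{k+1}, mu)."""
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)
        u = delta * shrink(v, mu)
    return u

# Toy system with orthonormal rows, so delta = 1 is a safe step size here.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
u = linearized_bregman(A, np.array([3.0, 0.0]))
# converges to the sparsest solution [3, 0, 0] of Au = f
```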
- SIAM Journal on Scientific Computing , 2010
"... Abstract. We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x∈Rⁿ} μ‖x‖₁ + ‖Ax − b‖₂² for recovering sparse solutions to an underdetermined system of linear
equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a ..."
Cited by 21 (7 self)
Abstract. We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x∈Rⁿ} μ‖x‖₁ + ‖Ax − b‖₂² for recovering sparse solutions to an underdetermined system of linear equations
Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called "shrinkage" yields an estimate of the subset of components of
x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ℓ1-norm ‖x‖1 to a linear function of x.
The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC AS embeds this basic two-stage algorithm in a
continuation (homotopy) approach by assigning a decreasing sequence of values to µ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse
signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.
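The first-stage "shrinkage" method is plain iterative soft-thresholding (ISTA); a toy sketch in which A = I makes the fixed point easy to verify by hand (the paper's active-set second stage is not reproduced here):

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding operator used in the first (shrinkage) stage."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, mu, step, iters=100):
    """Iterative shrinkage for min mu*||x||_1 + 0.5*||Ax - b||_2^2."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(x - step * A.T @ (A @ x - b), step * mu)
    return x

A = np.eye(2)
x = ista(A, np.array([3.0, 0.5]), mu=1.0, step=1.0)
# x == [2, 0]: the nonzero pattern is the estimated active set for stage two
```

With A = I and step 1, a single shrinkage step lands on shrink(b, mu) = [2, 0], which is already the exact minimizer; the nonzero index is the support that the second (smooth, restricted) stage would then work on.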
- SIAM J. Imaging Sci
"... Abstract. This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman
method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit ..."
Cited by 19 (5 self)
Abstract. This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method
has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smooth parameter α is greater than a certain value. The analysis is based on
showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of
gradient-based optimization techniques such as line search, Barzilai–Borwein, limited memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov’s methods. In the numerical simulations, the two
proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of
the linearized Bregman method with a so-called kicking technique. Key words. Bregman, linearized Bregman, compressed sensing, ℓ1-minimization, basis pursuit | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=9020352","timestamp":"2014-04-18T00:42:08Z","content_type":null,"content_length":"43319","record_id":"<urn:uuid:218edc65-e7f9-4d9b-a198-59c8283f258a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00652-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applied Stochastic Models in Business and Industry
On allocating redundancies to k-out-of-n reliability systems
Arc length asymptotics for multivariate time series
Optimal (r,N)-policy for discrete-time Geo∕G∕1 queue with different input rate and setup time
Efficient performance evaluation of the generalized Shiryaev–Roberts detection procedure in a multi-cyclic setup
A condition-based maintenance policy for deteriorating units. An application to the cylinder liners of marine engine
Hydrological scenario reduction for stochastic optimization in hydrothermal power systems
Local risk-minimization with longevity bonds
Some aspects of stationary characteristics and optimal control of the BMAP∕G−G∕1∕N(∞) oscillating queueing system
Multivariate conditional hazard rate functions – an overview
Optimal bi-level Stackelberg strategies for supply chain financing with both capital-constrained buyers and sellers
Reliability of demand-based warm standby systems subject to fault level coverage
A condition-based imperfect replacement policy for a periodically inspected system with two dependent wear indicators
Autoregressive model for a finite random sequence on the unit circle for investigating the fluctuations of residual stresses in the rims of new railroad wheels
A multiscale correction to the Black–Scholes formula
Detecting and interpreting clusters of economic activity in rural areas using scan statistic and LISA under a unified framework
Statistical learning for variable annuity policyholder withdrawal behavior
Goal achieving probabilities of cone-constrained mean-variance portfolios
Diagnosing and modeling extra-binomial variation for time-dependent counts
Default risk analysis via a discrete-time cure rate model
Pricing a stochastic car value depreciation deal
A simpler proof of a result by Chiu [Appl. Stochastic Models Bus. Ind. 2008; 24:203–219]
On a compound Poisson risk model with dependence and in the presence of a constant dividend barrier
Predicting bank loan recovery rates with a mixed continuous-discrete model
Diagnostics in Birnbaum–Saunders accelerated life models with an application to fatigue data
Sequential smoothing for turning point detection with application to financial decisions
Bayesian Analysis of Abandonment in Call Center Operations
Absolute ruin in the compound Poisson model with credit and debit interests and liquid reserves
Dividends in finite time horizon
Robust pair-copula based forecasts of realized volatility
Estimation and monitoring of traffic intensities with application to control of stochastic systems
Some results for repairable systems with minimal repairs
The primary aim of this paper is to expose the use and the value of spatial statistical analysis in business, and especially in designing economic policies for rural areas. Specifically, we aim to
present, under a unified framework, the use of both point- and area-based methods in order to analyze economic data in depth and to draw conclusions by interpreting the analysis results. The
motivating problem is related to the establishment of women-run enterprises in a rural area of Greece. Moreover, in this article the spatial scan statistic is successfully applied to the spatial
economic data at hand in order to detect possible clusters of small women-run enterprises in a rural, mountainous and disadvantaged region of Greece. It is then combined with a Geographical
Information System based Local Indicator of Spatial Autocorrelation scan statistic for further exploring and interpreting the spatial patterns. The rejection of the random establishment of women-run
enterprises and the interpretation of the clustering patterns are deemed necessary in order to assist government in designing policies for rural development. Copyright ©
2014 John Wiley & Sons, Ltd. | {"url":"http://onlinelibrary.wiley.com/rss/journal/10.1002/(ISSN)1526-4025","timestamp":"2014-04-20T09:42:41Z","content_type":null,"content_length":"119908","record_id":"<urn:uuid:bdab524a-2eee-43e2-a8ba-7e6fe833dfc5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westside, GA Statistics Tutor
Find a Westside, GA Statistics Tutor
...It is a must for those following the sciences - but can intimidate if the core concepts aren't fully mastered. Getting a solid grounding in core concepts is the key to turning Algebra 2 from a
scary snake into an inoffensive (but very useful) worm. Calculus is where math really brings years of ...
32 Subjects: including statistics, reading, calculus, physics
...I have been using AutoCAD for five years now during my architectural internship while getting my master's in architecture and while working for an interior designer. During my three years in
school, not only did I have a specific course in AutoCad, but I also had to produce drawings that took an...
48 Subjects: including statistics, English, reading, calculus
A current University of Miami student, I have always enjoyed both learning and teaching. While I was a student at West Forsyth High, I formally and informally tutored many of my peers, ranging
from casual homework help to teaching classes to students who were not attending public school. Here at UM, I volunteer at the Chemistry Resource Center, tutoring in some of my free time.
28 Subjects: including statistics, chemistry, calculus, geometry
...Graduated with a BA in Economics from BYU and and an MBA in Finance from the College of William and Mary. I love helping students understand Economics! Tutored fellow MBA students in Finance.
28 Subjects: including statistics, calculus, GRE, physics
...I can offer assistance in C programming at a beginner's level. Topics include data types, logic and control statements, arrays, strings, pointers, structures, file I/O, command line arguments,
and recursion. I graduated with my minor in Computer Science from Embry-Riddle Aeronautical University.
24 Subjects: including statistics, calculus, algebra 2, algebra 1
Related Westside, GA Tutors
Westside, GA Accounting Tutors
Westside, GA ACT Tutors
Westside, GA Algebra Tutors
Westside, GA Algebra 2 Tutors
Westside, GA Calculus Tutors
Westside, GA Geometry Tutors
Westside, GA Math Tutors
Westside, GA Prealgebra Tutors
Westside, GA Precalculus Tutors
Westside, GA SAT Tutors
Westside, GA SAT Math Tutors
Westside, GA Science Tutors
Westside, GA Statistics Tutors
Westside, GA Trigonometry Tutors
Nearby Cities With statistics Tutor
Barrett Parkway, GA statistics Tutors
Chatt Hills, GA statistics Tutors
Embry Hls, GA statistics Tutors
Fort Gillem, GA statistics Tutors
Fry, GA statistics Tutors
Gainesville, GA statistics Tutors
Green Way, GA statistics Tutors
Madison, SC statistics Tutors
Marble Hill, GA statistics Tutors
North Metro statistics Tutors
Penfield, GA statistics Tutors
Penfld, GA statistics Tutors
Philomath, GA statistics Tutors
Rockbridge, GA statistics Tutors
White Stone, GA statistics Tutors | {"url":"http://www.purplemath.com/Westside_GA_statistics_tutors.php","timestamp":"2014-04-18T11:14:19Z","content_type":null,"content_length":"24229","record_id":"<urn:uuid:4a488509-7157-4842-a8d3-d310fec8649f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Velocity Reviews - Need help deriving a conversion function on python
johnnyukpo@gmail.com 03-07-2013 08:25 PM
Need help deriving a conversion function on python
Good day,
I have a computer programming assignment. I am completely lost and i need some help.
These are the questions that are confusing me
(a) Write a function which converts from gallons to cups2
(b) Now we’d like to be able to convert from cups to milliliters, since metric measurements
are easier to convert between. Implement a function which does this.
(c) We’d also like to be able to easily convert gallons into milliliters,so implement a
function which does this
Rick Johnson 03-07-2013 10:15 PM
Re: Need help deriving a conversion function on python
On Thursday, March 7, 2013 2:25:42 PM UTC-6, johnn...@gmail.com wrote:
> I have a computer programming assignment. I am completely
> lost and i need some help. These are the questions that
> are confusing me
> (a) Write a function which converts from gallons to cups2
How can we help you if we have no idea of what level of programming experience you have attained? Can you write functions? Can you declare variables? What about mathematics? If you can already do all
these things, then make an attempt to accomplish problem "(a)" and then ask us a specific question when you run into trouble.
> (b) Now we’d like to be able to convert from cups to
> milliliters, since metric measurements are easier to
> convert between. Implement a function which does this.
> (c) We’d also like to be able to easily convert gallons
> into milliliters, so implement a function which does this
In due time, grasshopper. First make an attempt to solve problem "(a)". Chris gave some great advice; however, I would like to insert a "step 0" into his "advice list" (this is python after all ;-).
* STEP 0: Learn how to convert from gallons to cups using a
pencil and paper only.
* STEP 1: Read docs and learn about functions (if applicable).
* STEP 2: Try to write a python function that will take one
parameter (representing the number of gallons) and then
perform the mathematical equations required to deduce the
number of cups in a gallon, THEN return the result.
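Putting those steps together, a minimal sketch of all three parts might look like this (a sketch only, assuming US customary units: 16 cups per gallon and about 236.588 ml per US cup; the function names are my own):

```python
# Conversion factors (assuming US customary units; UK cups differ).
CUPS_PER_GALLON = 16
ML_PER_CUP = 236.588  # approximate millilitres in one US cup

def gallons_to_cups(gallons):
    """(a) Convert gallons to cups."""
    return gallons * CUPS_PER_GALLON

def cups_to_ml(cups):
    """(b) Convert cups to millilitres."""
    return cups * ML_PER_CUP

def gallons_to_ml(gallons):
    """(c) Convert gallons to millilitres by composing (a) and (b)."""
    return cups_to_ml(gallons_to_cups(gallons))

print(gallons_to_cups(2))  # 32
print(gallons_to_ml(1))    # 3785.408
```

Note that (c) is just the composition of (a) and (b), which is presumably the point of the exercise.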
Dave Angel 03-08-2013 12:07 AM
Re: Need help deriving a conversion function on python
On 03/07/2013 03:40 PM, Chris Angelico wrote:
> On Fri, Mar 8, 2013 at 7:25 AM, <johnnyukpo@gmail.com> wrote:
>> Good day,
>> I have a computer programming assignment. I am completely lost and i need some help.
>> These are the questions that are confusing me
> By the way, you may
> find the python-tutor list more suitable, if you feel your questions
> are particularly basic:
Actually his brother 'akuma upko' (sharing the same account) posted the
same question on Python-tutor 2 minutes before johnny posted it here.
| {"url":"http://www.velocityreviews.com/forums/printthread.php?t=958475","timestamp":"2014-04-23T23:03:42Z","content_type":null,"content_length":"7676","record_id":"<urn:uuid:01aecb2d-094c-41c5-b75b-30a33e77bc6f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest Park, GA Calculus Tutor
Find a Forest Park, GA Calculus Tutor
...Try me and you will never regret! I have just graduated from Georgia Tech with a degree in nuclear and radiological engineering, I have been tutoring people from all over the place in this topic
since 2009, and I am well qualified for general physics, from Kinematics to modern physics even nuclear p...
10 Subjects: including calculus, physics, algebra 1, ASVAB
...Thanks.) My name is Anthon and I'm excited to have the opportunity to be your or your child's tutor. I have tutored for more than 10 years and have helped hundreds of students improve their
test scores and grades and I have been rewarded as a top 100 WyzAnt tutor nationwide (50,000+ tutors). ...
19 Subjects: including calculus, physics, geometry, GRE
Hi, my name is Alex. I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level: pre
algebra, algebra, trigonometry, geometry, pre calculus, and calculus. In high school, I took and excelled at all of the listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams.
16 Subjects: including calculus, geometry, algebra 1, algebra 2
...Results from these tests save time and, for that matter, the cost of tutoring on the student's part. I have a BSc. Math, MSc.
30 Subjects: including calculus, chemistry, physics, geometry
...I have been teaching classes for about 15 years and tutoring for about 18 years. Yes, I started pretty young. So I still remember how it is to be a student and struggle in math classes.
20 Subjects: including calculus, statistics, algebra 1, algebra 2
| {"url":"http://www.purplemath.com/Forest_Park_GA_calculus_tutors.php","timestamp":"2014-04-20T10:53:00Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:79cbacec-7cff-4e6c-8b2a-b148e726a056>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Stratified Sampling
October 21st 2013, 04:01 PM #1
Sep 2013
Stratified Sampling
A city has 90000 dwellings: 35000 are homes, 45000 are apartments, 10,000 are condos.
i) You know the mean electricity use is roughly twice as much for homes as
for apartments or condos, and the standard deviation is proportional
to the mean so $S_1 = 2S_2 = 2S_2$. How would you distribute a stratified sample
of 900 observations if you wanted to approximate the mean electricity usage
for all homes in the city?
ii)Now imagine that you take a stratified random sample with proportional allocation
and want to estimate the overall proportion of households in which energy
conservation is practiced. If 45% of homes, 25% of apartment,
and 3% of condos practice conserving energy, what is p for the
population? What gain would the stratified sample with proportional allocation
offer over an SRS, that is, what is $\frac{V_{prop}(\hat{p}_{str})}{V_{SRS}(\hat{p}_{SRS})}$?
i. Not sure how to allocate the sample.
ii. I found p = .303333; however, I am unable to solve for $s^2 = \frac{n}{n-1}\,p(1-p)$. This is necessary to complete the problem.
Re: Stratified Sampling
Sorry, should read $S_1 = 2S_2 = 2S_3$
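For what it's worth, here is a quick sketch of both parts (my own, not from the thread; it assumes the allocation intended in (i) is Neyman/optimal allocation, $n_h \propto N_h S_h$, using the corrected relation $S_1 = 2S_2 = 2S_3$, which is an assumption about what the course wants):

```python
# Strata: (size N_h, relative std dev S_h, proportion practicing conservation)
strata = {
    "homes":  (35000, 2.0, 0.45),
    "apts":   (45000, 1.0, 0.25),
    "condos": (10000, 1.0, 0.03),
}
n = 900

# (i) Neyman (optimal) allocation: n_h proportional to N_h * S_h.
total_NS = sum(N * S for N, S, _ in strata.values())
alloc = {h: round(n * N * S / total_NS) for h, (N, S, _) in strata.items()}
print(alloc)  # {'homes': 504, 'apts': 324, 'condos': 72}

# (ii) Overall proportion p: the stratum-size-weighted average.
N_total = sum(N for N, _, _ in strata.values())
p = sum(N * pk for N, _, pk in strata.values()) / N_total
print(round(p, 6))  # 0.303333
```

The value of p agrees with the .303333 found above; the variance ratio in (ii) would then be built from the stratum variances $p_h(1-p_h)$.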
| {"url":"http://mathhelpforum.com/advanced-statistics/223308-stratified-sampling.html","timestamp":"2014-04-16T16:12:49Z","content_type":null,"content_length":"33269","record_id":"<urn:uuid:c0781e12-f4be-4002-92e4-11692047cccc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Series Solutions Near a Regular Singular Point
March 17th 2009, 03:45 PM #1
Feb 2009
Series Solutions Near a Regular Singular Point
I need to find the series solution at the point x=0 for the equation 2xy''+y'+xy=0. I have already found the point to be regular singular and the roots to be 0 and 1/2. I need to solve the
equation using the root 1/2.
Let’s write the equation in the form $\dots$
$y'' + \frac {y'}{2x} + \frac {y}{2}=0$ (1)
That is a linear DE and its general solution is in the form $\dots$
$y(x) = c_{1}\cdot \lambda (x) + c_{2}\cdot \gamma (x)$ (2)
$\dots$ where $\lambda(*)$ and $\gamma(*)$ are two linearly independent solutions of the (1) and $c_{1}, c_{2}$ two constants. Now we will research a solution of (1) analytic in $x=0$, so that
it can be written as $\dots$
$\lambda (x) = \sum_{n=0}^{\infty} a_{n}\cdot x^{n}$(3)
The way for finding the $a_{n}$ is to substitute the (3) into the (1) obtaining $\dots$
$\sum_{n=0}^{\infty} a_{n}\cdot x^{n}= -2\cdot \sum_{n=2}^{\infty} n\cdot (n-1)\cdot a_{n}\cdot x^{n-2} - \sum_{n=1}^{\infty} n\cdot a_{n}\cdot x^{n-2}$ (4)
The term $a_{0}$ is practically the constant $c_{1}$ in (2). Observing the (4) it is easy to see that $a_{1}=0$ and the same holds for all the $a_{n}$ of odd index. For the $a_{n}$ of even
index we have $\dots$
$a_{0}= -2\cdot 3\cdot a_{2} \rightarrow a_{2}=-\frac {a_{0}}{2\cdot 3}$
$a_{2}= -2\cdot 2\cdot 7\cdot a_{4} \rightarrow a_{4}=-\frac {a_{2}}{2\cdot 2\cdot 7}$
$a_{4}= -2\cdot 3\cdot 11\cdot a_{6} \rightarrow a_{6}=-\frac {a_{4}}{2\cdot 3\cdot 11}$
$a_{2k}= -2\cdot (k+1)\cdot (4k+3)\cdot a_{2(k+1)} \rightarrow a_{2(k+1)}=-\frac {a_{2k}}{2\cdot (k+1)\cdot (4k+3)}$ (5)
The recursive relations (5) permit us to arrive to the Taylor expansion around $x=0$ of the analytic solutions of (1) $\dots$
$\lambda (x)= 1+\sum_{k=1}^{\infty} (-1)^ {k}\cdot \frac {x^{2k}}{2^{k}\cdot k!\cdot 3\cdot 7\dots (4k-1)}$ (6)
That is about the analytic solutions of the form $c_{1}\cdot \lambda(x)$. For the non analytic solutions of the form $c_{2}\cdot \gamma (x)$ the problem is a little more difficult…
Kind regards
In the previous post the following DE has been ‘attacked’ $\dots$
$y'' + \frac {y'}{2x} + \frac {y}{2}=0$ (1)
$\dots$ the general solution of which is $\dots$
$y(x) = c_{1}\cdot \lambda (x) + c_{2}\cdot \gamma (x)$ (2)
For the ‘analytic’ solution $\lambda(*)$ the following series expansion has been found $\dots$
$\lambda (x) = 1+\sum_{k=1}^{\infty} (-1)^ {k}\cdot \frac {x^{2k}}{2^{k}\cdot k!\cdot 3\cdot 7\dots (4k-1)}$ (3)
… and now we will try to arrive to the ‘non analytic’ solution $\gamma (*)$. Since $\lambda (*)$ and $\gamma (*)$ are both solutions of (1) it is $\dots$
$\lambda '' + \frac {\lambda '}{2x} + \frac {\lambda}{2}=0$
$\gamma '' + \frac {\gamma '}{2x} + \frac {\gamma}{2}=0$
Multiplying the first equation by $\gamma (*)$, the second by $\lambda (*)$ and making the difference we obtain $\dots$
$\lambda '' \cdot \gamma - \gamma '' \cdot \lambda + \frac {\lambda ' \cdot \gamma - \gamma ' \cdot \lambda}{2x}=0$ (4)
$\dots$ and setting $\dots$
$\phi (x)= \lambda ' \cdot \gamma - \gamma ' \cdot \lambda$ (5)
$\dots$ the (4) becomes $\dots$
$\phi ' + \frac {\phi}{2x}=0$ (6)
The (6) is a linear DE of first order whose solution is relatively easy to find $\dots$
$\phi (x)= \frac {c}{\sqrt{x}}$ (7)
Substituting the (7) into (5) and setting $c=1$ without losing anything we obtain $\dots$
$\sqrt{x} \cdot \lambda ' \cdot \gamma - \sqrt{x} \cdot \lambda \cdot \gamma '=1$ (8)
In a subsequent post we will perform the next steps...
Kind regards
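As a numerical cross-check (my own sketch, not part of the original posts; the variable names are mine), the coefficients produced by the recursion (5) can be compared with the closed form (6), and the truncated series tested against the original equation 2xy'' + y' + xy = 0:

```python
from math import factorial

K = 12  # number of even-index coefficients beyond a_0

# a[k] holds a_{2k}; recursion (5): a_{2(k+1)} = -a_{2k} / (2(k+1)(4k+3)).
a = [1.0]  # a_0 = 1 (the constant c_1 absorbed)
for k in range(K):
    a.append(-a[-1] / (2 * (k + 1) * (4 * k + 3)))

# Closed form (6): a_{2k} = (-1)^k / (2^k * k! * 3*7*...*(4k-1)).
def closed_form(k):
    prod = 1
    for j in range(1, k + 1):
        prod *= 4 * j - 1
    return (-1) ** k / (2 ** k * factorial(k) * prod)

assert all(abs(a[k] - closed_form(k)) < 1e-15 for k in range(K + 1))

# Exact d-th derivative of the truncated polynomial sum_k a_{2k} x^{2k}.
def lam(x, d=0):
    total = 0.0
    for k, ak in enumerate(a):
        p = 2 * k
        if p < d:
            continue  # differentiated away
        coef = ak
        for i in range(d):
            coef *= p - i
        total += coef * x ** (p - d)
    return total

# Residual of the original equation 2x y'' + y' + x y = 0 at x = 0.5:
x = 0.5
residual = 2 * x * lam(x, 2) + lam(x, 1) + x * lam(x, 0)
print(abs(residual) < 1e-10)  # True (only the tiny truncation tail survives)
```

The recursion and the closed form agree to machine precision, and the residual is essentially zero, as expected.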
| {"url":"http://mathhelpforum.com/calculus/79204-series-solutions-near-regular-singular-point.html","timestamp":"2014-04-18T20:54:06Z","content_type":null,"content_length":"54307","record_id":"<urn:uuid:463e2c38-6f61-4efd-8921-e2f07c6daf39>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
Houston Texans Message Board & Forum - TexansTalk.com - View Single Post - Texans 0-3
OK, two predominant complaints keep coming up in this thread and others: (1) the Texans are throwing too much to DD and (2) the Texans aren't trying down field enough. Hmmm, any validity?
Let's look at the number of receptions for some top tier RB's:
Tomlinson 100
McAllister 69
Holmes 74
Green 50
Taylor 48
Not seeing where the Texans are wacky out of line here. Plus, DD is averaging 10.3 yards per reception. Last time I checked that will usually be a 1st down. I'll take 10 yard plays all day long.
Now how about looking at the guys ahead of Carr on yardage so far this year and see about all that going down field:
Testeverde 11 plays over 20 yards
Bulger 7
Brady 11
Carr 7 over 20 and 1 over 40--plus the 40+ yd non-reception reception by AJ last weekend.
Some other QB's:
Pennington 7 over 20 and 3 over 40
Harrington 6
Hassellbeck 4
Favre 3 over 20 and 1 over 40
Green 4 over 20 and 1 over 40
McNair 1
Plummer 6 over 20 and 1 over 40
and Mr. Air Attack himself
Manning 7 over 20 and 4 over 40
Sorry folks, just not seeing where the Texans are taking way fewer shots down field than other teams.
The Art of War | {"url":"http://www.texanstalk.com/forums/showpost.php?p=32016&postcount=12","timestamp":"2014-04-24T14:48:12Z","content_type":null,"content_length":"16367","record_id":"<urn:uuid:6c2e39ca-b8b3-4b75-9fda-0f875f1e5a57>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
ANN: data-fin
data-fin 0.1.0
The data-fin package offers the family of totally ordered finite sets, implemented as newtypes of Integer, etc. Thus, you get all the joys of:
data Nat = Zero | Succ !Nat
data Fin :: Nat -> * where
FZero :: (n::Nat) -> Fin (Succ n)
FSucc :: (n::Nat) -> Fin n -> Fin (Succ n)
But with the efficiency of native types instead of unary encodings.
I wrote this package for a linear algebra system I've been working on, but it should also be useful for folks working on Agda, Idris, etc, who want something more efficient to compile down to in
Haskell. The package is still highly experimental, and I welcome any and all feedback.
Note that we implement type-level numbers using [1] and [2], which works fairly well, but not as nicely as true dependent types since we can't express certain typeclass entailments. Once the
constraint solver for type-level natural numbers becomes available, we'll switch over to using that.
[1] Oleg Kiselyov and Chung-chieh Shan. (2007) Lightweight static resources: Sexy types for embedded and systems programming. Proc. Trends in Functional Programming. New York, 2–4 April 2007.
[2] Oleg Kiselyov and Chung-chieh Shan. (2004) Implicit configurations: or, type classes reflect the values of types. Proc. ACM SIGPLAN 2004 workshop on Haskell. Snowbird, Utah, USA, 22 September
2004. pp.33–44.
(n::Nat) is redundant
Date: 2013-07-22 08:00 am (UTC)
From: heisenbug.myopenid.com
`n` should suffice here as kind inference readily fills the ::Nat part in! | {"url":"http://winterkoninkje.dreamwidth.org/85357.html","timestamp":"2014-04-16T19:06:30Z","content_type":null,"content_length":"37312","record_id":"<urn:uuid:44d0493a-c7eb-4ce8-9890-fe0f8ef70b4c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Higher Euler characteristics (possible generalizations)
Let $X$ be projective and Gorenstein (over $\mathbb{C}$), of dimension $n$, then $\chi(\mathcal{O}_X)=(-1)^n\chi(w_X)$. Hence a "generalization": $\chi(w^{\otimes k}_X)$.
I'd like something of this sort for the topological Euler characteristic. For example, suppose $X$ is smooth, so $\chi(X)=c_n(T_X)$. We could consider $c_n(T^{\otimes k}_X)$. More generally, let $\lambda$ be a Young tableau (symmetrization pattern), then we can consider $c_n(T^{[\lambda]}_X)$. In a similar way, starting from $\chi(X)=\sum(-1)^{p+q}h^p(\Omega^q_X)$ one could suggest $\sum(-1)^{p+q}h^p((\Omega^q_X)^{[\lambda]})$.
I'd like the generalized Euler characteristic to be still defined on a broad class of topological spaces. (Or at least for any quasi-projective variety.) So, the suggestions above only give a
motivating idea. Also, I'd like the generalized E.char. to be additive (at least for algebraic stratifications).
Is there something known in this direction?
ag.algebraic-geometry at.algebraic-topology euler-characteristics
1 Answer
We can associate to any $\mathbb{C}$-scheme $X$ in a canonical way a constructible function $\nu_{X}:X\rightarrow \mathbb{Z}$, which takes care of the singularities of the space $X$. This is proved in this paper Donaldson-Thomas type invariants via microlocal geometry. We can then define the weighted Euler characteristic of $X$ by $$ \chi(X,\nu_{X})=\sum_{n\in\mathbb{Z}}n\chi(\nu_{X}^{-1}(n)), $$ where $\chi$ is the topological Euler characteristic. The RHS is actually a finite sum and this is well-defined. The constructible function $\nu_{X}$ is quite mysterious and I don't think much is known about it. We know for example that $\nu_{X}(p)=(-1)^{\dim_{p}X}$ when $p\in X$ is a smooth point. So, when $X$ is smooth, we have $$ \chi(X,\nu_{X})=(-1)^{\dim X}\chi(X). $$ Another good situation is probably when $X$ can be written as the critical locus of some function. In this case we can use topological techniques (such as Milnor number) to compute the function $\nu_{X}$.
Thanks, but I do not see how it helps to my question. This $\chi(X,\nu_X)$ is some other generalization, not what I meant – Dmitry Kerner Aug 1 '12 at 12:33
I don't know what you meant by "generalized Euler characteristic", but my example is defined for any $\mathbb{C}$-schemes and satisfies some stratification property (with a bit care). –
Atsushi Kanazawa Nov 30 '12 at 0:13
| {"url":"http://mathoverflow.net/questions/103495/higher-euler-characteristics-possible-generalizations?sort=votes","timestamp":"2014-04-21T12:59:33Z","content_type":null,"content_length":"53588","record_id":"<urn:uuid:34df2e96-a7e7-40d7-8420-ccd17c15898f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on A Neighborhood of Infinity: How to Divide by Three

David R. MacIver (2006-07-01): That paper does go on a bit, doesn't it? I'm vaguely interested in the result, but I really can't be bothered to dig through the waffle to hunt down the actual bits where they prove things... :)

sigfpe (2006-06-22): Actually, it's true for any finite set, not just 3. But 2 is easier and the method needs to be varied a bit for larger finite sets.

Kenny (2006-06-22): I seem to recall reading on the Foundations of Mathematics e-mail list that this is possible for three but not for two, or something equally crazy. (Of course, it's trivial as long as at least one of A and B is known to be finite.)
| {"url":"http://blog.sigfpe.com/feeds/115092762818413045/comments/default","timestamp":"2014-04-16T08:08:30Z","content_type":null,"content_length":"6877","record_id":"<urn:uuid:991186ba-3f27-4b4a-b972-ad5d37f5c487>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
PDGA rating question
So basically with the standard deviation, you get rewarded for being more consistent in your rounds? Also, when I am trying to figure out my standard deviation in Excel, do I include the round that is subject to being dropped when calculating it, or not?
From there, I multiply the number that comes up by 2.5 and if my round is more rating points away than that it does not count. Is that correct?
Also, when figuring this out, do you use my previous updates rating or how does that work? Because right now my SD is only 20.49 which means I could shoot a round 51.225 worse than my rating and it
would drop?
Re: PDGA rating question
YNY. Current rounds not previous.
You're only going to get close due to rounding within the ratings process, which is done using ratings on a per-hole basis in the database, not the ratings you see displayed.
Re: PDGA rating question
So basically with the standard deviation, you get rewarded for being more consistent in your rounds?
Yes, the more rounds you have around the same rating, the lower your standard deviation will be. If every one of superplayer's rounds was between 1000 and 1001, then your standard deviation is like 0.7. So if our superplayer shoots a 997, then it isn't counted against him/her.
I am a little bummed I found this information out. If you shoot too good of a round, is it counted in your average? I guess stats are stats; they always lie.
Chuck, is the rating on the website and the one calculated from individual holes the same? Do they round up or round down? Not that it matters much; just wondering.
Re: PDGA rating question
We do not exclude rounds that are exceptionally good, just those more than 2.5 SD below a player's average. Anyone can forcibly shoot a bad round (for sandbagging) but can't go out and say they are going to shoot a round 3 SD above their rating and actually pull it off. There are a couple of processes that involve truncating and rounding based on a player's "per hole" ratings, but using the regular ratings works out pretty close; it just may not be exact.
Re: PDGA rating question
So, the question is: Why do you care?
Say you've got 20 rounds that count toward your average, and you shoot 60 points below your average... that lowers your average by a total of 3 points... (more if it is one of your 5 most recent
rounds, and actually less if it isn't)
Whoopdie doo... unless you are a sponsored player, what's the problem?
99% of the putts that you leave short never go in.
The other 1% never had a chance.
Re: PDGA rating question
Hey guys, the search isn't helping me out with a question I have... so Ima just post here... seems relevant?
So I found out the rating for a course I play: on the PDGA site for tourneys it's about 47-48, but if I do the math on the "distance" of the course it's a 50-51. I think I'm understanding all of it correctly, but say the rating is, for example, 50 and I shoot a 70. That means I'm 20 shots over the rating, so 20 times 10 (for 10 pts per shot?) means that my rating would be an 800? Or have I missed something?
Re: PDGA rating question
Yes. A 70 gets an 800 rating on a course with a 50 SSA.
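Putting the rules of thumb from this thread into code (a sketch only: the 10-points-per-throw figure and the simple linear formula are approximations discussed here, not the PDGA's actual per-hole algorithm):

```python
# Rough round rating: about 10 rating points per throw relative to the SSA
# (the score a 1000-rated player would be expected to shoot).
def round_rating(score, ssa, points_per_throw=10):
    return 1000 - points_per_throw * (score - ssa)

# A round is excluded if it is more than 2.5 standard deviations
# below the player's current average round rating.
def is_excluded(round_rtg, avg_rating, sd):
    return round_rtg < avg_rating - 2.5 * sd

print(round_rating(70, 50))          # 800, as in the thread
print(is_excluded(800, 900, 20.49))  # True (cushion is only about 51 points)
```

With an SD of 20.49, the exclusion threshold sits 51.225 points below the average, matching the number computed earlier in the thread.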
Re: PDGA rating question
Cool thanks! | {"url":"http://www.discgolfreview.com/forums/viewtopic.php?p=235541","timestamp":"2014-04-18T18:47:42Z","content_type":null,"content_length":"26954","record_id":"<urn:uuid:64f062ee-206d-4afc-b1b3-7e653b931179>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"} |
Show That a Function is Contractive
May 4th 2007, 04:10 PM #1
Feb 2007
Show that the following function is contractive on the indicated intervals. Determine the best values of [lamda] in Equation (2).
abs(x)^(2/3) on abs(x) < or = 1/3
Equation (2):
abs(F(x)-F(y)) < or = [lamda]*abs(x-y)
Any help with this problem would be greatly appreciated! Thanks!
Show that the following function is contractive on the indicated intervals. Determine the best values of [lamda] in Equation (2).
abs(x)^(2/3) on abs(x) < or = 1/3
Equation (2):
abs(F(x)-F(y)) < or = [lamda]*abs(x-y)
Any help with this problem would be greatly appreciated! Thanks!
I do not think this function is contractive.
Assume that it is,
|F(x)-F(y)| <= K*|x-y| for -1/3<=x,y<=1/3
Take y=0 to get,
|F(x)|<= K*|x| for -1/3<=x<=1/3
Then certainly,
|F(x)|<= K*|x| for 0<x<=1/3
x^{2/3} <= K*x for 0<x<=1/3
But then,
x^{-1/3} <= K
Then certainly
x^{-1/3} <= K+3
But this is false if you choose, for example, 0 < x < 1/(K+3)^3.
Show that the following function is contractive on the indicated intervals. Determine the best values of [lamda] in Equation (2).
abs(x)^(2/3) on abs(x) < or = 1/3
Equation (2):
abs(F(x)-F(y)) < or = [lamda]*abs(x-y)
Any help with this problem would be greatly appreciated! Thanks!
A simpler demonstration that this is not a contraction is obtained by putting
x=1/8, and y=0, then:
abs( abs(x)^(2/3) - abs(0)^(2/3) ) = 1/4 > x - y = 1/8
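A quick numerical illustration of why no finite lambda can work (my own sketch, not from the thread): the ratio |F(x) - F(0)| / |x - 0| equals x^(-1/3), which grows without bound as x approaches 0 from above.

```python
# F(x) = |x|**(2/3); the ratio |F(x) - F(0)| / |x - 0| is x**(-1/3),
# which is unbounded as x -> 0+, so no single lambda can satisfy (2).
def lipschitz_ratio_at_zero(x):
    return abs(x) ** (2 / 3) / abs(x)

for x in [1 / 8, 1e-3, 1e-6, 1e-9]:
    print(x, lipschitz_ratio_at_zero(x))
```

At x = 1/8 this reproduces the 1/4 > 1/8 example above (ratio 2); at x = 1e-9 the ratio is already around 1000.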
| {"url":"http://mathhelpforum.com/calculus/14558-show-function-contractive.html","timestamp":"2014-04-20T00:10:54Z","content_type":null,"content_length":"37967","record_id":"<urn:uuid:b97c09b1-6543-4640-9fd6-000e8393e63d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
GetPixel tolerance
12-06-2007 #1
Registered User
Join Date
Nov 2007
GetPixel tolerance
I need a GetPixel tolerance code. Here is what I have for searching for a pixel.
int findcolor(int &x1, int &y1, int &x2, int &y2, unsigned int r, unsigned int g, unsigned int b, int &x, int &y)
{
    int XL, YL, xcount, ycount, TLXBackup;
    TLXBackup = x1;
    XL = x2 - x1;
    YL = y2 - y1;
    xcount = 0;
    ycount = 0;
    HDC hdc = GetDC(NULL);
Loop:
    DWORD color2 = GetPixel(hdc, x1, y1);
    unsigned int R = GetRValue(color2);
    unsigned int G = GetGValue(color2);
    unsigned int B = GetBValue(color2);
    if (xcount == XL)            // hit the right edge: wrap to the next row
    {
        x1 = TLXBackup;
        xcount = 0;
        y1++;
        ycount++;
    }
    if (ycount == YL)            // whole rectangle scanned: no match
    {
        ReleaseDC(NULL, hdc);    // don't leak the screen DC
        return 0;
    }
    if ((r == R) && (g == G) && (b == B))
    {
        x = x1;                  // report where the colour was found
        y = y1;
        ReleaseDC(NULL, hdc);
        return 1;
    }
    x1++;                        // advance one pixel and loop again
    xcount++;
    goto Loop;
}
i have ben searching google and still am but not finding mutch.
I’m Dyslexic and I know that I don’t spell well so quit telling to learn my English because I do my best at it all right.
Windows XP with Dev-C++ for now.
Unless you comment your code, no one is going to make much of an effort to figure out what you're trying to do. Add some comments that show the logic of what you are trying to accomplish. Then
you may get more assistance. And above all, what is a GetPixel tolerance code??
Why in the abyss are you using goto for such a simple loop!? Change it, please. It looks really bad.
For information on how to enable C++11 on your compiler, look here.
よく聞くがいい!私は天才だからね! ^_^
OK, well, pixel tolerance would work like this: if tolerance = 10, then find a pixel with the color that was specified, or one up to 10 shades different from that color.
Yes, I'll be changing the goto in a little bit, but for now it works.
You shouldn't be using goto in the first place! You should not have written it at all for such a simple matter of using a loop here, where it is not complicated at all.
For information on how to enable C++11 on your compiler, look here.
よく聞くがいい!私は天才だからね! ^_^
Get rid of that goto. a do..while(true) will work just the same.
This all depends on what you're using it for. I recently(ish) wrote something similar, except that it converted RGB to the HSV colourspace before doing any tolerance testing. This improved the
results a little. What are you using it for?
Btw "much" is spelt like this, and "been" has two E's.
Being not so good at spelling is fine, but I would hope that you would not purposefuly turn down assistance.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Just searching in x1, y1 and x2, y2 for a pixel with a tolerance. so if you don’t mind I think that your code might help if you could post it or send it to me.
I can't I'm afraid, that code is at work and belongs to my employer.
btw I understand the description of the problem, but what do you intend to use it for? What do you plan to do once you find the pixel you are looking for?
If you were wanting to find the pixel that is the closest match to the specified colour then you would need to keep going until you reach the end, or find a perfect match.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
| {"url":"http://cboard.cprogramming.com/cplusplus-programming/96668-getpixel-tolerance.html","timestamp":"2014-04-17T11:41:22Z","content_type":null,"content_length":"76014","record_id":"<urn:uuid:8bb0b7a9-fbf3-4af7-94e9-5705598d70eb>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
The sample of 20 American feature films from 1959 that I have added to the Cinemetrics database has revealed a feature that I did not expect. (Sort the database on year to quickly find them clumped
together.) My belief had been that the cutting rate nearly always speeded up over the whole length of most ordinary commercial American features, and that any rare exceptions to this would be
failures at the box office. However, I also believed that in art films (usually called “independent films” nowadays) and also in foreign films, the cutting rate very frequently slowed down over the
course of the film. But the first degree trendline for my sample of American films shows that nearly half of them have slower cutting in the second half of the shots compared to that in the first
half of the shots. The first degree trendline actually represents graphically the degree of difference between the ASL (Average Shot Length) for the first half of the shots in a film and the ASL for
the second half of the shots in that film.
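In practice, this first-half-versus-second-half statistic is easy to compute directly from a film's shot-length list. Here is a sketch on invented data (my own, not the measurements of any film discussed here):

```python
def half_asls(shot_lengths):
    """ASL of the first half of a film's shots vs. the second half."""
    mid = len(shot_lengths) // 2
    first, second = shot_lengths[:mid], shot_lengths[mid:]
    return sum(first) / len(first), sum(second) / len(second)

# Invented shot lengths (seconds) for a film whose cutting slows down.
shots = [4, 5, 4, 6, 5, 7, 8, 9, 8, 10]
first_asl, second_asl = half_asls(shots)
print(first_asl, second_asl)  # 4.8 8.4
```

A second-half ASL larger than the first-half ASL corresponds to the downward-sloping first degree trendline described above.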
The films in my sample in which the cutting rate decreases over the length of the film are Compulsion, The Five Pennies, Gidget, Go, Johnny, Go!, Heller in Pink Tights, Never So Few, The Nun’s Story,
Ride Lonesome, and Some Like it Hot. Of these, Some Like it Hot and Gidget certainly did very well at the box office, though I seem to remember that Heller in Pink Tights did not. The others were
reasonably successful, I think. As you can see from the Cinemetrics graph, Darby O’Gill and the Little People has a flat cutting rate over its length.
Of course, down the lengths of all these films, as in all films, there are alternating stretches of slower and faster cutting, which can sometimes be a bit difficult to make out on the Cinemetrics
graphs. The higher degree trendlines highlight these fluctuations to some extent, but are not always completely successful in doing this. I can illustrate this problem with the graph for Ride
Lonesome, using the sixth degree and twelfth degree trendlines.
The sixth degree trendline very approximately picks out the stretches of faster cutting from the beginning to 3 minutes, that from 25 minutes to 27 minutes, and the final stretch from 62 minutes to
near the end of the film. But it completely misses the section of overall fast cutting from 7 minutes to 12 minutes, and from 38 minutes to 42 minutes. This is inevitable, as a sixth degree trendline
can only have 3 maxima at the most, whereas there are at least 5 fairly clear stretches of faster cutting in the film. So let us see how the twelfth degree trendline, which can have six maxima, and
five minima (or vice versa), does in this situation.
This looks a lot worse to me.
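The trendlines discussed here are polynomial fits of the local cutting rate, and the bound on maxima follows from the degree: a degree-6 polynomial's derivative has degree 5, so there are at most 5 extrema, of which at most 3 can be maxima. A sketch with synthetic data (numpy.polyfit stands in for whatever fitting Cinemetrics actually uses; the shot-length curve is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 70, 400)                    # minutes into a hypothetical film
asl = 5 + np.sin(t / 4) + rng.normal(0, 0.5, t.size)  # fake local cutting-rate curve

coeffs = np.polyfit(t, asl, deg=6)             # the "sixth degree trendline"
trend = np.polyval(coeffs, t)

# The derivative of a degree-6 polynomial has degree 5, so it has at
# most 5 real roots: at most 5 extrema, of which at most 3 are maxima.
crit = np.roots(np.polyder(coeffs))
real_crit = crit[np.abs(crit.imag) < 1e-8].real
```

This is why a trendline of fixed degree must miss some stretches of fast cutting when the film has more fluctuations than the degree allows for.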
Now there is an alternative to trendlines for picking out the changes in cutting speed throughout the length of a film, and this is the moving average. This statistic was considered by Gunars Civjans
and Yuri Tsivian in the early days of Cinemetrics, but they did not develop it. I think it should be reconsidered.
The usual moving average is taken over a fixed number of quantities (shot lengths in our case) prior to the point in question, but the appropriate form of moving average in the case of Cinemetrics is
the “centred moving average”. This takes the average shot length for a range starting a certain number of shots before the point under consideration, and ending the same number of shots after the
point. Some tests suggest that a range of 20 shots works best; ten before, and ten after the point for which you want the average. Applying this to the shot length record for Ride Lonesome produces
the following graph:
The graph is inverted from the Cinemetric perspective, because that is the easy way for me to do it with the graphing tools in spreadsheets. Here, the shot lengths are indicated by black lines, and
the moving average is in green. The horizontal x-axis is calibrated in shots, not minutes and seconds. This graph can be given a smoother profile by taking a second centred moving average from it. In
fact, the best result seems to come from taking a 10 shot centred moving average first, before then applying the 20 shot centred moving average. This procedure produces the graph below.
You can see that this new curve also has no problem in following the major ups and downs in the original shot length plot, and has extra smoothing as well.
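The double-smoothing procedure just described can be sketched as follows (the exact window convention, e.g. whether a "20 shot" window includes the centre shot, is an assumption here, and the shot lengths are made up):

```python
import numpy as np

def centred_moving_average(x, window):
    """Centred moving average: each point is the mean of values in a
    window centred on it (window//2 before, window//2 after).
    Edges use whatever part of the window is available."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out[i] = x[lo:hi].mean()
    return out

# The double smoothing described above: a 10-shot pass, then a 20-shot pass.
shot_lengths = [3.1, 2.8, 4.0, 1.9, 2.5, 6.2, 2.2, 1.8, 3.3, 2.7] * 5
smooth = centred_moving_average(centred_moving_average(shot_lengths, 10), 20)
```

A constant shot-length record is left unchanged by the smoothing, which is the basic sanity check for any moving average.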
So why might we be interested in these cutting rate variations?
I hold the position that variation in cutting rate is ordinarily used as an expressive device of a conventional kind – more cuts for sections where there is more dramatic tension or action, and less
for less of the same. And I also believe that in general there is a conventional idea about alternating scenes with different dramatic character in plays and films, so that things like cutting rate
and closeness of shot which are used expressively should change on the average from scene to scene. Hence the next step is to check out the results of this idea for this film.
Ride Lonesome has 18 scenes from a dramatic viewpoint, and they are clearly marked out by dissolves between them, in the conventional way of the period when it was made. In the graph below, the shot
lengths are still given by black lines, but the ASL for each scene is indicated by the red bars, and the centred moving average by a green line.
The shot lengths are given in deciseconds, not seconds, and the maximum value shown is 20 seconds, so that the lines for some shots, and indeed some mean ASLs for the scenes, go off the top of the
graph, just as they do in the Cinemetrics graphs. The first scene runs from Shot No. 1 to Shot 55, the second scene comprises Shot 56 alone, the third scene runs from Shot 57 to Shot 141, and so on.
The green line still represents a plot of two successive centred averages.
For comparison, below is the graph with the sixth order trendline plotted in instead of the centred moving average.
So the moving average has a better correspondence with the mean ASL for the scenes than the trendlines, as well as identifying the sections of faster and slower cutting better.
Incidentally, my graphs plotting the mean ASLs for all the scenes in the film are the visual equivalent of the verbal discussion of this variation of cutting rate from scene to scene in The
Adventures of Robin Hood at the beginning of my piece The Numbers Speak from Moving Into Pictures. This piece is also in the “Current Movie Measurement Articles” section of the Cinemetrics website.
Ride Lonesome is unusual in that it only has 18 scenes in it. This is at the lower end of the range of the numbers of scenes occurring in ordinary commercial films. This range seems to run from about
20 scenes to about 50 scenes, with most films having somewhere around 30 scenes. (I shouldn’t be surprised if the number of scenes in an ordinary film follows a Normal (Gaussian) distribution.) An
example of a film with a more usual number of scenes is given by Darby O’Gill and the Little People. The Cinemetrics graph for this, with a 12th-degree trendline, is reproduced here in inverted
form, for comparison with my “moving average” treatment.
And here is my moving centred average graph for this film, with the ASL for scenes marked in as well. Some of the lines indicating each shot’s length are shifted a bit relative to the previous
Cinemetrics graph, because my x-axis is a linear scale in ordinal shot number, rather than the time to the shot in question, as is the case with the elastic scale of the x-axis of a Cinemetric plot.
This does not affect the point being made.
The correspondence between the general shape of the moving average plot and the “mean ASL for scenes” plot is not quite so good as in the case of Ride Lonesome, but there is a rough resemblance
between the two, which is not the case for the trendline.
Darby O’Gill has 37 scenes, and to be certain of getting all the ups and downs resulting from the cutting rate changes from one to the next with a trendline, one would have to use a 38th-degree (or order) trendline, not a 12th-degree one.
Barry Salt, 2010
Ordered graph

An ordered graph is a graph with a total order over its nodes.
In an ordered graph, the parents of a node are the nodes that are joined to it and precede it in the ordering. More precisely, $n$ is a parent of $m$ in the ordered graph $\langle N, E, < \rangle$ if $(n,m) \in E$ and $n < m$. The width of a node is the number of its parents, and the width of an ordered graph is the maximal width of its nodes.
The induced graph of an ordered graph is obtained by adding some edges to the ordered graph, using the method outlined below. The induced width of an ordered graph is the width of its induced graph.
Given an ordered graph, its induced graph is another ordered graph obtained by joining some pairs of nodes that are both parents of another node. In particular, nodes are considered in turn according
to the ordering, from last to first. For each node, if two of its parents are not joined by an edge, that edge is added. In other words, when considering node $n$, if both $m$ and $l$ are parents of
it and are not joined by an edge, the edge $(m,l)$ is added to the graph. Since the parents of a node are always connected with each other, the induced graph is always chordal.
As an example, the induced graph of an ordered graph is calculated. The ordering is represented by the position of its nodes in the figures: a is the last node and d is the first.
[Figures: the original graph; the edge added considering the parents of $a$; the edge added considering the parents of $b$.]
Node $a$ is considered first. Its parents are $b$ and $c$, as they are both joined to $a$ and both precede $a$ in the ordering. Since they are not joined by an edge, one is added.
Node $b$ is considered second. While this node only has $d$ as a parent in the original graph, it also has $c$ as a parent in the partially built induced graph. Indeed, $c$ is joined to $b$ and also precedes $b$ in the ordering. As a result, an edge joining $c$ and $d$ is added.
Considering $d$ does not produce any change, as this node has no parents.
Processing nodes in order matters, as the introduced edges may create new parents, which are then relevant to the introduction of new edges. The following example shows that a different ordering
produces a different induced graph of the same original graph. The ordering is the same as above but $b$ and $c$ are swapped.
[Figures: the same graph with the order of $b$ and $c$ swapped; the graph after considering $a$.]
As in the previous case, both $b$ and $c$ are parents of $a$. Therefore, an edge between them is added. According to the new order, the second node that is considered is $c$. This node has only one parent ($b$). Therefore, no new edge is added. The third considered node is $b$. Its only parent is $d$, so no new edge is added; in particular, $c$ and $d$ are not joined this time. Since $d$ likewise has no parents, the final induced graph is the one above. This induced graph differs from the one produced by the previous ordering.
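Both worked examples can be checked with a short sketch of the construction (plain Python, representing undirected edges as frozensets; this is an illustration, not reference code):

```python
def induced_graph(nodes, edges):
    """Induced graph of an ordered graph.
    nodes: list in order (first .. last); edges: iterable of frozensets."""
    edges = set(edges)
    pos = {v: i for i, v in enumerate(nodes)}
    for v in reversed(nodes):                       # process last to first
        # Parents: earlier neighbours in the partially built induced graph.
        parents = [u for u in nodes
                   if frozenset((u, v)) in edges and pos[u] < pos[v]]
        # Join every pair of parents.
        for i in range(len(parents)):
            for j in range(i + 1, len(parents)):
                edges.add(frozenset((parents[i], parents[j])))
    return edges

# First example from the text: ordering d < c < b < a, edges a-b, a-c, b-d.
g = induced_graph(["d", "c", "b", "a"],
                  [frozenset(p) for p in [("a", "b"), ("a", "c"), ("b", "d")]])
# Adds b-c (parents of a) and then c-d (parents of b), giving 5 edges.
```

Running it with the swapped ordering ["d", "b", "c", "a"] instead adds only the edge b-c, reproducing the second example.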
Ambiguity on the Axiom of Union...
July 27th 2011, 08:47 AM
Ambiguity on the Axiom of Union...
Hello friends,
Take a look at the image below:
Attachment 21901
The blue part is from Paul R. Halmos's Naive Set Theory.
I looked for the formal statement of the axiom and i found 2 different statements on net!
The red and the green one!
Certainly they are different. In fact the red is a part of the green as you know.
I myself think the red is true, and this also corresponds to Halmos's assertion.
Because if the green were true, then the brown wouldn't be so! (Since every element of B can be found in at least one of the sets contained in A!)
What do you think?!
(Sorry for different letters in different assertions-It's just a collection of copies and so i didn't change the letters!)
July 27th 2011, 12:57 PM
Re: Ambiguity on the Axiom of Union...
[I'll use 'A' and 'E' for the universal and existential quantifiers, respectively, and 'e' for 'is a member of'.]
(1) EuAz(Ex(xeC & zex) -> zeu)
(2) EuAz(Ex(xeC & zex) <-> zeu)
With an instance of the axiom schema of separation, we can derive (2) from (1):
EuAz(Ex(xeC & zex) -> zeu).
So let Az(Ex(xeC & zex) -> zeu).
Separation gives us EvAz((zeu & Ex(xeC & zex)) <-> zev).
Let Az((zeu & Ex(xeC & zex)) <-> zev).
So Az(Ex(xeC & zex) <-> zev).
So, generalizing 'v' to 'u', we get EuAz(Ex(xeC & zex) <-> zeu).
The difference then is just a slight technical detail.
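Not a proof, just a finite analogy of the derivation above, with Python sets standing in for sets of ZF: form (1) only promises some superset u, and separation then carves out exactly the union. The collection C below is hypothetical.

```python
# Hypothetical finite collection C of sets.
C = [frozenset({1, 2}), frozenset({2, 3}), frozenset({5})]

# Form (1) only promises SOME u containing every z that lies in some
# member of C; any superset will do.
u = frozenset(range(10))
assert all(z in u for X in C for z in X)

# Separation applied to u with the property "z is in some member of C"
# yields exactly the union, as in the derivation above.
union_C = frozenset(z for z in u if any(z in X for X in C))
```

Whatever superset u one starts from, the separated set is the same, which is the content of the uniqueness argument via extensionality.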
July 27th 2011, 07:05 PM
Re: Ambiguity on the Axiom of Union...
But as you yourself assert, and as is also mentioned in the Halmos book (blue), WE CAN CREATE SUCH A SET USING THE AXIOM SCHEMA OF SPECIFICATION, but I mean this set is not necessarily the same one that the axiom asserts!
So I think my 1st assertion is true.
July 28th 2011, 07:25 AM
Re: Ambiguity on the Axiom of Union...
Here is the most exact answer I can give:
Axiom (1) does not name a PARTICULAR set u. It asserts only that there is at least one u such that Az(Ex(xeC & zex) -> zeu). And the axiom alone does not preclude that there may be members of
such a u other than the z such that Ex(xeC & zex). But then with the axiom schema of separation, we derive that there is at least one u such that Az(Ex(xeC & zex) <-> zeu) and thus that it IS
precluded that there may be members of a u of THAT kind other than the z such that Ex(xeC & zex). Then from the axiom of extensionality, we derive that there is exactly one such u.
Axiom (2) asserts that there is at least one u such that Az(Ex(xeC & zex) <-> zeu). And the axiom alone does preclude that there are members of such a u other than the z such that Ex(xeC & zex).
Then, also, with the axiom of extensionality, we derive that there is exactly one such u, a particular one that we may name as 'UC'.
So let S be the relevant instance of the axiom schema of separation. Then we have:
S |- (1) <-> (2).
And, yes, of course it is not the case that
|- (1) <-> (2)
as it is not the case that
|- (1) -> (2)
though it is the case that
|- (2) -> (1).
July 28th 2011, 09:56 AM
Re: Ambiguity on the Axiom of Union...
Quote:
Here is the most exact answer I can give:
Axiom (1) does not name a PARTICULAR set u. It asserts only that there is at least one u such that Az(Ex(xeC & zex) -> zeu). And the axiom alone does not preclude that there may be members of such a u other than the z such that Ex(xeC & zex). But then with the axiom schema of separation, we derive that there is at least one u such that Az(Ex(xeC & zex) <-> zeu) and thus that it IS precluded that there may be members of a u of THAT kind other than the z such that Ex(xeC & zex). Then from the axiom of extensionality, we derive that there is exactly one such u.
Axiom (2) asserts that there is at least one u such that Az(Ex(xeC & zex) <-> zeu). And the axiom alone does preclude that there are members of such a u other than the z such that Ex(xeC & zex). Then, also, with the axiom of extensionality, we derive that there is exactly one such u, a particular one that we may name as 'UC'.
Therefore we see that axiom (1) does not preclude the existence of some elements of u that are not contained in any of the sets of C. And after using the axiom schema of specification we know there exists at least one set v (or any letter!) all of whose elements are contained in at least one of the sets of C. And in fact the set v (which is derived by the axiom of specification) is certainly a subset of any set whose existence the axiom of union asserts!
Therefore, since we are searching for a formal sentence for the axiom itself (not the assertion derived by using the axiom schema of specification), the proper one is the 1st.
Quote:
So let S be the relevant instance of the axiom schema of separation. Then we have:
S |- (1) <-> (2).
And, yes, of course it is not the case that
|- (1) <-> (2)
as it is not the case that
|- (1) -> (2)
though it is the case that
|- (2) -> (1).
What do you mean by |- ??
More clear please...
July 28th 2011, 10:19 AM
Re: Ambiguity on the Axiom of Union...
Quote:
And after using the axiom schema of specification we know there exists at least one set v (or any letter!) all of whose elements are contained in at least one of the sets of C. And in fact the set v (which is derived by the axiom of specification) is certainly a subset of any set that the axiom of union asserts to exist!
Right. But I don't know why you are so excited about it.
I don't know what you mean by "proper" in this context. We are free to axiomatize any way we want.
(1) has the possible advantage that it assumes less than (2).
(2) has the possible advantage that it is self-contained in the sense that we don't need the axiom schema of separation to get the actual desired set.
In any case, with the axiom schema of separation, (1) and (2) are equivalent.
is the standard symbol in mathematical logic for "proves".
Where G is a set of formulas, and P is a formula
G |- P
stands for
There is a proof of P from G.
So, to be precise, I should have written:
{S} |- (1) <-> (2)
When there is no G on the left side, then
|- P
means that P is a theorem of pure logic alone.
July 28th 2011, 10:44 AM
Re: Ambiguity on the Axiom of Union...
Quote:
Right. But I don't know why you are so excited about it.
Oh I know... Because I want to insist on the fact that the axiom of union does not assert the existence of such a set... And because the purpose of this topic is certainly to find a formal sentence for the axiom of union itself, not any derived conclusion.
Quote:
I don't know what you mean by "proper" in this context
I mean a suitable formal sentence for the axiom itself.
July 28th 2011, 11:06 AM
Re: Ambiguity on the Axiom of Union...
In a formal context, the axiom itself IS a formal sentence.
We are free to choose whatever formal sentences we wish to choose for axioms. Any given author is free to state whatever formal sentences he wants to state as the axioms. (1) happens to be weaker
than (2), but they are equivalent given the axiom schema of separation. It's not a matter of what is or is not "proper".
You'll find that there are lots of examples of axiomatizations being given differently by different authors.
July 28th 2011, 12:31 PM
Re: Ambiguity on the Axiom of Union...
Anyway, thank you.
Good discussion.
Talks - Marija Vucelja
Invited talks
• Irreversible Monte Carlo Algorithms for Efficient Sampling, AIMS conference, Orlando, July 2012
• Fractal contours of scalar in smooth flows, AIMS conference, Orlando, July 2012
• Fractal contours of scalar in smooth flows, Mathematics of particles and flows, WPI, Vienna, May 2012
• Fractal contours of scalar in smooth flows, Mathematical Physics Seminar, Princeton & IAS, October 2011
• Irreversible Monte Carlo Algorithms for Efficient Sampling, Soft Condensed Matter Seminar, NYU, July 2011
• Mathematical Physics Seminar, Israel Institute of Technology, Haifa, Israel, January 2011
• Applied Mathematics Seminar, Israel Institute of Technology, Haifa, Israel, January 2011
• Applied Mathematics Seminar, CUNY, NY, December 2010
• Applied Mathematics Seminar, Courant Institute, NY, November 2010
• Conference ”Frontiers in Nonlinear Waves” in honor of Vladimir Zakharov’s 70th birthday, University of Arizona, March 2010
• PCTS postdoctoral interview, Princeton University, Princeton, December 2009
• University of Maryland, October 2009
• Soft Condensed Matter Seminar, University of Pennsylvania, Philadelphia, October 2009
• Applied Math Lab Seminar, Courant Institute of Mathematical Sciences, October 2009
• University of Wisconsin, Madison, October 2009
• Harvard University, Boston, October 2009
• Computations in Science seminar, University of Chicago, August 2009
• University of Arizona, Tucson, September 2007
Contributed talks
• Emergence of clones in sexual populations, Emergent order in Biology, Cargese, July 2012
• Emergence of clones in sexual populations, Soft condensed matter group meeting, NYU, April 2012
• Fractal contours of scalar in smooth flows, informal session, APS March meeting, Boston, March 2012
• Advances in Parallel tempering, on workshop "Bridging statistical physics, inference and learning", Les Houches, February 2012
• ...
Appendix B, Equation 1: State Estimates of Substance Use from the 2002 National Survey on Drug Use and Health
Lower sub s,a is defined as the exponential of L sub s,a divided by the sum of 1 and the exponential of L sub s,a; Upper sub s,a is defined in the same way from U sub s,a. In standard notation:

$$\mathrm{Lower}_{s,a} = \frac{e^{L_{s,a}}}{1 + e^{L_{s,a}}}, \qquad \mathrm{Upper}_{s,a} = \frac{e^{U_{s,a}}}{1 + e^{U_{s,a}}}$$
Neural Computation 23:46-96.
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based
on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the
posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained
using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain
Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid
Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run"
algorithm performed better than other MCMC methods. Using these algorithms we show that for this latter class of priors the posterior mean estimate can have a considerably lower average error than
MAP, whereas for Gaussian priors the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting non-marginal properties of the posterior
distribution. For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this
quantity for Gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations,
where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators. | {"url":"http://pillowlab.cps.utexas.edu/pubs/abs_ahmadian11_NC.html","timestamp":"2014-04-20T00:02:47Z","content_type":null,"content_length":"4340","record_id":"<urn:uuid:3fecdd87-4f69-4254-bfea-8227ae97f92c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multilevel bioluminescence tomography based on radiative transfer equation Part 1: l1 regularization
In this paper we study an l1-regularized multilevel approach to bioluminescence tomography based on the radiative transfer equation, with an emphasis on improving imaging resolution and reducing computational time. Simulations are performed to validate that our algorithms have the potential for efficient high-resolution imaging. In addition, we study and compare reconstructions with boundary angular-averaged data, boundary angular-resolved data, and internal angular-averaged data, respectively.
© 2010 OSA
OCIS Codes
(100.3190) Image processing : Inverse problems
(110.6960) Imaging systems : Tomography
(170.3010) Medical optics and biotechnology : Image reconstruction techniques
(170.6280) Medical optics and biotechnology : Spectroscopy, fluorescence and luminescence
ToC Category:
Medical Optics and Biotechnology
Original Manuscript: October 12, 2009
Revised Manuscript: November 18, 2009
Manuscript Accepted: January 2, 2010
Published: January 15, 2010
Virtual Issues
Vol. 5, Iss. 4 Virtual Journal for Biomedical Optics
Hao Gao and Hongkai Zhao, "Multilevel bioluminescence tomography based on radiative transfer equation Part 1: l1 regularization," Opt. Express 18, 1854-1871 (2010)
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-18-3-1854","timestamp":"2014-04-21T12:51:22Z","content_type":null,"content_length":"280817","record_id":"<urn:uuid:b12b6679-73a9-4bc4-a19e-f817b8c927b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm so behind in physics, Circular Rotation help.
I honestly have no clue how to do this one. I keep working myself in circles, which doesn't seem to be getting anything accomplished. Any help on where I should start?
It's always best to start with a diagram. Draw a cross section showing the vertical pole, a cross beam, and one cable (at 30 degrees from vertical) with a mass on the end. Identify the center of
rotation of the mass and the radius of the circle it will follow.
After that you'll be drawing an FBD and working out the components of the accelerations acting.
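The setup described above (a cable at 30 degrees from vertical with a mass on the end, swinging in a horizontal circle) is a conical pendulum. As a sketch of where the FBD leads — note the cable length and mass below are made-up values, since the original problem statement isn't quoted in the thread:

```python
import math

# Conical pendulum: cable of length L at angle theta from the vertical,
# mass m on the end moving in a horizontal circle of radius r = L*sin(theta).
# The FBD gives two component equations:
#   T*cos(theta) = m*g          (vertical balance)
#   T*sin(theta) = m*v**2 / r   (net horizontal force = centripetal force)

g = 9.81                     # m/s^2
theta = math.radians(30.0)   # angle from vertical, as in the thread
L = 2.0                      # m  -- hypothetical cable length
m = 1.5                      # kg -- hypothetical mass

r = L * math.sin(theta)                      # radius of the circular path
T = m * g / math.cos(theta)                  # tension from vertical balance
v = math.sqrt(T * math.sin(theta) * r / m)   # speed from the horizontal equation

print(f"radius = {r:.3f} m, tension = {T:.2f} N, speed = {v:.3f} m/s")
```

Eliminating T between the two equations gives the usual result v² = g·r·tan(θ), which is a quick check on the numbers.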
Discrete decomposability of the restriction of A[q]
with respect to reductive subgroups and its applications
Invent. Math.
(1994), 181-205.
Let G'⊂G be real reductive Lie groups and q a θ-stable parabolic subalgebra of Lie(G)⊗C. This paper offers a sufficient condition on (G, G', q) that the irreducible unitary representation A[q] of
G with non-zero continuous cohomology splits into a discrete sum of irreducible unitary representations of a subgroup G', each of finite multiplicity. As an application to purely analytic
problems, new results on discrete series are also obtained for some pseudo-Riemannian (non-symmetric) spherical homogeneous spaces, which fit nicely into this framework. Some explicit examples of
a decomposition formula are also found in the cases where A[q] is not necessarily a highest weight module.
The original publication is available at www.springerlink.com.
© Toshiyuki Kobayashi
Rate of Convergence of Hermite-Fejér Interpolation on the Unit Circle
Journal of Applied Mathematics
Volume 2013 (2013), Article ID 407128, 8 pages
Research Article
^1Departamento de Matemática Aplicada I, Facultad de Ciencias, Universidad de Vigo, 32004 Ourense, Spain
^2Departamento de Matemática Aplicada I, E. Ingeniería Industrial, Universidad de Vigo, 36310 Vigo, Spain
Received 13 November 2012; Revised 2 March 2013; Accepted 4 March 2013
Academic Editor: Roberto Barrio
Copyright © 2013 E. Berriochoa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The paper deals with the order of convergence of the Laurent polynomials of Hermite-Fejér interpolation on the unit circle with nodal system, the roots of a complex number with modulus one. The
supremum norm of the error of interpolation is obtained for analytic functions as well as the corresponding asymptotic constants.
1. Introduction
The paper is devoted to study the Hermite-Fejér interpolation problem on the unit circle . This topic has attracted the interest of many researchers in recent years, and it has been the subject of
several studies. In [1] Fejér's classical result is extended to the unit circle. It is well known that it ensures uniform convergence of Hermite-Fejér interpolants to continuous functions on taking
as nodal system the Chebyshev points (see [2–4]). Specifically, in [1] the authors consider the nodal system of the roots of a complex number with modulus one. Then it is proved that the Laurent
polynomials of Hermite-Fejér interpolation for a given continuous function on the unit circle uniformly converge to .
In [5] second Fejér's theorem concerning the Hermite interpolation with nonvanishing derivatives is extended, to the unit circle. New conditions for the derivatives are obtained in order that the
Hermite interpolants uniformly converge to continuous functions on the unit circle.
An algorithm for efficient computing of the coefficients of the Laurent polynomials of Hermite-Fejér and Hermite interpolation with equally spaced nodes on the unit circle was given in [6]. These
results were extended to the bounded interval, and the corresponding expressions can be evaluated using the techniques given in [7]. Some results concerning the convergence were obtained in [8]. The
convergence of the Laurent polynomials of Hermite-Fejér interpolation has been studied in [8] for analytic functions defined on open sets containing the unit disk. The results describe the behavior
outside and inside the unit disk and are extended to the case of Hermite interpolation, that is, with nonvanishing derivatives.
In the case of bounded interval, the supremum norm of the error of interpolation was studied in several papers (see [9]). In particular a lower bound for the order of convergence of Hermite-Fejér
interpolation was obtained in [10] for general nodal systems. Now, in the present paper, we study the same problem for the error of interpolation on the unit circle by taking into account the results
obtained in [8].
The organization of the paper is as follows. Section 2 is dedicated to obtain the results for the order of convergence for Laurent polynomials, in other words to the polynomial case. Section 3
contains the extension of the preceding results for analytic functions. The order of convergence and the asymptotic constants are deduced in our main result for analytic functions on an open disk
containing the unit circle. As a consequence, the result is generalized to analytic functions outside an open disk with radius less than one, and it is also generalized, in Section 4, to analytic
functions on an open annulus containing the unit circle. Finally, the last section is devoted to some numerical experiments to reveal the contributions of our results.
2. The Polynomial Case
Let be the roots of a complex number with modulus . We recall that the Hermite interpolation problem on the unit circle with nodal system consists in obtaining a Laurent polynomial of that satisfies
the following interpolation conditions: where and are sets of fixed complex numbers.
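(The inline formulas in this and the surrounding paragraphs appear to have been lost in extraction. For orientation only, the standard form of the Hermite interpolation conditions — in notation that may differ from the authors' — is the following.)

```latex
% Hermite interpolation on the unit circle: given the n-th roots
% \alpha_1,\dots,\alpha_n of a complex number \lambda with |\lambda| = 1,
% find a Laurent polynomial L satisfying
L(\alpha_j) = u_j, \qquad L'(\alpha_j) = v_j, \qquad j = 1,\dots,n.
% Hermite--Fejer interpolation is the special case v_j = 0 for all j.
```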
The particular case when for all is called the Hermite-Fejér interpolation problem, and the corresponding interpolation polynomial is denoted by . When , (), for a given function defined on , we
denote the Hermite-Fejér interpolation polynomial by . To estimate the interpolation error between and we consider their difference that we denote by
It is well known that can be computed in terms of the fundamental polynomial of Hermite interpolation, , as follows: where is given by and it holds that Representation (3) can be seen in [1], and (4)
can be seen in [5].
We recall that for a continuous function defined on , converges to uniformly on , as it can be seen in [1].
These results can be improved in case of polynomial functions. Indeed we can obtain nice explicit expressions for and for the polynomial case; that is, in this section we are going to use an
algebraic polynomial or a Laurent polynomial in the role of .
Theorem 1. Let be a fixed positive integer number. For the following conditions hold that(i); (ii);(iii) converges to uniformly on compact subsets of with order of convergence ;(iv);(v);(vi)
converges to uniformly on compact subsets of with order of convergence .
Proof. In order to obtain (i), take into account that when we evaluate the proposed Laurent polynomial at we have that is, the interpolation conditions for the function are fulfilled. In the same
way, when we evaluate the corresponding derivative at we obtain Thus the existence and uniqueness of the Hermite interpolation polynomial ensures (i).(ii) It is an immediate consequence of (i) and
the definition of .(iii) Take into account that where the last expression is uniformly bounded if and is large enough.
(iv), (v), and (vi) can be proved proceeding in the same way.
Remark 2. The resulting expressions for and , given in the preceding theorem, can be rewritten as follows:
Corollary 3. The following hold. (i) If is a Laurent polynomial with nonnegative powers of , that is, is an algebraic polynomial, then (a). (b) If is a compact subset of, then where is the
supremum norm on .(c) If is a compact subset of with no isolated points, then (ii) If is a Laurent polynomial with only negative powers of , then (a). (b) If is a compact subset of , then (c) If
is a compact subset of with no isolated points, then
Proof. (i) (a) It is a straightforward consequence of the previous remark.
(i) (b) First of all take into account that for each it holds that Therefore, if is attained at , then for each the following relation holds:
On the other hand, since for we have , then we obtain the result.
(i) (c) If is the point where is attained, then for each it holds that which implies that
Due to the continuity of , for each there exists a neighborhood of , , such that for it is . On the other hand, for large enough any arc of contains points with , and therefore for some , with , we
have Then we obtain To obtain (ii) (a), (b), and (c) proceed in the same way.
Theorem 4. Let be a Laurent polynomial with positive and negative powers of , and , respectively. It holds that(i); (ii) if is a compact subset of with no isolated points, then
Proof. It is clear that exists, it is positive, and it is attained at a point . We denote this maximum by . Besides, since can be represented as with , then
Due to the continuity of , for each there exists a neighborhood of , , such that for it is which implies, in particular, . Moreover, taking into account that for large enough any arc , with ,
contains points with , then we have
Remark 5. The previous result can be rewritten as follows: where means that the sequences are equivalent.
3. Rate of Convergence for Analytic Functions on a Disk
In this section we extend the previous results to analytic functions. Indeed we study the supremum norm of for analytic functions on an open disk containing along a circumference of radius .
Theorem 6. Let be a nonconstant analytic function defined on an open disk , with , and let be its Hermite-Fejér interpolation polynomial corresponding to the roots of . If is the circumference with
radius , , and is the supremum norm on , then there exist satisfying
Proof. We exclude the constant case because we have in this situation. So let be a nonconstant analytic function. Taking into account that the evaluation of at is , we have . Then by using Theorem 1
we can write
Let be a point belonging to with and , such that . Then there exists a positive constant , such that for large enough we have
Now we consider (2) and (3) as follows:
So we obtain that there exists a positive constant , such that for large enough
In order to obtain the lower bound we use the following inequality:
For an arbitrary and large enough we have
On the other hand, as the zeros of cannot have accumulation points on , then for there exist an arc and a positive constant , such that Next we study two different cases and .
(i) If it holds that and, as before, for large enough there exist positive constants and , such that for we have Thus for large enough and we have Then there exists satisfying (ii) If , we consider
an arc with for and a positive constant . Then
Proceeding in the same way as in the previous case we have . Thus there exists , such that
Taking into account (35) and (37) we have for large enough, and
Then it is straightforward that there exists , such that for large enough.
Remark 7. Notice that (i) the constants and are closely related to the supremum norm ;(ii) clearly we can obtain an analogous result for nonconstant analytic function defined on with .
4. Rate of Convergence for Analytic Functions on an Annulus
Next we deal with the case of analytic functions on an open annulus containing . We obtain explicit expressions for and the asymptotic behavior of its supremum norm; that is, we obtain the order of
convergence and the asymptotic constant.
Throughout this section we consider a function with Laurent expansion at given by which converges on an annulus containing . Then there exist and , , such that for every .
For each we denote by and by . In the same way we denote by and by (if).
Furthermore we denote by and by .
Then we have the following decompositions of for each : By using this notation for the decompositions of we obtain the following results.
Lemma 8. In our conditions it holds that
Proof. If we have and . On the other hand, by using (3) and (4) we have for Then we can write and the result is proved.
Lemma 9. In our conditions the following holds.(i) For and , such that (ii) For and , such that
Proof. (i) Take into account (i) (a) in Corollary 3 in order to prove the first equality. To obtain the second equality take into account that for , , and is bounded.
(ii) Proceed in the same way.
Theorem 10. In our conditions the following holds.(i) For and such that (ii) If is a compact subset of with no isolated points, then
Proof. (i) It is a straightforward from previous lemmas.
To prove (ii) use the same technique as in Theorem 4.
5. Numerical Tests
In this section we present some numerical experiments concerning the main results in Sections 3 and 4.
Theorem 10 ensures that under appropriate assumptions, Moreover from the proofs of Lemmas 8 and 9 we can predict where this limit can be observed. In fact near , when is the point where the maximum
of is attained, we would observe the convergence.
A second interesting point is that when the maximum is attained at a unique point, then for a compact set with : We have developed some numerical examples to see these phenomena about Theorem 10.
Example 11. Let be with . It is easy to see that the corresponding maximum with is attained at . Furthermore the maximum value is , and it is unique. For , we obtain the corresponding Hermite-Fejér
approximants (based on the roots of ) the corresponding , and we evaluate the quotient in 5000 random points of the arc . As we have said the maximum of the quotients must converge to a value less
than . As a second part of the example we evaluate the quotients in 1000 random points of the arc . This second sequence must converge to ; notice that the great number of evaluations gives an
estimate of the supremum norm.
Table 1 shows the results observed for .
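Because the formulas were lost in extraction, here is a self-contained numerical sketch of the kind of experiment Example 11 describes. It is not the authors' algorithm (they work via the fundamental polynomials of representation (3)-(4)): this version simply solves the linear system for the coefficients of a Laurent polynomial at the n-th roots of unity, and the coefficient range z^{-n},...,z^{n-1} is an assumption about the interpolation space rather than something taken from the paper. The test function f(z) = e^z stands in for the paper's examples.

```python
import numpy as np

def hermite_fejer_coeffs(f, n):
    """Coefficients c_k, k = -n..n-1, of a Laurent polynomial h with
    h(a_j) = f(a_j) and h'(a_j) = 0 at the n-th roots of unity a_j."""
    nodes = np.exp(2j * np.pi * np.arange(n) / n)
    ks = np.arange(-n, n)                      # 2n unknown coefficients
    A = np.zeros((2 * n, 2 * n), dtype=complex)
    b = np.zeros(2 * n, dtype=complex)
    for j, a in enumerate(nodes):
        A[2 * j] = a ** ks                     # value condition   h(a)  = f(a)
        A[2 * j + 1] = ks * a ** (ks - 1)      # derivative cond.  h'(a) = 0
        b[2 * j] = f(a)
    return ks, np.linalg.solve(A, b), nodes

def laurent_eval(ks, c, z):
    return sum(ck * z ** k for k, ck in zip(ks, c))

f = np.exp
ks, c, nodes = hermite_fejer_coeffs(f, 8)
err_at_nodes = max(abs(laurent_eval(ks, c, a) - f(a)) for a in nodes)
# Error sampled on a circle slightly outside the unit circle, as in the examples:
zs = 1.01 * np.exp(2j * np.pi * np.linspace(0.0, 1.0, 200))
err_outside = max(abs(laurent_eval(ks, c, z) - f(z)) for z in zs)
print(err_at_nodes, err_outside)
```

Evaluating the error quotient at many points of an arc, as in the example, is then just a loop over `laurent_eval` at points of the chosen arc.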
Figure 1 shows the graphic of for , , and . In this graphic we can see that the maximum of the error is attained near to or near and its corresponding points in are near to .
Table 2 shows the results observed for .
Next we are going to apply Theorem 6. This result and the details of its proof claim that for an analytic function , a compact arc of radius and under the corresponding assumptions, and the
convergence can be increasing or decreasing; really it depends on the sign of .
We must point out that outside the unit disc the algorithms for Hermite-Fejér interpolation can be unstable, so we deal with a compact set near .
Example 12. Let be , and let , , and be the arcs , , and , respectively. It is easy to see that is attained at . So we can observe tending to and tending to a number . For , we obtain the
corresponding Hermite-Fejér approximants, and the corresponding , and we obtain evaluations for the quotient in random points of the arc . As we have said the maximum of quotients must converge to a
value less than . As a second part of the example we obtain evaluations for the quotients in random points of the arc , and this second sequence must converge to . Notice that the great number of
evaluations gives an estimation of the supremum norm. Table 3 shows the observed results.
The authors want to give thanks to the referee for the valuable suggestions to improve the paper. The research was supported by Ministerio de Educación y Ciencia of Spain under Grant no.
Find the coordinates of the other endpoint when you are given the midpoint and one of the endpoints. Answer in terms of a decimal rounded to the nearest tenth or an improper fraction in simplest form.
P1 = (3, 5), M = (-2, 0), P2 = ( , )
helpppp
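One way to see it: the midpoint formula M = ((x1 + x2)/2, (y1 + y2)/2) can be solved for the unknown endpoint, giving P2 = 2M - P1 componentwise. A quick sketch of that calculation with the values from the question:

```python
# Given one endpoint P1 and the midpoint M, the other endpoint is
# P2 = 2*M - P1, since M = (P1 + P2) / 2 componentwise.
p1 = (3, 5)
m = (-2, 0)
p2 = (2 * m[0] - p1[0], 2 * m[1] - p1[1])
print(p2)  # → (-7, -5)
```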
Organised in Cambridge by Mike Gordon and Tobias Nipkow, 24-25 Aug., 2009
Post comments by sending email to: itpworkshop@googlegroups.com.
Subscribe to the group and then upload interesting papers.
It is hoped to produce a white paper reflecting the results of the workshop (e.g. future needs, ideas for actions, collaborations). The material below, which is expected to evolve, is derived from detailed notes taken at the workshop by Matt Kaufmann, together with comments that have been received. It is possible that some participants may feel that topics have been incorrectly emphasised, mentioned material is of low interest and important ideas have been omitted. Please send Mike Gordon input so that further versions can better reflect your views.

There is a striking agreement by almost everyone that interactive theorem provers should become integrated design, execution and verification platforms, with links to external tools. All the major ITP systems (ACL2, Coq, Isabelle, HOL, PVS) already support this and future plans emphasise the need for more research towards improving the integration. External facilities that can (and are) being linked include SAT and SMT solvers, proof search engines, model checkers, tools for numerical methods and computer algebra systems. There is also agreement that, if possible, external tools should provide proof certificates that can be replayed or otherwise checked by the importing ITP system. It is especially remarkable that the major existing tools share a common vision, because they are based on widely different software architectures (e.g. extensible small kernel versus large pre-built tool suite) and logical foundations (classical set theory versus constructive type theory). Furthermore, they were developed within largely disjoint technical cultures.

Embedded computation is not only useful for executing formal specifications; it is required for proofs needing lemmas proved by calculation (e.g. in the Four-Colour proof). A key component is having a programming language as part of the term structure of the logic supported by the theorem prover. Besides being used for computations within proofs, executable terms are also used for checking examples and counterexamples and for producing certified artifacts (e.g. trustworthy data). All major current ITP tools support computation to some extent, though they have quite different methods of executing programs and moving between computation and deduction.

ITP systems have a diverse variety of uses that range from computer system verification to artificial intelligence. Examples include: assisting the creation and checking of mathematical proofs, design, verification and optimisation of hardware and software, formal synthesis of implementations from specifications, checking properties, finding errors and counterexamples, planning and constraint solving. Although full proof of correctness is often seen as the ultimate goal of ITP applied to verification, simpler specification consistency checking can have great value (e.g. type checking, shape analysis, model checking, termination analysis etc). Experiments in seamlessly linking these to theorem proving have been done, but further work is needed, especially large case studies.

There are a number of new ways of interacting with theorem provers in progress or being planned. These derive both from the way computing is evolving (e.g. availability of efficient parallel execution thanks to multi-core processors) and from the growing size of applications, particularly checking huge mathematics proofs. However, the research challenges of improving interaction may be tool and application specific and may be orthogonal to ITP and logical issues. There is some support for the idea of adopting modern software engineering methods for managing large proofs, both system architecture ideas and existing software development platforms (e.g. IDEs like Visual Studio, Eclipse, NetBeans). It is not clear, however, if off-the-shelf software development interaction paradigms can be adapted for proof development. Interactive proof may need new kinds of interaction - some interesting ideas are already emerging. Hierarchical, graphical tools can help with organising, understanding and constructing large proofs. The Emacs-based Proof General front end is an existence proof that tool- and application-independent ITP interfaces can be successful, though opinions on the value and long-term prospects for Emacs are divided.

Theorem proving has now reached a stage where it is possible to perform (or attempt to perform) interactive proofs of very complex existing mathematical results such as Goedel's Theorems, the Four-Colour Theorem, the Kepler Conjecture (the Flyspeck project) and the classification of finite simple groups. This application area is putting demands on ITP tools to become more friendly to mathematicians. Although this work is "blue sky" research, it may open up new verification possibilities in software areas underpinned by sophisticated mathematical theories (e.g. cryptography). Challenge goals are: (i) having all of college mathematics formalised and available off-the-shelf for use by theorem provers (an estimate of 140 person years was given); (ii) automating high school mathematics using a combination of artificial intelligence and decision procedures. If progress is made on these challenges there may be applications to education (both generating course material and providing better computer support for teaching and grading students).

The interaction needed to find proofs is seen as a positive creative process and not as a necessary evil. Support for prototyping proofs, like automatic generation of finite counter-examples and complex type checking, is already useful. Exploring the duality between proving and refuting may be a good route to getting more effective proof finding interaction with tools. Currently only finite counter-examples are supported, but there are new ideas emerging that might lead to ways for automatically generating symbolic counterexamples in areas such as continuous mathematics. Methods for visualising counterexamples would also be useful. Since most programs and initial conjectures contain errors, there was a general consensus that counter-example finding deserves a higher priority than it has traditionally enjoyed.

There is the possibility that ITP tools could aid in the creation of new mathematics rather than just check old material. Perhaps some kinds of future mathematics may only be possible with computer support (by analogy with how computer algebra systems enable previously intractable physics and engineering symbolic calculations). However, the current logics supported by existing ITP tools are recognised as not being attractive to mathematicians, so it will be hard to recruit people from the mathematical community to use them. The pseudo-natural language of Mizar and the outsourcing of Flyspeck to cheap smart Vietnamese mathematicians are successful, but very different, methodologies for engaging mathematicians. Neither of these approaches is likely to work on a larger scale, especially as the wider mathematical community is thought to be both sceptical and poorly motivated towards computer tools.

For ITP to support large formal specification and verification applications, real world specifications of, for example, accurate semantics of widely used programming languages, networking protocols or modern processor architectures need to be available. This is a major challenge. Formal specifications are developed in a variety of formalisms and currently it is hard to port these between systems, so one can become "locked in" to a particular tool. If one has a project needing to reuse specifications in the formalisms of different tools, then complete formality of translation may be impractical and various semi-formal pragmatic compromises may be necessary.

Current standards tend to be informal and (sometimes deliberately and beneficially) ambiguous. Standards based on formal specification methods could enhance the accuracy of models used for verification, and benefits to the wider assurance certification community can be envisioned, e.g. the possibility of reference specifications and implementations being verified compliant to standards. A stepping stone to standards supported by formal specifications would be a better understanding of the semantical relationships between the specification formalisms of different tools. Semantically justified formal translations between theories may be too demanding, but even tools just for translating definitions (specifications) could be useful.

Translations would be aided by recognised standard formats for different kinds of specifications. Proposals were made for: (i) extending the SMT-LIB format towards the kind of logics used for ITP, (ii) interchanging theories between different ITP logics, and (iii) exchanging judgements between ITP and automatic tools via a "toolbus". Such standard formats would enable common repositories for specification, reference implementations etc. to be better supported. A major challenge would be to have a standard representation for general mathematics. This is much discussed, but there are no concrete proposals accepted within the ITP community. Having reference examples and demonstrator case studies in a standard format and in the public domain would provide a powerful resource for students, researchers and evaluators.

The Internet and "cloud computing" provide new distributed possibilities for creating mechanised theories. An inspiration is the success of experiments in the mathematics community of collaborative problem solving (e.g. Polymath). Could projects be run like Polymath to accelerate the development of formal theory infrastructure and also to stress existing tools? A clear challenge is how to fit together contributions built using different theorem proving software. Perhaps this could be seen as an add-on to QPQ or linked to the vdash concept.

Although the source code of all the major ITP tools is in the public domain, the degree to which the software is a traditional 'open source' development varies. More collaboration between different ITP communities is advocated as a way to boost progress, and optimistic signs are emerging.
Morning of Monday 24 August
Freek Wiedijk Three wishes
Alan Bundy A Hiproof Interface for Viewing and Constructing Proofs
Joe Hurd Theory Engineering: Proving in the Large
Andy Gordon Some Challenges for Future ITP
Afternoon of Monday 24 August
Mechanising mathematics: scientific challenge and potential applications
Laurent Théry Formalising Mathematics
Cameron Freer Mechanised Mathematics: Four Provocations
Rob Arthan
John Harrison Grumpy Old Man
Morning of Tuesday 25 August
Search for bugs and search for proofs (discussion lead by Tony Hoare)
Daniel Kroening SMT-LIB for HOL
Interfacing ITP to other tools and the real world
Daniel Kroening Interfacing ITP to the Real World
Peter Sewell Interfacing ITP to other tools and the real world
Tobias Nipkow
Shankar Interaction and Automation
Afternoon of Tuesday 25 August
Konrad Slind ITP Uses and Challenges at Rockwell Collins
Future dreams for current tools:
J Moore, Matt Kaufmann ACL2
Bruno Barras Coq
John Harrison HOL Light
Makarius Wenzel Isabelle
Shankar PVS
Mark Adams (mark AT proof-technologies.com)
Mihhail Aizatulin (avatar AT hot.ee)
Rob Arthan (rda AT lemma-one.com)
Bruno Barras (bruno.barras AT inria.fr)
Nick Benton (nick AT microsoft.com)
Alan Bundy (bundy AT staffmail.ed.ac.uk)
Cameron Freer (freer AT math.mit.edu)
Mohan Ganesalingam (mg262 AT cl.cam.ac.uk)
Georges Gonthier (gonthier AT microsoft.com)
Andy Gordon (adg AT microsoft.com)
Mike Gordon (Mike.Gordon AT cl.cam.ac.uk)
David Greaves (David.Greaves AT cl.cam.ac.uk)
John Harrison (johnh AT ichips.intel.com)
Tony Hoare (thoare AT microsoft.com)
Peter Homeier (palantir AT trustworthytools.com)
Joe Hurd (joe AT galois.com)
Paul Jackson (Paul.Jackson AT ed.ac.uk)
Cliff Jones (cliff.jones AT ncl.ac.uk)
Matt Kaufmann (kaufmann AT cs.utexas.edu)
Andrew Kennedy (akenn AT microsoft.com)
Daniel Kroening (kroening AT comlab.ox.ac.uk)
J Strother Moore (Moore AT cs.utexas.edu)
Magnus Myreen (Magnus.Myreen AT cl.cam.ac.uk)
Tobias Nipkow (nipkow AT in.tum.de)
Scott Owens (Scott.Owens AT cl.cam.ac.uk)
Matthew Parkinson (Matthew.Parkinson AT cl.cam.ac.uk)
Larry Paulson (Larry.Paulson AT cl.cam.ac.uk)
Andy Pitts (Andrew.Pitts AT cl.cam.ac.uk)
Claudio Russo (crusso AT microsoft.com)
Peter Sewell (Peter.Sewell AT cl.cam.ac.uk)
Konrad Slind (klslind AT rockwellcollins.com)
Natarajan Shankar (shankar AT csl.sri.com)
Georg Struth (g.struth AT dcs.shef.ac.uk)
Laurent Théry (Laurent.Thery AT sofia.inria.fr)
Thomas Tuerk (Thomas.Tuerk AT cl.cam.ac.uk)
Viktor Vafeiadis (viktorva AT microsoft.com)
Tjark Weber (tw333 AT cam.ac.uk)
Makarius Wenzel (wenzelm AT in.tum.de)
Freek Wiedijk (freek AT cs.ru.nl)
Please email Mike Gordon if there are any errors, or if you attended, but are not listed above and would like your name added. Maintained by Mike Gordon.
This page last updated on September 15, 2009.
Talking about the Computational Future at SXSW 2013
March 19, 2013
Last week I gave a talk at SXSW 2013 in Austin about some of the things I’m thinking about these days—including quite a few that I’ve never talked publicly about before. Here’s a video, and a
slightly edited transcript:
Well, this is a pretty exciting time for me. Because it turns out that a whole bunch of things that I’ve been working on for more than 30 years are all finally converging, in a very nice way. And
what I’d like to do here today is tell you a bit about that, and about some things I’ve figured out recently—and about what it all means for our future.
This is going to be a bit of a wild talk in some ways. It’s going to go from pretty intellectual stuff about basic science and so on, to some really practical technology developments, with a few
sneak peeks at things I’ve never shown before.
Let’s start from some science. And you know, a lot of what I’ll say today connects back to what I thought at first was a small discovery that I made about 30 years ago. Let me tell you the story.
I started out at a pretty young age as a physicist. Diligently doing physics pretty much the way it had been done for 300 years. Starting from this-or-that equation, and then doing the math to figure
out predictions from it. That worked pretty well in some cases. But there were too many cases where it just didn't work. So I got to wondering whether there might be some alternative; a different approach.
At the time I’d been using computers as practical tools for quite a while—and I’d even created a big software system that was a forerunner of Mathematica. And what I gradually began to think was that
actually computers—and computation—weren’t just useful tools; they were actually the main event. And that one could use them to generalize how one does science: to think not just in terms of math and
equations, but in terms of arbitrary computations and programs.
So, OK, what kind of programs might nature use? Given how complicated the things we see in nature are, we might think the programs it’s running must be really complicated. Maybe thousands or millions
of lines of code. Like programs we write to do things.
But I thought: let’s start simple. Let’s find out what happens with tiny programs—maybe a line or two of code long. And let’s find out what those do. So I decided to do an experiment. Just set up
programs like that, and run them. Here’s one of the ones I started with. It’s called a cellular automaton. It consists of a line of cells, each one either black or not. And it runs down the page
computing the new color of each cell using the little rule at the bottom there.
OK, so there’s a simple program, and it does something simple. But let’s point our computational telescope out into the computational universe and just look at all simple programs that work like the
one here.
Well, we see a bunch of things going on. Often pretty simple. A repeating pattern. Sometimes a fractal. But you don’t have to go far before you see much stranger stuff.
This is a program I call “rule 30“. What’s it doing? Let’s run it a little longer.
That’s pretty complicated. And if we just saw this somewhere out there, we’d probably figure it was pretty hard to make. But actually, it all comes just from that tiny program at the bottom. That’s
it. And when I first saw this, it was my sort of little modern “Galileo moment”. I’d seen something through my computational telescope that eventually made me change my whole world view. And made me
realize that computation—even as done by a tiny program like the one here—is vastly more powerful and important than I’d ever imagined.
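For readers who want to try this themselves, the rule 30 update is only a few lines of code. Here is a minimal sketch in Python (rather than Wolfram Language, purely for illustration): each cell looks at itself and its two neighbors, reads that neighborhood as a 3-bit number v, and takes bit v of the rule number 30 as its new color.

```python
# Rule 30 elementary cellular automaton: the new color of a cell depends
# on the cell and its two neighbors. Reading the neighborhood
# (left, center, right) as a 3-bit number v, the new cell is bit v of 30.

def step(cells, rule=30):
    """One update of an elementary CA; cells beyond the ends count as white (0)."""
    padded = [0] + cells + [0]
    return [(rule >> (4 * padded[i] + 2 * padded[i + 1] + padded[i + 2])) & 1
            for i in range(len(cells))]

def run(steps, rule=30):
    """Evolve from a single black cell, widening each row so the pattern can grow."""
    rows = [[1]]
    for _ in range(steps):
        rows.append(step([0] + rows[-1] + [0], rule))
    return rows

for row in run(8):
    print("".join("#" if c else "." for c in row).center(19))
```

Running it prints the familiar growing triangle, with the characteristic irregular right-hand side appearing within the first few rows.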
Well, I’ve spent the past few decades working through the consequences of this. And it’s led me to build a new kind of science, to create all sorts of practical technology, and to make me think about
almost everything in a different way. I published a big book about the science about ten years ago. And at the time when the book came out, there was quite a bit of "paradigm shift turbulence". But
looking back it’s really nice to see how well the science has taken root.
And for example there are models based on my kinds of simple programs showing up everywhere. After 300 years of being dominated by Newton-style equations and math, the frontiers are definitely now
going to simple programs and the new kind of science.
But there’s still one ultimate app out there to be done: to figure out the fundamental theory of physics—to figure out how our whole universe works. It’s kind of tantalizing. We see these very simple
programs, with very complex behavior.
It makes one think that maybe there’s a simple program for our whole universe. And that even though physics seems to involve more and more complicated equations, that somewhere underneath it all
there might just be a tiny little program. We don’t know if things work that way. But if out there in the computational universe of possible programs, the program for our universe is just sitting
there waiting to be found, it seems embarrassing not to be looking for it.
Now if there is indeed a simple program for our universe, it’s sort of inevitable that it has to operate kind of underneath our standard notions like space and time and so on. Maybe it’s a little
like this.
A giant network of nodes, that make up space a bit like molecules make up the air in this room. Well, you can start just trying possible programs that create such things. Each one is in a sense a
candidate universe.
And when you do this, you can pretty quickly say most of them can’t be our universe. Time stops after an instant. There are an infinite number of dimensions. There can’t be particles or matter. Or
other pathologies.
But what surprised me is that you don’t have to go very far in this universe of possible universes before you start finding ones that are very plausible. And that for example seem like they’ll show
the standard laws of gravity, and even some features of quantum mechanics. At some level it turns out to be irreducibly hard to work out what some of these candidate universes will do. But it’s quite
possible that already caught in our net is the actual program for our universe. The whole thing. All of reality.
Well, if you’d asked me a few years ago what I thought I’d be doing now, I’d probably have said “hunting for our universe”. But fortunately or unfortunately, I got seriously sidetracked. Because I
realized that once one starts to understand the idea of computation, there’s just an incredible amount of technology one can build—that’s to me quite fascinating, and that I think is also pretty
important for the world. And in fact, right off the bat, there’s a whole new methodology one can use for creating technology.
I mean, we’re used to doing traditional engineering—where we build things up step by step. But out there in the computational universe, we now know that there are all these programs lying around that
already do amazing things. So all we have to do is to go out and mine them, and find ones that fit whatever technological purpose we’re trying to achieve.
And actually we’ve been using this kind of automated algorithm discovery for quite some time now. By now Mathematica and Wolfram|Alpha are full of algorithms and programs that no human would ever
have come up with, but were just found by systematically searching the computational universe. There’s a lot that can be done like this. Not just for algorithms, but for art, like this, and for
physical structures and devices too.
Here’s an important point that comes from the basic science. 75 years ago Alan Turing gave us the idea of universal computation. Which is what showed that software was possible, and eventually
launched the whole computer revolution. Well, from the science I’ve done comes what I call the Principle of Computational Equivalence. Which among other things implies that not only are universal
computers possible; they’re actually really common out there in the computational universe. Like this is the simplest cellular automaton we know is a universal computer—with that tiny little rule at
the bottom there.
And from a very successful piece of crowdscience that we did a few years ago, we know this is the simplest possible universal Turing machine.
Tiny things. That we can reasonably expect exist all over the natural world. But that are computationally just as powerful as any computer we can build, or any brain, for example. Which explains, by
the way, why so much of nature seems so hard for us to decode.
And actually, this starts to get at some big old questions. Like free will. Or like the nature of intelligence. And one of the things that comes out of the Principle of Computational Equivalence is
that there really can’t be something special that is intelligence—it’s all just computation. And that has important consequences for thinking about extraterrestrial intelligence. And also for
thinking about artificial intelligence.
For me it was this philosophical breakthrough that led to a very practical piece of technology: Wolfram|Alpha. Ever since I was kid I’d been interested in seeing how to take as much of the knowledge
that’s been accumulated in our civilization as possible and make it computable. Somehow make it so that if there’s a question that can be answered on the basis of this knowledge, it can be done
For years I thought that doing that would require building something like a brain. And every decade or so I would ask myself if it was time yet, and I would conclude that it was just too hard. But
finally from the Principle of Computational Equivalence I realized that, no, it all had to be doable just with computation. And that’s how I came to start building Wolfram|Alpha.
I hope you’ve mostly seen Wolfram|Alpha—on the web, in Siri, in apps, or wherever.
The idea is: you ask a question, in natural language, and Wolfram|Alpha tries to compute the answer, and generate a report, using knowledge that it has. At some level, this is an insanely difficult
thing to make work. And if we hadn’t managed to do it, I might have thought it was pretty much impossible.
First, you’ve got to get all that data, on all sorts of things in the world. And no, you can’t just forage it from the web. You have to actually go interact with all the primary sources. Really
understand the data, with actual human experts. And curate it to the point where it can reliably be used to compute from. And by now I think we’ve got more bytes of raw data inside Wolfram|Alpha than
there is meaningful text content on the whole web.
But that’s only the beginning. Most questions people have aren’t answered just by retrieving a piece of data. They need some kind of computation. And for that we’ve had to take all those methods and
models and algorithms that come from science and engineering and financial analysis and whatever and implement them. And by now it’s more than ten million lines of very high-level Mathematica code.
So we can compute lots of things. But now we’ve got to know what to compute. And the only realistic way for humans to interface with something this broad is through humans’ natural language. It’s not
just keywords; it’s actual pieces of structured language, written or spoken. And understanding that stuff is a classic hard problem.
But we have two secret weapons. First, a bunch of methods from my new kind of science. And second, actual underlying knowledge, a bit like us humans have, that lets us decode and disambiguate.
Over the 3 years since Wolfram|Alpha launched I’m pleased at how far we’ve managed to get. It’s hard work, but now more than 90% of the queries that come to our website we can completely understand.
We’ve really cracked the natural language problem, at least for these small snippets.
So once we’ve understood the input, what do we do? Well, what we’ve found is that people almost never want just one answer—42 or whatever. They want a whole custom report built for them. And we’ve
developed a methodology now for automatically figuring out what information to present, and how to present it.
Many millions of people use this every day. A few web tourists. An awful lot of students, and professionals, and people wanting to figure all kinds of things out. It’s kind of nice to see how few of
the queries we get are things that you can just search for on the web. People are asking us fresh, new, questions whose answers have never been written down before. So the only way to get those
answers would be to find a human expert to ask—or to have Wolfram|Alpha compute them. It’s a huge project that I personally expect to keep working on forever.
It’s fascinating of course. Combining all these different areas of human knowledge. Figuring out things like how to curate and make computable human anatomy, or the 3 million or so theorems that
exist in the literature of mathematics. I’m quite proud of how far we’ve got already, and how much faster we’re getting at doing things.
And, you know, it’s not just about public knowledge. We’re also now able to bring in uploaded material, and use our algorithms and knowledge to analyze it. We can bring in a picture. And Wolfram|
Alpha will tell us things about it.
And we could explicitly tell Wolfram|Alpha to do some image computation. It works really nicely on a phone. Or we could upload a spreadsheet. And Wolfram|Alpha can use its linguistics to decode
what’s in it, and then automatically generate a report about what’s interesting in the data.
Or we could get data from some internal database and ask natural language questions about it. And get custom reports automatically generated that can use external data as well as internal data. It’s
incredibly powerful. And actually we have quite a business going building custom versions of Wolfram|Alpha for companies and other organizations.
It’s gradually getting more and more automated, and actually we’re planning to spin off a company specifically to do this kind of thing.
And you know, given the Wolfram|Alpha technology stack, there are so many places to go. Like having Wolfram|Alpha not just generate information, but actually do things too. You tell it something in
natural language. And it uses algorithms and knowledge to figure out what to do.
Here’s a sophisticated case. As part of our high-end business, last year we released Wolfram SystemModeler.
Which is a tool for letting one design and simulate complex devices with tens of thousands of components. Like airplanes or turbines. Well, hooking this up to Wolfram|Alpha, we’ll be able to just ask
questions to Wolfram|Alpha, and have it go to SystemModeler to automatically simulate a device, and then figure out how to do something.
Here’s a different direction: set Wolfram|Alpha loose on something like a document, where it can use our natural language technology to automatically add computation.
You know, today Wolfram|Alpha operates as an on-demand system: you say something to it, and it’ll respond. But in the future, it’s increasingly going to be used in a preemptive way. It’s going to
sense or see something, and it’s automatically going to show you what it thinks you should know. Right now, the main issue that we see in people using Wolfram|Alpha is that they don’t understand all
the things it can do. But in this preemptive mode, there’s no issue with that kind of discovery. Wolfram|Alpha is just going to automatically be figuring out what to show people. And once the
hardware for augmented reality is there, this is going to be really neat. I mean, within Mathematica we now have what I think is the world’s most powerful image computation system. And combining this
with Wolfram|Alpha capabilities, we’re going to be able to do a lot.
I mentioned Mathematica here. It’s sort of our secret weapon. It’s how we’ve managed to do everything we’ve done. Including build that outrageously complex thing that is Wolfram|Alpha. Many of you I
hope have heard of Mathematica. This June it’ll be the 25th anniversary of the original release of Mathematica. And I’m proud of how many inventions and discoveries have now been made in the world
using Mathematica over that period of time. As well as how many students have been educated with it.
You know, I originally built Mathematica for a kind of selfish reason: I wanted to have it myself. And my goal was to make it broad enough that it could handle sort of any kind of computation I’d
ever want to do. My approach was kind of a typical natural-science one. Think about all those different kinds of computations, drill down and try to understand the primitives that lie beneath them,
and then implement those primitives in the system. And in a sense my plan was ultimately just to implement anything systematic and algorithmic that could be implemented.
Now I had a very important principle right from the beginning: as the system grew, it must always remain consistent and unified. Every new capability that was added must coherently fit into the
structure of the system. And it was a huge amount of work to maintain that kind of design discipline. But I have to say that particularly in the last 10 years or so, it’s unbelievably paid off.
Certainly it’s important in letting people learn what’s now a very big system. But even more important is that it’s allowed us to have a very powerful kind of recursive development process, in which
anything we add now can “for free” use those huge blocks of functionality that we’ve already built.
The result is that we’ve been covering huge algorithmic areas incredibly fast, and with much more powerful algorithms than have ever been possible before. Actually, a lot of the time we’re really
building not just algorithms, but meta-algorithms. Because another big principle we have is that everything should be as automated as possible.
You as a human want to just tell Mathematica what task you’re trying to perform. And there might be 200 different algorithms that could in principle be used. But it’s up to Mathematica to figure out
automatically what the best one is. Internally, Mathematica is using very sophisticated algorithms—many of which we’ve invented. But the great thing is that a user doesn’t have to know anything about
the details; that’s all handled automatically.
You know, Mathematica has by far the largest set of interconnected algorithmic capabilities that’s ever existed. And it’s not just algorithms that are built in; it’s also knowledge. Because all the
knowledge in Wolfram|Alpha is directly accessible, and progressively more closely integrated, in Mathematica. It’s really quite a transformational thing. I call it knowledge-based computing. Whether
you’re using the Wolfram|Alpha API or Mathematica, you’re able to do computing in which you can in effect start from the knowledge of the world, and then build from there.
I have to say that I’ve increasingly realized that Mathematica has been rather undersold. People think of it as that great tool for doing math. Which it certainly is. But it’s so much more than that.
It was designed that way from the beginning, and as the years go by “math” becomes a smaller and smaller fraction of what the capabilities of Mathematica are about.
Really there are several parts to Mathematica. The most fundamental is the language that Mathematica embodies. It’s ultimately based on the idea that everything can be represented as a symbolic
expression. Whether it’s an array of data, an image, a document, a program, an interface, whatever. This is an idea that I had more than 25 years ago—and over the years I’ve gradually realized just
how powerful it is: having a small set of primitives that can seamlessly handle all those different kinds of things, and that provides in a sense an elegant “fusion” of many popular modern
programming paradigms.
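To make the "everything is a symbolic expression" idea concrete, here is a toy sketch in Python. The real Wolfram Language evaluator is of course vastly richer; the heads List, Plus, Power, and Slider below are just illustrative stand-ins for the idea that one uniform head-plus-arguments structure can represent data, math, and interface elements alike.

```python
# A toy model of symbolic expressions: each expression is a head plus
# zero or more arguments, and arguments may themselves be expressions.

class Expr:
    def __init__(self, head, *args):
        self.head, self.args = head, args

    def __repr__(self):
        if not self.args:
            return str(self.head)
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

# The same small set of primitives represents very different things:
data = Expr("List", 1, 2, 3)                      # an array of data
math = Expr("Plus", Expr("Power", "x", 2), 1)     # a piece of math
ui   = Expr("Slider", Expr("List", 0, 10))        # an interface control

print(data)   # List[1, 2, 3]
print(ui)     # Slider[List[0, 10]]
```

The point of the sketch is only the uniformity: every object, however different in meaning, is the same kind of nested structure, which is what lets generic operations apply across all of them.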
In addition to the symbolic character of the language, there’s another key point. Essentially every other computer language has just a small set of built-in operations. Yes, it has all sorts of
mechanisms for handling in a sense the “infrastructure” of programming. But when it comes to algorithms and so on, there’s very little there. Maybe there are libraries, but they’re not unified, and
they’re not really part of the language. Well, the point in our language is that all those algorithms are actually built right into the language. And that’s not all, there’s actual knowledge and data
also built into the language.
It’s really a new kind of language. Something very different than others. And something incredibly productive for people who use it. But I have to say, in a sense I think it’s been rather hidden all
these years. Not that there aren’t millions of people using the language through Mathematica. But there really should be a lot more—including lots who won’t be caught dead doing anything that anyone
might think had “math” in it.
Really anyone who’s doing anything algorithmic or computational should be using it. Because it’s inevitably just much more efficient than anything else—because it has so much already built in. So one
of the new things that we’re doing is to break out the language that Mathematica is based on, and give it a separate life. We’ve been thinking about this for more than 20 years. But now it’s finally
going to happen.
We agonized for a long time about what to call the language. We came up with all kinds of names—clever, whimsical, whatever—and actually just recently on my blog I asked people for their comments and
suggestions. And I suppose the result was a little embarrassing. Because after all the effort we put in, by far the most common response about the name we should use is the most obvious and
straightforward one. We should call it the Wolfram Language.
So that’s what it’ll be. The language we’ve built for Mathematica, with that huge network of built-in algorithms and knowledge, will be called the Wolfram Language. It’ll use .wolf files, and of
course that means its icon has to be something like this:
What’s going to happen with this language? Well, here’s where things really get interesting. The language was originally built for the desktop platform that’s the current way most people use
Mathematica. But in Wolfram|Alpha, for example, the language is running on a large scale in the cloud. And what’s going to be happening over the next few months is that we’ll be releasing a full
cloud version. And not only that, there’ll also be a version running locally on mobile, first under iOS.
Why is that important? Well, it really opens up the language, both its use and its deployment. So, for example, we’re going to have the Wolfram Programming Cloud, in which you can freely write code
in the language—anything from a pithy one-liner to something giant—right there in the cloud. And then immediately deploy in all sorts of ways.
If you wanted, you could just run it in an interactive session, like in standard Mathematica. But you can also generate an instant API. That you can call from anywhere, to just seamlessly run code in
our cloud. Or you can embed the code in a page, or have the code just run in the background, periodically generating reports or whatever. And then you can take the exact same code, and deploy it on
mobile too.
Now something else that we’ve built and refined over the years in Mathematica is our dynamic interface, that uses symbolic expressions to represent controls and interactivity. Not every use of the
Wolfram Language uses that interface. But what’s happening is that we’re reinterpreting the interface to optimize it not just for the desktop, but also for the cloud and for mobile.
One place the interface is used big time is in what we call “CDF“: our computable document format. We introduced this a couple of years ago. Underneath it’s Wolfram Language code. On top, it’s a
dynamic interactive interface that one can use to make reports and presentations and interactive documents of any kind. Right now, they can be in a plugin in a browser, or they can be standalone on a
desktop. What’s happening now is that they can also be on mobile, or, with cloud CDF, they can operate in a pure web page, with no plugin, but just sending every computation to the cloud.
It might sound a bit abstract here. But I think the whole deployment of the Wolfram Language is going to be quite a revolution in programming. There’ve been seeds of this in Mathematica for a quarter
of a century. But it’s a kind of convergence of cloud and mobile technology—and frankly our own understanding of the power of what we have—that’s making all this happen now.
You know, the fact that it’s so easy to get so much done in the language is not only important for professional programmers; it’s also really important for kids and anyone else who’s learning to
program. Because you don’t have to type much in, and you’re immediately doing serious stuff. And, by the way, you get to learn all those state-of-the-art programming and algorithm concepts right
there. And also: there’s an on-ramp that’s easier than anyone’s ever had before, with free-form natural language courtesy of the Wolfram|Alpha engine. It really seems to work very well for this
purpose—as we’ve seen in our Mathematica Summer Camp for high-school kids, and our new after-school initiative for middle-school kids.
Maybe I should actually show a demo of all this stuff.
There is a whole mechanism for deploying these dynamic things using CDF.
One application area that’s fun—and topical these days—is using algorithmic processes to make things that one can 3D-print.
That was the Wolfram Language on the desktop, and CDF. Here it is in the Programming Cloud.
That’s cloud CDF. This also works on iOS, though the controls look a bit different.
In the next little while, you’ll be seeing a variety of platforms based on our technology. The Document Platform, for creating CDF documents, in the cloud or elsewhere. The Presentation Platform, for
creating full computable interactive presentations. The Discovery Platform, optimized for the workflow of discovering things with our technologies.
Many of these involve not just the pure language, but also CDF and our dynamic interface technology. But one important thing that’s just happening now is that the Wolfram Language, with all its
capabilities, is starting to fit in some very cheap hardware. Like Raspberry Pi. For years if you wanted to embed algorithms into some device, you’d have to carefully compile them into some low-level
language or some such. But here’s the great thing: for the first time, this year, embeddable processors are powerful enough that you can just run the whole Wolfram Language, right on them. So you can
be doing your image processing, or your control theory computation, right there, with all the power of everything we’ve built in Mathematica.
By the way, I might say something about devices. The whole landscape of sensors and devices is changing, with everything getting more diverse and more ubiquitous. And one important thing we’re doing
is making a general Connections Hub for sensors and devices. In effect we’re curating sensors and devices, and working with lots of manufacturers. So that the data that comes from their systems can
seamlessly flow into Wolfram|Alpha, or into anything based on the Wolfram Language. We’re building a generic analytics system that anyone can plug into. It can be used in a fully automatic way, like
in Wolfram|Alpha Pro. And it can be arbitrarily customized and programmed, using the Wolfram Language.
By the way, another component of this, primarily for researchers, is that we’re building a general Data Repository. What’s neat here is that because of our Wolfram|Alpha linguistic capabilities, we
can automatically read and align data. And then of course we can do analysis. When you read a research paper today, if you’re lucky there’ll be some URL listed where you can find data in some raw
form. But with our Data Repository people are going to be able to have genuinely “data-backed papers”. Where anyone can immediately do comparisons or new analysis.
Talking of data, I’ve been a big collector of it personally for a long time. Last year here I showed for the first time some of my nearly 25-year time series of personal analytics data. Here’s the
new version.
That’s every email I sent, including this year.
That’s keystrokes.
And that’s my whole average daily rhythm over the past year.
Oh, and here’s something useful I built actually right after South by Southwest last year, that I was embarrassed I didn’t have before: the time series of the number of pending and unanswered emails
I have. (It’s computing in real time here in our cloud platform.)
It’s sort of a proxy for busyness level. Which is pretty useful in managing my schedule and so on.
Well, bizarre as it seems to me, I may be the human who’s ended up collecting the most long-term data on themselves of anyone.
But nowadays everyone’s got lots of data on themselves. Like on Facebook, for example. And so in Wolfram|Alpha we recently released Personal Analytics for Facebook. It’ll be coming out in an app soon
too. So you can just go to Wolfram|Alpha and ask for a Facebook report, and it'll generate actually a whole little book about you, combining analysis of your Facebook data with public computational knowledge.
My personal Facebook is a mess, but here’s what the system does on it:
When we first released our Personal Analytics for Facebook we were absolutely draconian about not keeping any data. And no doubt we destroyed some great sociometric science in the making. But a month or so
ago we started keeping some anonymized data, and started a Data Donor program, which has been very successful. So now we can explore quite a few things. Like here are a few friend graphs.
There’s a huge diversity. Each one tells a story. Both about personality and circumstances.
But let’s look at some aggregate information. Like here’s the distribution of the number of friends that people have.
Like this shows the distributions of ages of friends for a person of a particular age.
The distribution gets broader with age. Actually, after about age 25, there’s some sort of new law of nature one discovers: that at any age about half of people’s friends are between 25% younger and
25% older.
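That "law" is easy to state as a formula: the band runs from 0.75 times a person's age to 1.25 times it. A quick hypothetical helper (the function names here are mine, not Wolfram|Alpha's):

```python
# The observed regularity: past about age 25, roughly half of a person's
# friends fall between 25% younger and 25% older than they are.

def age_band(age):
    """The [25% younger, 25% older] band around a given age."""
    return 0.75 * age, 1.25 * age

def fraction_in_band(age, friend_ages):
    """Fraction of friends whose ages fall inside the band."""
    lo, hi = age_band(age)
    return sum(lo <= a <= hi for a in friend_ages) / len(friend_ages)

print(age_band(40))  # (30.0, 50.0) -- a 40-year-old's band is ages 30 to 50
```

So for a 40-year-old the claim is that about half their friends are between 30 and 50; the band widens in absolute terms as age increases, matching the broadening distributions shown.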
By the way, in Mathematica and the Wolfram Language there's also now direct access to social media data for Facebook, LinkedIn, Twitter and so on. So you can do all kinds of interesting analysis and visualization.
Actually, talking of Personal Analytics, here’s a new dimension. I’ve been walking around South by Southwest for a couple of days wearing this cute Memoto camera, which takes a picture every 30
seconds. And last night my 14-year-old was kind enough to write a bit of code to analyze what I got. Here’s what he came up with.
You know, it’s pretty neat to see how our big technology stack makes all this possible. I mean, even just to read stuff properly from Facebook we’ve got to be able to understand free-form input. Which
of course we can with the Wolfram|Alpha Engine. And then to say interesting things we’ve got to use knowledge and algorithms. Then we’ve got to have good automated visualization. And it helps to have
state-of-the-art large-graph-manipulation algorithms in the Wolfram Language Engine. And also to have CDF and our Dynamic Interface to generate complete reports.
To me it’s exciting—if a little overwhelming—to see how many things can be moved forward with our technology stack. One big one is education. Of course Wolfram|Alpha and Mathematica are extremely
widely used—and well known—in education. And they’re used as central tools in endless courses and so on.
But with our upcoming Cloud Platform lots of new things are going to become possible. And as my way to understand that, I’ve decided it’s time for me to actually make a course or two myself. You
know, I was a professor once, before I was a CEO. But it’s been 25 years. Still, I decided the first course to do was one on Data Science. An Introduction to Data Science. I’m having a great time.
Data Science is a terrific topic. Really in the modern world everyone should learn it. It’s both immediately useful, and a great way to teach programming, as well as general computational and
quantitative thinking.
Between our Cloud Platform and the Wolfram Language, we have a great way to set up the actual course. Here’s the basic setup. Below the video there’s a window where you can just immediately play with
all the code that’s shown. And because it’s just very high-level Wolfram Language code it’s realistic to learn in effect just by immersion.
And when it comes to setting up exercises and so on, it’s pretty interesting when you have Wolfram|Alpha-style natural-language understanding capabilities. I hope the Data Science course will be
ready to test within a few months. And, needless to say, it’s all being built with a very automated authoring system that’ll allow lots of people to make courses like this. I’m thinking
about trying to do a math course, for example.
We get asked about math education a lot, of course. And actually we have a non-profit spinoff called Computer-Based Math that’s been trying to create what we see as a modern
computer-informed math curriculum. You see, the current math curriculum was mostly set a century ago, when the world was very different. Two things have changed today: first, we’ve got computers that
can automate the mechanical doing of math. And second, there are lots of new and different ways that math gets used in the world at large.
It’s going to be a long process modernizing math education, around the world. We’d been wondering what the first country really to commit to Computer-Based Math would be. Turns out it’s Estonia,
which signed up a few weeks ago.
So we’re slowly moving toward people being educated in this kind of computational paradigm. Which is good, because the way I see it, computation is going to become central to almost every field. Let’s
talk about two examples—classic professions: law and medicine. It’s funny, when Leibniz was first thinking about computation at the end of the 1600s, the thing he wanted to do was to build a machine
that would effectively answer legal questions. It was too early then. But now we’re almost ready, I think, for computational law. Where for example contracts become computational. They explicitly
become algorithms that decide what’s possible and what’s not.
You know, some pieces of this have already happened. Like with financial derivatives, like options and futures. In the past these used to just be natural language contracts. But then they got
codified and parametrized. So they’re really just algorithms, which of course one can do meta-computations on, which is what has launched a thousand hedge funds, and so on.
Well, eventually one’s going to be able to make computational all sorts of legal things, from mortgages to tax codes to perhaps even patents. Now to actually achieve that, one has to have ways to
represent many aspects of the real world, in all its messiness. Which is what the whole knowledge-based computing of Wolfram|Alpha is about.
How about medicine? To me probably the single most important short-term target in medicine is diagnosis. If you get a diagnosis wrong—and an awful lot are wrong in practice—then all the effort and
money you spend is going to be wasted, and is often even going to be harmful. Now diagnosis is a difficult thing for humans. And as more is discovered in medicine—and medicine gets more
specialized—it gets even more difficult. But I suspect that in fact diagnosis is in some sense not so hard for computers. But it’s a big project to make a credible automated diagnosis system. Because
you have to cover everything: it’s no good just doing one particular kind of disease, because then all you’re going to do is say that everyone has it.
By the way, the whole area of diagnosis is about to change—as a result of the arrival of sensor-based medicine. It used to be that you could ask a question or do a test, and the result would be one
bit, or one number. But now it’s routine to be able to get lots and lots of data. And if we’re really going to use that data, we’ve got to use computers; humans just don’t deal with that kind of
thing. It’s an ambitious project with many pieces, but I think that using our technology stack—and some ideas from science I’ve developed—we know how to do automated medical diagnosis. And we’re
actually spinning off a company to do this.
You know, it’s interesting to think about the broad theory of diagnosis. And I think an interesting model for medical diagnosis is software diagnosis—figuring out what’s going wrong with a large
running software system. In medicine we have all these standard diagnosis codes. For an operating system one might imagine having things like “diseases of the memory management system” or “diseases
of the keyboard driver”. In medicine, we’re starting to be able to measure more and more. But in software we can in principle monitor almost everything. But we need methodologies to interpret what
we’re seeing.
By the way, even though I think diagnosis is in the short term a critical point in medicine, I think in the long term it’s simply going to go away. In fact, from my science—as well as the software
analogy—I think it’s clear that the idea of discrete diseases is just wrong. Of course, today we have just a few thousand drugs and surgeries we can use. But I think more and more we’ll be using
algorithmic treatments. Whether it’s medical devices that behave according to algorithms, or whether it’s even programmable drugs that effectively do a computation at the molecular scale to work out
how to act. And once the treatments are algorithmic, we’re really going to want to go directly from data on symptoms to working out the treatment, often adaptively in real time.
My guess is it’s going to end up a bit like a financial portfolio. You watch what the stocks do, and you have algorithms to decide how to respond. And you don’t really need to have a verbal
description—like the technical trader’s “head and shoulders” pattern or something—of what the stock chart is doing.
By the way, when you start thinking about medicine in fundamentally computational terms, it gives you a different view of human mortality. It’s like the operating system that’s running, and over the
course of time has various kinds of trauma and infections, starts running slower, and eventually crashes, and dies. If we’re going to avoid mortality, we need to understand how to intervene to keep
the operating system—or the human—up and running. There are lots of interim steps. Taking over more and more biological functions with technology. And figuring out how to reprogram pieces of the
molecular machine that is our body. And figuring out if necessary how to “hit the pause button” to freeze things, presumably with cryonics.
By the way, it’s bizarre how few people work on this. Because I’m sure that, just like cloning, there’s just going to be a wacky procedure that makes it possible—and once we know it, we’re just going
to be able to do it quite routinely, and it’s going to be societally very important. But in the end, we want to solve the problem of keeping all the complexity that is a human running indefinitely.
There are some fascinating basic science problems here. Connected to concepts like computational irreducibility, and a bit to the traditional halting problem. But I have no doubt that eventually
it’ll be solved, and we’ll achieve effective human immortality. And when that happens I expect it’ll be the single biggest discontinuity in human history.
You know, as one thinks about such things, one can’t help wondering about the general future of the human condition. And here’s something someone like me definitely thinks about. I’m spending my life
trying to automate things. Trying to make it possible to do automatically with computation things that humans used to have to do themselves.
Now, if we look at the arc of human history, the biggest systematic change through time is the arrival of more and more technology, and the automation of more and more kinds of tasks. So here’s a
question: what if we succeed in automating everything? What will happen then? What will the humans do? There’s an ultimate—almost philosophical—version of this question. And there’s also a practical
next-few-decades version.
Let’s start with the ultimate version. As we go on and build more and more technology, what will the end point be? We might assume that we could somehow go on forever, achieving more and more. But
the Principle of Computational Equivalence tells us that we cannot. Once we have reached a certain level, everything is already in a sense possible. And even though our current engineering has not yet
reached this point, the Principle of Computational Equivalence also tells us that this maximal level of computational sophistication is not particularly rare. Indeed it happens in many places in the
physical world, as well as in systems like simple cellular automata.
And it’s not too hard to see that as we improve our technology, getting down to the smallest scales, and removing everything that seems redundant, we might wind up with something that looks just
like a physical process that already happens in nature. So does this mean that in the ultimate future, with all that great automation and technology, all we’ll achieve is just to produce something
that’s indistinguishable from zillions of things that already exist in nature?
In some sense, yes. It’s a sort of ultimate Copernicanism: not only is our Earth not the center of the universe, and our bodies not made of something physically unique. But also, what we can achieve
and create with our intelligence is not in a fundamental sense different from what nature is already doing.
So is there any meaningful ultimate future for us? The answer is yes. But it’s not about doing some kind of scientific utopian thing, and achieving some ultimate perfect state that’s independent of
our history. Rather, it’s about doing things that depend on all those messy details of us humans and our history.
Here’s a way to understand this. Imagine our technology has got us a complete AI sitting in a box on a desk. It can do all sorts of incredible things; all sorts of sophisticated computations. The
question is: what will it choose to do? It has no intrinsic way to decide. It needs some kind of goal, some kind of purpose, imposed on it. And that’s where we humans and our history come in. I mean,
for humans, there is again no absolute purpose abstractly defined. We get our notion of purpose from the details of our existence and our history. And to achieve ultimate technology is in a sense
empty unless purposes are defined for it, and that’s where we humans come in.
We can begin to see this pretty well even right now. In the past, our technology was such that we typically had to define quite explicitly what systems we build should do, say by writing code that
defines each step they should take. But today we’ve increasingly got much more capable systems, that can do all kinds of different things. And we interact with them in a sense by injecting purpose.
We define a purpose or a goal, and then the system figures out how it can best achieve that goal.
Well, of course, human purposes have evolved quite a bit over the course of human history. And often their evolution is connected to the arrival of technology that makes more things possible. So it’s
not too clear what the limit of this kind of co-evolving system will be, and whether it will turn out to be wonderful or terrible. But in the nearer term, we can ask what effect increasing automation
will have on people and society. And actually, as I was thinking about this recently, I thought I’d pull together some data about what’s happened with this historically. So here are some plots over
the past 150 years of what fractions of people in the US have been in different kinds of occupations. Blue for males; pink for females.
There are lots of interesting details here, like the pretty obvious direct and indirect effects of larger government over the last 50 years. But there’s also a clear signature of automation, with a
variety of kinds of occupations simply going away. And this will continue. And indeed my expectation is that over the coming years a remarkable fraction of today’s occupations will successfully be
automated. In the past, there’ve always been new occupations that took the place of ones that were automated away. And my guess, or perhaps hope, is that for most people some hybrid of avocation and
occupation will emerge.
Which brings me to something I’ve been thinking about quite a lot recently. I’m mostly a science, technology and ideas guy. But I happen also to be very interested in people. And over the years I’ve
had the good fortune to work with—and mentor—a great many very talented people. But here’s something I’ve noticed. Many people—and young people in particular—have an incredibly difficult time picking
a good occupation—or avocation—for themselves. It’s a bit of a puzzle. People have certain sets of talents and interests. And there are certain niches that exist in the world at any given time. The
problem is to match a given person with a niche.
Now sometimes people—and I was an example—pick out a pretty clear niche by the time they’re early teenagers. But an awful lot of people don’t. Usually there are two problems. First, people don’t
really identify their skills and interests. And second, people don’t know what’s possible to do in the world. And in the end, an awful lot of people pick directions—almost at random—that aren’t in
fact very good for them. And I suspect in terms of wasted resources in the world, this is pretty high up there.
You know, I have a kind of optimistic theory—that’s supported by a lot of personal observation—that for almost every person, there’s at least one really good thing they could be doing, that they will
find really fulfilling. They may be lucky or unlucky about what value the world places on that thing at a given time in history. But if they can find that thing—and it often isn’t so easy—then it’s a wonderful thing.
Well, needless to say, I’ve been thinking what can be done. I’ve personally worked on the problem many times. With many great results. Although I have to say that almost always I’ve been dealing with
highly capable individuals in good circumstances. And I do want to figure out how to generalize, to younger folk and less good circumstances. But whatever happens, there’s a puzzle to solve. A little
like medical diagnosis. Requiring understanding the current situation. Then knowing what’s possible. And one of the practical challenges is knowing enough about how the world is evolving, and what
new occupations and ways to operate in the world are emerging.
I’m hoping to do more in this direction. I’m also thinking a bunch about the structure of education. If people have an idea what they might like to do, how do they develop in that direction? The
current system with college and so on is pretty inflexible. But I think there are better alternatives, that involve effectively doing diverse mentored projects. Which is something we’ve seen very
successfully in the summer schools we’ve done over the past decade.
But anyway, with all this discussion about what people should do: that’s a big challenge for someone like me too. Because I’m in this situation where I’ve been building things for 30 years, and now
there are just an absurd number of things that what I’ve built makes possible. We’re pursuing a lot of things at our company. But we only have 700 people, which isn’t enough for everything we want to
do. I made a decision long ago to have a simple private company, so we could concentrate on the long term, and on what we really wanted to do. And I’m happy to say that for the last quarter century
that’s worked out very well. And it’s made possible things like Wolfram|Alpha—that probably nobody but me would ever have been crazy enough to put money into.
But now we’ve just got too many opportunities, and I’ve decided we’re just leaving too many great ideas—and great technology prototypes—on the table. So we’ve been learning how to spin off companies
to develop these things. And actually, we have a whole scheme now for setting up an outside fund to invest in spinoffs that we’re doing.
I’ve been used to architecting technical systems. But architecting these kinds of business structures is also pretty interesting. Sort of trying to extend the machine I’ve built for turning ideas
into reality. You know, I like to operate by having a whole portfolio of long-range ideas. Which I carry around with me for a long time. Like for Wolfram|Alpha it was more than 30 years. Gradually
waiting for the circumstances and the right time to pursue them. And as I said earlier, I would probably be doing my physics project now, if technology opportunities hadn’t got in the way.
Though I have to say that the architecture of that project is tricky too. Because it’s not clear how to fit it into the world. I mean, lots of people, including myself, are incredibly curious about
it. But for the physics community it’s a scary, paradigm-breaking, proposition. And it’s going to be an uphill story there.
And the issue for someone like me is: how much does the world really want something like the fundamental theory of physics done? It’s always great feedback for me doing projects where people really
like the results. I don’t know about this one. I’ve been thinking about trying to find out by putting up a Kickstarter project or something for finding the fundamental theory of physics. It’s kind of
funny how one goes from that level of practicality, to thinking about the structure of our whole universe. It’s fun—and to me—it’s invigorating.
Well, there are lots more things it’d be fun to talk about. But let me stop here, and hope that you’ve enjoyed hearing a little about what’s going on these days in my small corner of the world.
1. Great talk. I look forward to trying out the new functionality.
george woodrow
2. Masterpiece Stephen, explaining very well the nearly unlimited computational possibilities and future expectations, thanx!
Richard de Jeu Metrigroup NL
4. https://plus.google.com/communities/112845006884148391862
Makers Hackers Artists & Engineers
a Google+ community
My curiosity is stimulated
But i like to use my hands & more tools than a keyboard. Thanks for your speculations.
jon sanford
5. Where is the stochastic / chaotic reductionist approach which can accept a picture or data from communications channel(s) resulting or emitted from even the most trivial deterministic automata,
find partitions and classify and extract any inherent stochastic, chaotic, and deterministic factors successively, until (any) noise is gone, and only (any) underlying determinism remains??
Running Automata forward in time makes for pretty pictures.
Analytically ‘sensing’, perhaps mapping, time series, corpora of data, etc. across permutations of non-isotropic and varying-continuous dimension higher-order tensor fields; finding all possible
partitions and teasing out the abstractions which describe deterministic, chaotic, or purely random ‘essences’ seems, at least intuitively, possible, and valuable.
Any sort of ‘unstructured’ data, even encrypted, encoded, compressed, temporal, spatial/dimensional data, formal and natural syntactic structures and languages, etc. which has any information
content should fall to such universal analysis.
Does the reduction of such analysis to a singular determinism-chaos-randomness spectrum make this approach computable?
Perhaps such an approach would stop short of positing the existence of the Earth, the Sun, and attributing causality resulting from these things to the varying motions and colorings encoded in an
encrypted MPEG video bytestream of sunlit leaves on a tree wavering in a breeze, but such an approach should be at least capable of discriminating and parameterizing the many possible
deterministic/chaotic/random underpinnings which might abstractly describe such, or any data.
Bob Montgomery
{"url":"http://blog.stephenwolfram.com/2013/03/talking-about-the-computational-future-at-sxsw-2013/","timestamp":"2014-04-18T00:59:15Z","content_type":null,"content_length":"95849","record_id":"<urn:uuid:5735cb42-8091-4af3-ae5d-4ab8f3830f32>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Rowlett Algebra 2 Tutor
Find a Rowlett Algebra 2 Tutor
...I had to use math to a great degree in my professional career and tutored students while I was in college and on active duty as they pursued advancement and self-improvement. I love math and
want to be able to teach students to not be afraid of it, but to consider it a language of numbers that c...
11 Subjects: including algebra 2, geometry, algebra 1, precalculus
...I teach in plain English with lots of common examples to make science come alive. My passion includes physics, chemistry, biology, astronomy, and weather. My masters degree is in Electrical and
Computer Engineering.
48 Subjects: including algebra 2, chemistry, physics, calculus
...With the ability to do that, you can solve similar problems on your quizzes, lab reports, tests, exams (SAT, SAT 2). If you have problems in Gen. Chemistry, Gen. Physics, Algebra, Calculus I
and II, I could be a resource for you.
24 Subjects: including algebra 2, English, reading, calculus
...I look forward to meeting with you and your student to see how we can advance their mathematics skills. My first visit will be free; this will be a chance for us to meet and determine what your
student's needs are and how I can best help them. Thanks for your time.On top of my BS in Mathematics...
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...I have more than 15 years of experience with Excel. I have developed charts and pivot tables for many uses. I have created and used spreadsheets to figure depreciation, job and product costs,
and numerous other things.
8 Subjects: including algebra 2, physics, algebra 1, tax preparation | {"url":"http://www.purplemath.com/rowlett_tx_algebra_2_tutors.php","timestamp":"2014-04-18T23:34:10Z","content_type":null,"content_length":"23612","record_id":"<urn:uuid:30e40e20-502c-4156-8e79-81902230b1e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the load, then size the bolts
In a previous column, David Dearth, a consulting analyst and president of Applied Analysis & Technology, Huntington Beach, Calif. (AppliedAT@aol.com), discussed four ways to handle fasteners and preloads with FEA in assemblies. This article continues by solving a bolt problem by hand and using an FEA program.
Dearth suggests this problem as an example of using FEA to determine reactions at bolts. "I recommend working through a few sample or warm-up problems with textbook solutions before tackling the real
one. This problem simulates a 300-lb load on a bracket and tube. The task is to estimate reactions at the four mounting locations so bolts can be sized for them. This problem contains features of
real-life engineering challenges and can be solved with pencil and paper using conventional static analysis summing the forces of a free body diagram. A second task estimates reactions at the bolt
locations using FEA. Then compare results," he says.
Hand calculations rely on equations found in most engineering textbooks with sections on finding reactions in assemblies. "To check your work, the detailed static calculations can be downloaded from
machinedesign.com or by requesting a copy from me by e-mail," says Dearth.
Formulating a rigid model, the second task, generates results using an FEA model and compares them to the hand calculations. The rigid model shows the geometry of the loaded tube. "The stick-figure
model uses rigid elements," says Dearth. "Solve for the reactions to four decimal places using simple equations that sum static forces and a spreadsheet to minimize roundoff in the arithmetic," he
adds. The calculation table summarizes and compares results from manual calculations and FEA outputs for the rigid body. The FEA model was processed using MSC/Nastran.
Comparing a Static Calculation and Rigid-Body FEA Results
Bolt   Solution           Bolt axial   Bolt shear   Bolt shear   Resultant shear
                          (X total)    (Y total)    (Z total)    (Net shear)
A      By hand            -367.617       67.500      -60.000        90.312
       Rigid FEA model    -367.617       67.500      -60.000        90.312
       % difference            0.0          0.0          0.0           0.0
B      By hand             151.999        7.500      -60.000        60.467
       Rigid FEA model     151.999        7.500      -60.000        60.467
       % difference            0.0          0.0          0.0           0.0
C      By hand            -281.902       67.500       60.000        90.312
       Rigid FEA model    -281.903       67.500       60.000        90.312
       % difference            0.0          0.0          0.0           0.0
D      By hand             237.713        7.500       60.000        60.467
       Rigid FEA model     237.713        7.500       60.000        60.467
       % difference            0.0          0.0          0.0           0.0
Sum    Net forces         -259.808      150.000        0.000           N/A
Estimates of reactions at the bolt locations using conventional equations and the FEA mathematical idealization agree well with each other. Summation of external forces from the applied loading should be
Sum_x = -300 cos(30°) = -259.808 lb and Sum_y = 300 sin(30°) = 150.00 lb. Resultant shear = (Y^2 + Z^2)^(1/2), combining the two shear components (the axial X reaction does not enter). All loads are in lb.
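The resultant-shear column and the net external forces can be checked with a few lines of code (a sketch using only the table's own numbers):

```python
import math

# Applied load from the article: 300 lb at 30 degrees
P = 300.0
sum_x = -P * math.cos(math.radians(30.0))  # expected -259.808 lb
sum_y =  P * math.sin(math.radians(30.0))  # expected  150.000 lb

# Hand-calculated shear components (Y total, Z total) per bolt, in lb
shear = {"A": (67.5, -60.0), "B": (7.5, -60.0),
         "C": (67.5,  60.0), "D": (7.5,  60.0)}

for bolt, (sy, sz) in shear.items():
    # Resultant shear = sqrt(Y^2 + Z^2); the axial X reaction is not included
    print(f"Bolt {bolt}: net shear = {math.hypot(sy, sz):.3f} lb")

print(f"Sum of external forces: x = {sum_x:.3f} lb, y = {sum_y:.3f} lb")
# Bolts A and C give 90.312 lb; B and D give 60.467 lb, matching the table.
```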
Readers can refer to a document titled Bolt Reactions HandCalcs.pdf. There are four additional files, among them detailed hand calculations with summary spreadsheet arithmetic (
Part1_RunNotes_RigidModel_BoltReactions.pdf and Part 2_RunNotes_FlexModel_BoltReactions.pdf). The files are also available from Dearth at AppliedAT@aol.com. Other files include run notes and
keystroke summaries for models. The FEA models, "RigidMdl_BoltReactions_v2004.mod" and "FlexMdl_BoltReactions_v2004.mod", are small enough to process using limited-node or demo versions of MSC/
Nastran v2004. However, the files will also work in any version of Nastran. To obtain a free copy of this demo software, log on to: (mscsoftware.com/offers/master/contact.cfm) or telephone MSC
Software at (866) 672-1549.
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Common Core Math
Replies: 16 Last Post: Jan 22, 2013 7:27 AM
Kirk Weiler — Re: Common Core Math
Posted: Jan 22, 2013 7:27 AM
From: Arlington High School
Registered: 10/29/07
Posts: 12

I am working under the assumption that the transition will be similar to the transition from Math A/B to the "new" Integrated Algebra 1, Geometry, and Algebra 2 with Trig, i.e. that it will be cohort based and will NOT allow schools to wait an additional year, although that would be nice.

The attachment to this post is a slide show from the NYSED December meeting. On page 18 is the all too familiar block diagram of the CCSS transition timeline for New York State. In Footnote #3, it states, in no uncertain terms, that the transition will be "Cohort Based." I take that to mean that any student entering high school from September 2013 onward will have to take the Common Core based math exams and not the current tests. Those who entered prior to September 2013 will be eligible for the current exams. The only outstanding issue in my mind is the cohort that entered in September of 2012. I would assume that NYSED will mandate that this cohort must still take the new CCSS aligned Geometry Regents exam, but will have the option of the older Integrated Algebra exam or the newer CCSS aligned exam, should they fail the older exam.

I have heard no suggestion that students will have to take both the new exams and the current ones next year (other than on this listserv). NYSED does some very strange things, but this would make no sense and have no precedent in the last decade of transitions we've had to deal with. | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2429716&messageID=8122200","timestamp":"2014-04-20T21:45:11Z","content_type":null,"content_length":"37228","record_id":"<urn:uuid:4834e1e0-0972-44fd-9274-87035416e915>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Hand In The Problems Below .the Work You Submit ... | Chegg.com
Ece 201
Question number 1 thanks
Image text transcribed for accessibility: Hand in the problems below. The work you submit must be your own and on paper (stapled). Electronic submissions are not accepted. Show all work used to arrive at an answer. Box the answers. Text refers to the 7th edition of Thomas and Rosa. A PI controller (proportional-integral controller) "is a feedback controller which drives the plant to be controlled with a weighted sum of the error (difference between the output and desired set-point) and the integral of that value" (from ) such that v0(t) = vs(t) + K ∫_0^t vs(x) dx, where K is a constant and vs is the error signal. Draw this circuit on your solutions and show the applications of KVL, KCL and element constraints that are used to show that the PI output above is achieved by the circuit. Also, determine the value of K for this circuit. Assume that both v0 and vs are equal to 0 at t = 0. In this circuit, R1 = 200 Ohm, R2 = 300 Ohm, and R3 = 50 Ohm; also, Vs1 = 5 V and Vs2 = 10 V. The switch is in the upper position. The capacitor value is 1 µF. Which, if any, of the following variables in the above circuit are state variables? [iC vC iR1 vR2]?
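The PI law in the problem statement, v0(t) = vs(t) + K ∫_0^t vs(x) dx, is easy to sanity-check numerically before working through the circuit algebra (a hypothetical sketch, not part of the assigned solution; the step size and gain below are arbitrary choices):

```python
def pi_output(vs_samples, K, h):
    """Discretize v0(t) = vs(t) + K * integral_0^t vs(x) dx,
    with v0 = vs = 0 at t = 0, using a rectangle-rule integral."""
    integral = 0.0
    v0 = []
    for vs in vs_samples:
        integral += vs * h            # running integral of the error
        v0.append(vs + K * integral)  # proportional term + integral term
    return v0

# A constant 1 V error: the proportional part stays at 1 V while the
# integral part ramps, so v0 grows linearly with time.
out = pi_output([1.0] * 5, K=2.0, h=0.1)
print([round(v, 6) for v in out])  # [1.2, 1.4, 1.6, 1.8, 2.0]
```

This mirrors what the op-amp circuit must realize: an output equal to the error plus K times its accumulated integral.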
Electrical Engineering
Answers (1) | {"url":"http://www.chegg.com/homework-help/questions-and-answers/hand-problems--work-submit-must-paper-stapled-electronic-submission-accepted-show-work-use-q3907283","timestamp":"2014-04-16T23:30:02Z","content_type":null,"content_length":"20845","record_id":"<urn:uuid:9f4a4923-e055-4971-b6a5-3b31448593ee>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] on harvey friedman's message, "Re on harvey friedman's 'number theorists'" of April 7.
Gabriel Stolzenberg gstolzen at math.bu.edu
Sun Apr 9 18:04:38 EDT 2006
Harvey begins by quoting from my message "on harvey friedman's
'number theorists'" of April 4th.
> > I'm surprised that you don't tell us how this interest is manifested
> > mathematically. Isn't that important? I'd like to see some of the
> > work that he did on questions of this kind.
> He is one of the three people I contacted, and I haven't yet heard
> from any of the three. I have to admit that I am becoming pessimistic
> about hearing from them.
GS: Don't be. I don't think not hearing back for a while should count
for anything. There are things we know in our bones re mathematics but
find it difficult to articulate. This might well be one of them.
> My impression is that most of the leading senior number theorists
> have published bounds either improving previous bounds or establishing
> a bound where no bound previously existed.
GS: I would be astonished if this were not true. But, for me, the
questions are (1) what kind of bounds (e.g., improving a 3 to a 2),
(2) how much, if any, of it was done for its "intrinsic interest"
and (3) what other reasons were given?
I was just reviewing what used to be the most famous case of this,
a sign change for pi - li. The classical existence theorem was proved
by Littlewood in 1914. Then, in 1933, Skewes (who apparently was a
graduate student) worked out a humongous bound on the assumption that
the Riemann hypothesis is true. Finally, in 1955, he got a second
bound on the assumption that it is false and took the max of the two.
GS: So far as I can tell, the only reason that number theorists were
interested in Skewes' number was that the nature of Littlewood's
argument (by cases, depending on whether RH is true or false) made it
seem "intrinsically nonconstructive." Finally, in 1966, by a different
method, Sherman Lehman got a much better bound. But he was playing a
very different game.
GS: Why was Littlewood's theorem interesting to people? As Ingham
explains, according to the best table of values then available, pi(n)
was always < li(n), even though the ratio goes to 1 pretty quickly.
But, so the thinking went, if this striking relationship were to fail
for larger n, then, by analogy, the fact that the Riemann hypothesis
had also been confirmed up to a high value would no longer carry the
same evidentiary weight that it does.
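The tabulated pattern Ingham describes — pi(n) < li(n) throughout the range old tables covered — is easy to reproduce. The sketch below uses my own illustrative helpers (a standard sieve for the prime-counting function and a trapezoid-rule approximation of the offset logarithmic integral Li(x) = ∫₂^x dt/ln t); it is not code from any of the sources discussed.

```python
import math

def prime_count(n):
    """pi(n): count of primes <= n, via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

def Li(x, steps=100_000):
    """Offset logarithmic integral: trapezoid rule for integral_2^x dt/ln t."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    for i in range(1, steps):
        total += 1 / math.log(2 + i * h)
    return total * h

for n in (100, 1000, 10_000):
    print(n, prime_count(n), round(Li(n), 1))
```

For every n a hand-computed table could reach, Li(n) comfortably exceeds pi(n) — which is what made Littlewood's 1914 theorem, that the difference changes sign infinitely often, so striking.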
> The mere existence of an effective bound for a Pi03 is of
> intrinsic interest, and the interest to number theorists increases
> as the bounds get lower and lower.
GS: We still seem to disagree about this. However, I'm still not
clear about what you mean by "intrinsic interest." (It would help if
you quantified it.) As I've said before, if it includes fascination
and we're talking about the absence of such a bound rather than the
construction of one, then, in some cases, sure.
GS: As for an apparent interest in the construction of a bound, I
think Skewes' number is hard to beat. It's portrayed in the popular
literature as fascinating and interesting, a numerical equivalent of
a rock star. But why? Is it intrinsic?
With best regards,
More information about the FOM mailing list
Symmetric Difference
September 3rd 2009, 02:09 AM #1
Aug 2009
Symmetric Difference
The symmetric difference of A and B is
(a) $(A-B) \cap (B-A)$
(b) $(A-B)\cup (B-A)$
(c) $(A \cup B) - (A \cap B)$
(d) $\{(A \cup B)-A\} \cup \{(A \cup B)-B\}$
Its answer given in the book is (b). But I found that (c) and (d) are also true. Am I right? If not please tell why. Thanks a lot for giving me your time.
Symmetric Difference
Hello ninni
The symmetric difference of A and B is
(a) $(A-B) \cap (B-A)$
(b) $(A-B)\cup (B-A)$
(c) $(A \cup B) - (A \cap B)$
(d) $\{(A \cup B)-A\} \cup \{(A \cup B)-B\}$
Its answer given in the book is (b). But I found that (c) and (d) are also true. Am I right? If not please tell why. Thanks a lot for giving me your time.
You are right. (a) is the only expression not equal to the symmetric difference of the sets A and B.
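A quick sanity check with Python's built-in sets bears this out. The particular A and B below are chosen arbitrarily for illustration:

```python
A, B = {1, 2, 3}, {3, 4}

a = (A - B) & (B - A)                    # option (a)
b = (A - B) | (B - A)                    # option (b)
c = (A | B) - (A & B)                    # option (c)
d = ((A | B) - A) | ((A | B) - B)        # option (d)

# (b), (c), (d) all give the symmetric difference; (a) is always empty,
# since A - B and B - A are disjoint.
print(a, b, c, d, A ^ B)
```

Note that (a) is not merely unequal to the symmetric difference for this example: (A − B) and (B − A) are disjoint for every A and B, so their intersection is always the empty set.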
The symmetric difference of A and B is
(a) $(A-B) \cap (B-A)$
(b) $(A-B)\cup (B-A)$
(c) $(A \cup B) - (A \cap B)$
(d) $\{(A \cup B)-A\} \cup \{(A \cup B)-B\}$
Its answer given in the book is (b). But I found that (c) and (d) are also true. Am I right? If not please tell why. Thanks a lot for giving me your time.
Yes, you are right about (c). [I didn't try to prove the others.]
One of the proofs of (c):
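The post ends before the proof itself appears; a standard element-chasing argument (my reconstruction, not necessarily the poster's) runs as follows:

$x \in (A \cup B) - (A \cap B)$
$\iff (x \in A$ or $x \in B)$ and not $(x \in A$ and $x \in B)$
$\iff (x \in A$ and $x \notin B)$ or $(x \in B$ and $x \notin A)$
$\iff x \in (A - B) \cup (B - A)$,

where the middle step distributes the conjunction over the disjunction and discards the two contradictory combinations.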