Subdivision Schemes for Variational Problems
Tuesday, November 4
8:30 AM-9:15 AM
Chair: Tony DeRose, Pixar Animation Studios
Room: Belle Meade
The original theory of splines grew out of the study of simple variational problems. A spline was a smooth function that minimized some notion of energy subject to a set of interpolation constraints.
A more recent method for creating splines is subdivision. In this framework, a spline is the limit of a sequence of functions, each related by a simple averaging rule.
The speaker will show that the two ideas are intrinsically related. Specifically, the solution space to a wide range of variational problems can be captured as a spline space defined through subdivision.
Joe Warren
Department of Computer Science
Rice University
PLEASE HELP!!!!!! URGENT!!!! When there's a table of information, for two different variables (like f(x) and x), how do I find the constant ratio to determine if it's an exponential function? Thank
you SO much! - I really appreciate the help! =)
Do you have a table to use as an example?
If not, I can make one up if you want.
Sure, let me give an example . . . .
Note: I made this up, so it most definitely won't have a constant ratio.
If this was an exponential equation, then it would be of the form y = a*b^x where a and b are some constant numbers
When x = 1, y is y = 1. So
y = a*b^x
1 = a*b^1
1 = a*b
Now solve for b to get b = 1/a
When x = 6, y is y = 5. So
y = a*b^x
5 = a*b^6
Now plug in b = 1/a to get
5 = a*(1/a)^6
5 = a/(a^6)
5 = 1/a^5
5a^5 = 1
a^5 = 1/5
a = fifth root of 1/5
a = 0.72477
Since b = 1/a, we know that
b = 1/0.72477
b = 1.3797
Therefore, if this was an exponential equation (it's probably not, but let's say it is), then the equation would be approximately y = 0.72477*1.3797^x
Okay, I'm following.
Now let's test another ordered pair. Let's test (8,4). Plug in x = 8 and y = 4 to get
y = 0.72477*1.3797^x
4 = 0.72477*1.3797^8
4 = 0.72477*13.1304
4 = 9.5165 and we're way off
So because of this, the table above is NOT modelling an exponential function
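The "constant ratio" test the original question asks about comes down to this: for equally spaced x-values the data fits y = a*b^x exactly when each y is the previous y times the same factor b, and for unequal spacing you can compare (y2/y1)^(1/(x2-x1)) instead. A rough sketch in Python, using only the three (x, y) pairs quoted above (the full made-up table was not preserved in the thread):

points = [(1, 1), (6, 5), (8, 4)]   # just the pairs quoted above

def growth_rates(points):
    # For y = a*b**x, (y2/y1)**(1/(x2-x1)) equals b for every pair of points,
    # so an exponential table gives the same number for each consecutive pair.
    pts = sorted(points)
    return [(y2 / y1) ** (1.0 / (x2 - x1))
            for (x1, y1), (x2, y2) in zip(pts, pts[1:])]

rates = growth_rates(points)
print(rates)                            # [1.3797..., 0.8944...] -- not constant
print(max(rates) - min(rates) < 1e-6)   # False: not an exponential table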
Okay, I think I understand. Thank you very much - you've been so helpful! :)
you're welcome, I'm glad I could help
Physics and the movie UP – floating a house
I haven’t seen the Pixar Movie “Up” yet, so don’t spoil it for me. I have, however, seen the trailer. In my usual fashion, I have to find something to complain about. There is this scene where the
old man releases balloons out of the house.
What is wrong with this scene? Also, would that be enough balloons to make the house float? Here is a shot of the balloons coming out of the house.
Ok, I was already wrong. The first time I saw this trailer I thought the balloons were stored in his house. After re-watching in slow motion, it seems the balloons were maybe in the back yard held
down by some large tarps. This is better than what I originally thought. Oh well, let me answer the question even though it is wrong. What if he had the balloons in his house and then released them?
Would that make the house float more? Here is a diagram:
So, would one of these float more than the other? What makes things float? I have talked about this in more detail in the Mythbusters and the lead balloon, so I will just say that there is a buoyancy
force when objects displace air or a fluid. This buoyancy force can be calculated with Archimedes’ principle which states: The buoyancy force is equal to the weight of the fluid displaced. The
easiest way to make sense of this is to think of some water floating in water. Of course water floats in water. For floating water, its weight has to be equal to its buoyant force. Now replace the floating water with a brick or something. The water outside the brick will have the exact same interactions that it did with the floating water. So the brick will have a buoyancy force equal to the weight of the water displaced. For a normal brick, this will not be enough to make it float, but there will still be a buoyant force on it. Mathematically, the buoyant force can be written as F_buoyant = rho * V * g, the density of the displaced fluid times the volume displaced times the gravitational field g.
Ok, back to the UP house. What is being displaced? What is the mass of the object? It is really not as clear in this case. What is clear is that the thing providing the buoyancy is the air. So,
the buoyancy force is equal to the weight of the air displaced. What is displacing air? In this case, it is mostly the house, all the stuff in the house, the balloons and the helium in the balloons.
In the two cases above, the volume of the air displaced does not change. This is because the balloons are in the air in the house. (Remember, I already said that I see that this is NOT how it was shown
in the movie). So, if you (somehow) had enough balloons to make your house fly and you put them IN your house, your house would float before you let them outside.
How many balloons would you need to make the house float?
I realized while writing this that I am once again too slow. Others have already calculated this. First is The Science and Entertainment Exchange. The other excellent coverage of this is from Wired.com. I think with both of these I will just describe how you could do this and leave it as a homework problem (a rough numerical sketch follows the list below). You would need to estimate:
• The size of the house.
• The mass of the house (I would assume the whole house is like 10% wood and use the volume of the house and the density of wood – my first guess)
• The volume of the air displaced. Again use the density of wood above.
• The size of each balloon. A typical household balloon probably has a diameter of about 30-40 cm. You would also need to know the mass of the rubber in each balloon. This shouldn’t be too hard as you could get this from a deflated balloon (first guess 5 grams).
• The above could give you an estimated calculation for the buoyant force from each balloon plus its weight, or the net force from each balloon. You could just estimate this also.
• You would also need to estimate the amount of string needed, it would have a non-negligible weight.
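Here is a rough numerical sketch of that estimate in Python. Every number in it is a guess of the kind listed above (house size, wood fraction, balloon size), not a value taken from the movie or from the articles linked above:

import math

rho_air = 1.2          # kg/m^3
rho_helium = 0.18      # kg/m^3
rho_wood = 500.0       # kg/m^3
g = 9.8                # N/kg

house_volume = 10 * 10 * 8                      # m^3, guessed footprint and height
house_mass = 0.10 * house_volume * rho_wood     # the "house is ~10% wood" guess

balloon_diameter = 0.35                         # m, a typical party balloon
balloon_volume = (4 / 3) * math.pi * (balloon_diameter / 2) ** 3
balloon_rubber_mass = 0.005                     # kg, the 5 gram guess above

# Net upward force per balloon: buoyancy minus the weight of the helium and rubber.
lift_per_balloon = ((rho_air - rho_helium) * balloon_volume * g
                    - balloon_rubber_mass * g)

n_balloons = house_mass * g / lift_per_balloon
print(round(n_balloons))    # about 2.2 million balloons with these guesses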
Since I got “scooped” on my original investigation, I will give two bonus topics
Why doesn’t the balloon house keep rising?
The reason the balloon reaches a certain height is that the buoyant force is not constant with altitude. As the balloon rises, the density of the air decreases. This has the effect of a lower buoyant
force. At some point, the buoyant force and the weight are equal and the balloon no longer changes in altitude.
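A quick sketch of that equilibrium condition, assuming an isothermal atmosphere where the density falls off like exp(-h/8500 m) and using made-up numbers for the total mass and displaced volume (neither value comes from the movie or from this post):

import math

rho0 = 1.2            # sea-level air density, kg/m^3
H = 8500.0            # rough atmospheric scale height, m

total_mass = 50000.0      # kg: house + balloons + helium, a made-up number
total_volume = 60000.0    # m^3 of air displaced by the balloons, also made up

# The house stops rising where buoyancy equals weight: rho0*exp(-h/H)*V*g = m*g.
h_float = H * math.log(rho0 * total_volume / total_mass)
print(h_float)        # roughly 3100 m for these numbers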
When the boy throws the GPS out the window, is it modeled correctly?
I noticed this when I was watching the preview again. I realized that it was set up perfectly for video analysis of motion. So, here I go. Here is a screen shot of the scene in question:
To analyze this video, I used my favorite Tracker Video (it’s free and runs in Windows, Mac OS X and Linux). To set the scale I said the height of the house was 10. 10 what? I don’t know, but 10. It
really doesn’t matter. I could estimate the scale by estimating the size of the house or by assuming the vertical acceleration is 9.8 m/s^2. In this case, you will see that is not necessary. Here is
a plot of the horizontal position as a function of time.
This is a shot from Video Tracker’s built in analysis tools. They are really good, I should have been using these the whole time (I used to export the data to Vernier’s Logger Pro). I fit a function
to this data, randomly choosing a parabolic fit. Is this ok? No. The horizontal position should be a straight line indicating constant velocity in the x-direction. Why would it behave this way? I
don’t know. Air resistance would not be enough to make it behave this way unless it was really light. If you want to model this for a homework problem and estimate the mass the GPS would have to have
to have a motion like this, let me know.
Now, here is the vertical position.
Again, I fit a quadratic function to the data. If the object is in free fall, the only significant force would be the gravitational force. This would give the object a constant vertical acceleration such that it would have a position as a function of time of the form y(t) = y0 + v0*t + (1/2)*a*t^2.
Since the data seems to fit ok, I can assume an acceleration of g and use that to find the size of the house. If you need a refresher course on finding the acceleration from a position plot, check
this out. Anyway, the function I fit to the vertical data says that the acceleration would be 2*(-1.662 U/s^2) = -3.32 U/s^2, where U is the distance unit in the video. I will assume the time step between frames is correct. Setting that equal to -9.8 m/s^2 gives a scale of roughly (9.8 m/s^2)/(3.32 U/s^2), or about 2.95 meters per U.
So, going back to the scaling, this would make the height of the house 29.5 meters or 96.79 feet. I don’t think so.
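The scaling trick is easy to reproduce: fit a parabola to the vertical position measured in the arbitrary unit U, read off the acceleration as twice the quadratic coefficient, and divide 9.8 m/s^2 by it to get meters per U. A sketch with NumPy, where the t and y arrays are placeholders generated to match the fit quoted above rather than data pulled from the trailer:

import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])                 # s
y = np.array([0.0, -0.017, -0.066, -0.150, -0.266, -0.416])  # "U", fake data

c2, c1, c0 = np.polyfit(t, y, 2)      # y ~ c2*t^2 + c1*t + c0
a_video = 2 * c2                      # acceleration in U/s^2, about -3.32 here

metres_per_U = 9.8 / abs(a_video)     # assumes the GPS really is in free fall
house_height = 10 * metres_per_U      # the house was set to 10 U in Tracker
print(a_video, metres_per_U, house_height)   # ~ -3.32, ~2.95 m, ~29.5 m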
I can see it now. A high school class is finally learning kinematics and getting excited. The teacher says “hey that movie Up was awesome, lets do some video analysis of that GPS out the window.” You
can imagine what happens next. We will have a generation of kids growing up not understanding kinematics.
If you can model the hairs on the head of a man in an animation, don’t you think you could use Newtonian mechanics to plot the position of the GPS? I don’t know, maybe it would have fallen too fast
or something. Oh well.
1. #1 Gina June 3, 2009
Slightly more details from the movie:
The old man goes back inside the house just before it takes off.
There are many empty helium tanks on the front lawn, so it’s not clear where he filled them.
The shot returns to the house which is breaking away from its various moorings (a whole other problem).
Later, you see that the balloons are tethered to the fireplace grate through the chimney.
Finally, regarding cost and experience, the old man has spent his life in the noble career of selling helium balloons to tourists, so maybe he had wholesaler’s pricing.
2. #2 Guy Srinivasan June 3, 2009
If you can model the hairs on the head of a man in an animation, don’t you think you could use Newtonian mechanics to plot the position of the GPS?
You could, of course, but the goal is not in fact realistic physics in either the GPS case or the hair case. The goal is tricking the audience’s brains into a) loving the movie during* and after
the movie, and b) having memories after the movie of the movie looking realistic.
3. #3 Steven Peters June 3, 2009
I haven’t seen the movie yet either, but I wouldn’t be too upset with them for muddying the physics a bit, since it is a cartoon after all. Road runner can run across gaps that make the coyote
fall, and run into tunnels painted on a wall while the coyote hits the wall. I don’t think we need to hold cartoons that responsible.
4. #4 sylvia martinez June 3, 2009
Sometimes real physics is boring looking. If you’ve ever seen an actor fall down vs. a real person falling down you get the point. Willy E. Coyote falling into the canyon is funny because of the
completely wrong physics. The delay creates anticipation and delivers the punch line.
When they make games or movies they model things because it makes it easier to get close to realism, but then the designers back off and undo the reality to add impact, suspense or other drama.
They wouldn’t bother modeling a single thing falling, it’s cheaper to do it by hand. But for hair, it’s easier and cheaper to use mathematically accurate models. But I’m sure in some scenes they
modified the final output by hand if the hair looked “funny”.
5. #5 Rhett June 3, 2009
Speaking of road runner reminded me of this awesome site – http://www.lghs.net/teachers/science/burns/rrphysics/Road_Runner_Physics/Movie.html
6. #6 Chad June 3, 2009
Movie was great. During a few particular scenes I questioned why the characters were doing something with the house, but I won’t bring it up yet for the sake of not posting spoilers yet.
7. #7 Anonymous Coward June 4, 2009
I can see it now. A high school class is finally learning kinematics and getting excited. The teacher says “hey that movie Up was awesome, lets do some video analysis of that GPS out the
window.” You can imagine what happens next. We will have a generation of kids growing up not understanding kinematics.
With regard to the vertical motion: only with a bad teacher. Perhaps the movie (which I haven’t seen) is taking place on a different planet, where the gravitational constant at the planet’s
surface is smaller than on earth? The student could use estimates of the house size to figure out g on that planet. Perhaps the lower value of g on that planet might also explain how the house
could be floated with such a paltry volume of balloons.
It would be a great way of distinguishing things that are fundamental and universal (Newton’s laws) from things that are specific to the surface of the earth (little g, the buoyancy of air).
With regard to the horizontal position: this is a bit trickier to reconcile. Maybe the kid throws a mean curve ball? Or the air is quite dense on this planet, explaining both the rapid damping of the GPS motion (you’d need to refit the vertical motion including damping to see if this is a consistent explanation) and the high buoyancy of the balloons.
8. #8 Jon H December 29, 2009
“The teacher says “hey that movie Up was awesome, lets do some video analysis of that GPS out the window.” “
Why, is Up the first movie ever made where something is thrown? There are probably millions of hours of easily accessible video footage of projectiles more interesting than a thrown cartoon GPS.
It’s not a simulation, it’s art. If it’s not ‘correct’, it’s because they modified the path for aesthetic reasons.
Cartoons also exaggerate the elastic deformation of objects when they collide. Because it’s funny. Big deal.
9. #9 Lauren April 15, 2010
Actually, Pixar did have engineers figure the math for just how many balloons it would take to hold up the house in the film. The number was so astronomical, it would have looked ridiculous and
been much more difficult to animate. So, they consciously scrapped the notion of accuracy regarding the hot air balloon and went with a number that looks better on film.
10. #10 hobo bob August 11, 2010
more importantly, ITS A FUCKING MOVIE!
11. #11 C.Lee October 10, 2010
I think maybe you should consider a position with pixar for doing these types of “calculations” for their films!
In case you didn’t know, they are located in Emeryville, CA…quite close to Berkeley and San Francisco…
12. #12 Becky Costello April 13, 2011
Dear Rhett,
I found you while googling the topic of “the real up movie house”, because I had read an article about someone who really attempted to preform this. I was fascinated at first by all the thought &
calculating you must have done for this piece. Then I remembered the “Analytical Man” I am married to…LoL!! I am a very Artsy Fartsy person & studied 16th Century Glass in College so I live on
the other side of the mind. I know so many times he & I will be having the same experience & yet see it so very differently. All I can say is I pray that when you finally saw the movie, you saw
it with the heart of a 6 six year old. Saw the adventure, the lesson of the wisdom of an older generation, the lesson of learning to love again after such terrible loss, the lesson of good verses
evil & mostly the humor throughout the whole experience. Sometimes that is life, a house that could in no way be picked up & carried off yet it is….. As far as whether “mathematically is it
possible~~~ well in the famous words of someone else “Frankly Scarlett, I don’t give a…Oh never-mind that is your line…..Hugz!!
13. #13 Becky Costello April 13, 2011
Thought you might enjoy this…LoL!!
Question on an exercise in Hartshorne: Equivalence of categories
This is a slight reformulation of exercise II.5.9.(c) in Hartshorne's "Algebraic Geometry" which I don't understand.
Let $K$ be a field and $S=K[X_0,\ldots,X_n]$ a graded ring. Set $X=Proj(S)$ and let $M$ be a graded $S$-module. The functors $\Gamma_*$ defined by $$\Gamma_*(\mathcal{F})=\bigoplus_{n\in\mathbb{Z}} (\mathcal{F}(n))(X)$$ and $~\widetilde{\phantom{\cdot}}~$ (the "graded associated sheaf functor", see Hartshorne II.5, page 116 for a definition) induce an equivalence of categories between the category $\mathcal{A}$ of quasi-finitely generated (i.e. in relation to a finitely generated module) graded $S$-modules modulo a certain equivalence relation $\approx$ and the category $\mathcal{B}$ of coherent $\mathcal{O}_X$-modules. The equivalence relation is: $M\approx N$ if there is an integer $d$ such that $\oplus_{k\geq d}M_k\cong\oplus_{k\geq d}N_k$.

I don't know what an "equivalence of categories" is in this context. Formally an "equivalence of categories" means in particular that there are isomorphisms $$\hom_\mathcal{A}(M,N)\cong \hom_\mathcal{B}(\widetilde{M},\widetilde{N})$$ and $$\hom_\mathcal{B}(Y,Z)\cong \hom_\mathcal{A}(\Gamma_*(Y),\Gamma_*(Z))$$ of sets. This is my problem: How is the sheaf $\mathcal{H}om_\mathcal{B}(Y,Z)$ considered as a set? Perhaps it should be $\Gamma_*(\mathcal{H}om_\mathcal{B}(Y,Z))\cong \hom_\mathcal{A}(\Gamma_*(Y),\Gamma_*(Z))$?
ag.algebraic-geometry ac.commutative-algebra
An equivalence of categories $\mathcal{C}$ and $\mathcal{D}$ consists of a pair of functors, $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{D}\rightarrow\mathcal{C}$, such that $F\circ G$
is naturally isomorphic to the identity on $\mathcal{D}$ and $G\circ F$ is naturally isomorphic to the identity on $\mathcal{C}$. – Keenan Kidwell Mar 23 '10 at 12:34
Read Serre's FAC... – Shizhuo Zhang Mar 23 '10 at 12:45
2 In the category of coherent $\mathcal{O}_X$ modules, the morphisms from $E$ to $F$ are the global sections of the sheaf $\mathcal{H}om(E,F)$. This object is sometimes denoted $\mathrm{Hom}(E,F)$,
to distinguish it from $\mathcal{H}om(E,F)$. – David Speyer Mar 23 '10 at 13:22
5 (and to go along with Shizhuo Zhang's comment) ...and it is probably in Euler. – Mariano Suárez-Alvarez♦ Mar 23 '10 at 15:10
1 Answer
The homomorphisms in the category of sheaves are not sheaves themselves. The hom sheaves have the data of things that are only homomorphisms over open subsets. So if $Y,Z$ are coherent $\mathcal{O}_X$-modules and you are looking for $\mathcal{O}_X$-module homomorphisms, you don't actually get $\mathcal{H}om(Y,Z)$; what you actually get are the global sections only, because these are the only homomorphisms that are defined on the whole space, and so the only actual homomorphisms in the category of sheaves.
Thank you, Charles. Are you speaking of the "usual" global sections or does one in fact get an isomorphism of graded modules $\Gamma_*(\mathcal{H}om_\mathcal{B}(Y,Z))\cong\hom_\mathcal{A}(\Gamma_*(Y),\Gamma_*(Z))$? – roger123 Mar 23 '10 at 13:35
1 It's the usual global sections. Remember that a homomorphism of graded modules must preserve degree! – David Speyer Mar 23 '10 at 14:48
Ok. Thank you all very much. – roger123 Mar 23 '10 at 16:17
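In symbols, the point of the answer and the comments is just the following (a restatement, not an addition to the thread): the hom-set in $\mathcal{B}$ is the set of global sections of the hom-sheaf,
$$\hom_{\mathcal B}(Y,Z)\;=\;\Gamma\bigl(X,\mathcal{H}om_{\mathcal{O}_X}(Y,Z)\bigr)\;=\;\Gamma_*\bigl(\mathcal{H}om_{\mathcal{O}_X}(Y,Z)\bigr)_0,$$
the degree-zero graded piece of $\Gamma_*$ applied to the hom-sheaf, which matches the fact that homomorphisms of graded modules preserve degree.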
How to obtain the sampling frequency of frequency data
Hi everybody, this is my first question in this forum.
I have some complex data in Matlab the absolute value of which represent a frequency response. My data extend from 1 Hz to 20 kHz and have an incremental frequency of 1 Hz.
I now need to convert these frequency responses into impulse responses in order to measure the duration of the impulse response. I am a bit confused, though, about what the sampling frequency of my data is.
I need this information I guess in order to calculate time.
Is it the highest frequency of my dataset (that is 20 kHz) or double this frequency (40 kHz)?
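For what it's worth, here is a sketch of the usual bookkeeping in Python/NumPy (rather than Matlab, and with a stand-in array instead of the real data): a one-sided spectrum that extends to f_max corresponds to a real time signal sampled at fs = 2*f_max, so 40 kHz here, and the 1 Hz spacing fixes the length of the reconstructed impulse response at 1/df = 1 s.

import numpy as np

H = np.ones(20000, dtype=complex)   # stand-in for the measured response at 1..20000 Hz

df = 1.0                            # frequency spacing, Hz
f_max = 20000.0                     # highest measured frequency, Hz

# Treat the data as a one-sided spectrum. 0 Hz was not measured, so some DC value
# has to be assumed; here the 1 Hz sample is simply reused.
H1 = np.concatenate(([H[0]], H))    # 20001 bins covering 0 .. 20 kHz

h = np.fft.irfft(H1)                # impulse response, 2*(20001 - 1) = 40000 samples
fs = 2 * f_max                      # implied sampling rate: 40 kHz
t = np.arange(h.size) / fs          # time axis, total length 1/df = 1 s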
pre-cal Math help, can you tell me if this question makes sense or not? please explain what they mean and how to solve this type of question
the question is: Layla bought a toy rocket. she calculated that the rocket's height as a function of time can be represented by h(t)=-16t2 + 4t where h is height in feet and t is time in seconds. if
the shuttle follows this, how long will it be in the air? they give no height, and do not include distance in the equation or weight of the object. how can this equation work? if the shuttle gets 0
feet in the air, won't it just be in the air for 0 seconds? yet they say the answer is .25 seconds!
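For what it's worth, the 0.25 s comes from setting the height back to zero; no distance or weight is needed for this model. Solving -16t^2 + 4t = 0 gives -4t(4t - 1) = 0, so t = 0 (launch) or t = 1/4 s (back on the ground). In between, the rocket only reaches h(1/8) = 0.25 feet, which is why it seems like it barely gets into the air.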
RE: st: RE: aweight option in kdensity
From "vora n" <vora_stata@hotmail.com>
To statalist@hsphsun2.harvard.edu
Subject RE: st: RE: aweight option in kdensity
Date Thu, 14 Sep 2006 19:33:15 -0700
Again, Thank you very much for you help :)
I have looked at your paper and have just
tried the kdens module (very powerful!).
I actually have another question that I would
like to clarify (please correct me).
If I do:
.kdensity income
Then the default kernal is Epanechnikov
and the default bandwidth is the "optimal"
bandwidth (silverman).
If I do:
.kdens income
Then the default kernal is Epan2
and the default bandwidth is silverman.
[In your paper, equation 28 and equation 29
are actually the same thing?]
So if I want kdens to provide the same graph
as the default kdensity then I should do:
.kdens income, k(e)
However, it seems like the graph from [1] and [3]
are not the same. Am I missing something?
Thank you very much.
Best regards,
From: "Ben Jann" <ben.jann@soz.gess.ethz.ch>
Reply-To: statalist@hsphsun2.harvard.edu
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: aweight option in kdensity
Date: Wed, 13 Sep 2006 10:06:55 +0200
The formula is
f(x) = {1/(h*W)} {sigma [wi * K((x - xi)/h)]}
where wi are the weights (inverse of sampling probability
if the weights are sampling weights) and W is the sum of
weights over all observarions and
. kdensity income [aw=MY_W]
will give you the correct estimate.
Also see:
- http://fmwww.bc.edu/RePEc/bocode/k/kdens.pdf.
- Buskirk and Lohr (2005). Asymptotic properties of kernel
density estimation with complex survey data. Journal
of Statistical Planning and Inference 128: 165 - 190.
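To make the formula concrete, here is a small sketch of the weighted estimator in Python/NumPy (with a Gaussian kernel for brevity; this is an illustration of the formula above, not what -kdensity- or -kdens- do internally):

import numpy as np

def weighted_kde(grid, x, w, h):
    # f(g) = 1/(h*W) * sum_i w_i * K((g - x_i)/h), with W = sum_i w_i
    W = w.sum()
    u = (grid[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (K * w[None, :]).sum(axis=1) / (h * W)

# income: observed values; MY_W: sampling weights (inverse selection probabilities).
# The call corresponding to ". kdensity income [aw=MY_W]" uses the raw weights:
# f_hat = weighted_kde(grid, income, MY_W, h)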
Vora N wrote:
> I have problem understanding the aweight option
> in kdensity command and the manual (and everywhere
> else I have read) is not helping. Please help?
> Also, please correct me if my understanding is wrong.
> Basically, if I have a survey data of people's income
> in a country. Let's say I have n observation in
> my data and each observation has its own weight
> (sampling weight -- I believe it's called probability
> weight in stata?). These weights will sum to the country's
> population. This weight variable is named MY_w.
> (sum of MY_w over all the n observations equals to the
> country's population)
> Now, I want to estimate the density of their income.
> --------------------
> if I do:
> >kdensity income
> Then, I don't take into account of those sampling
> weights. The formula is:
> [equation 1]
> f(x) = {1/nh} {sigma[(K(x - xi)/h)]}
> h--bandwidth
> n--sample size
> K--kernal function
> xi--each observation's income
> sigma--summation i = 1 to n
> So I assume that each observation is sampled at equal
> probability of 1/n -- which is wrong?
> --------------------
> I think the weight I have is pweight but kdensity
> doesn't allow pweight. It only allows fweight and
> aweight. It seems like fweight is out of the question.
> I should be doing this?
> [equation 2]
> f(x) = {1/h} {sigma [wi * K((x - xi)/h)]}
> where wi is the probability of that observation being
> sampled (so wi was 1/n in [equation 1])
> Now, I wonder if kdensity with aweight can give me
> the estimate of [equation 2]
> Should I be doing this?
> >kdensity income [aw=MY_W]
> or should I be doing this?
> >kdensity income [aw=1/MY_W]
> or I shouldn't use aweight and try to edit the command
> by my owen? Would the unweighted version be wrong?
Itasca, IL Prealgebra Tutor
Find a Itasca, IL Prealgebra Tutor
...I have been tutoring for WyzAnt for over 4 years now and very much enjoy it! I have both lived and taught (English as foreign language) in France, and enjoy helping students from all different
backgrounds. I graduated from Missouri State University with a 3.62 overall (4 point scale), so I feel that I am qualified to tutor in many areas.
16 Subjects: including prealgebra, English, chemistry, French
...I have always had a passion for teaching and find it extremely fulfilling to help others succeed. I graduated from the University of California, San Diego with a degree in Biochemistry in
2012. Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring.
25 Subjects: including prealgebra, chemistry, calculus, physics
...I break down math problems for the child, to make him/her understand in an easy way. I work with the child to develop his/her analytical skills. I give homework and test to assess his progress
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I have been tutoring honors chemistry, AP chemistry, Intro. and regular chemistry and college level general chemistry 1 and 2 for last several years and have acquired the proficiency of making
difficult aspects easy to understand for the students.There are two main aspects in learning chemistry -...
23 Subjects: including prealgebra, chemistry, algebra 1, algebra 2
...I enjoy working one on one with my students and look forward to tutoring students outside of school. I am patient and easy-going and bring a unique real-world perspective to math through my
engineering experience. I currently teach this subject at the high school level.
2 Subjects: including prealgebra, algebra 1
Solved Torque Problems
Here are some pre-worked out problems for you to examine before trying some out for yourself.
Example 1:
A force of 5.0 N is applied at the end of a lever that has a length of 2.0 meters. If the force is applied directly perpendicular to the lever, as shown in the above diagram, what is the magnitude of the torque acting on the lever?
This example is a simple matter of plugging the values into the equation:
Torque = F * l
Torque = 5.0 N * 2.0 m
Torque = 10 N*m
Example 2:
If the same force as in example 1 is applied at an angle of 30 degrees at the end of the 2.0 meter lever, what will be the magnitude of the torque?
First we must find the lever arm value using trig:
Sin30 = x/2.0
Sin30 (2.0) = x
1.0 m = x = l
Now we plug the value into the torque formula:
Torque = F * l
Torque = 5.0 N * 1.0 m
Torque = 5.0 N*m
Example 3:
What force is necessary to generate a 20.0 N*m torque at an angle of 50 degrees from along a 3.00 m rod?
Solve for the lever arm value:
Sin50 = x/3.00
Sin50 (3) = x
2.30 m = x = l
Now plug the values into the formula:
Torque = F * l
20.0 N*m = F * 2.30 m
20.0 N*m/2.30 m = F
8.70 N = F
Example 4:
Suzie applies a force of 40.0 N at an angle of 60 degrees up from the horizontal to a wooden rod using a spring scale. If she generates a torque of 73.0 N*m, how long was the rod?
First of all we create a trig equation for the value of the lever arm to use in our torque formula.
Let x = Rod Length
Let l = Lever Arm
Sin60 = l/x
Sin60 (x) = l
Now placing this into the formula, it should be easy to find the length of the rod.
Torque = F * l
73.0 N*m = 40.0 N * Sin60 (x)
x = 2.11 m
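All four examples use the same relation, torque = F * L * sin(theta) with theta measured between the force and the rod, so a few lines of Python can double-check the arithmetic (this script is an illustration, not part of the original page):

import math

def torque(force, length, angle_deg=90.0):
    # Torque from a force applied at the end of a rod; angle measured from the rod.
    return force * length * math.sin(math.radians(angle_deg))

print(torque(5.0, 2.0))             # Example 1: 10.0 N*m
print(torque(5.0, 2.0, 30.0))       # Example 2: 5.0 N*m
print(torque(8.70, 3.00, 50.0))     # Example 3: ~20.0 N*m, consistent with the solved force
print(73.0 / (40.0 * math.sin(math.radians(60.0))))   # Example 4: rod length ~2.11 m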
Now that you have seen some solved sample problems, you should head to the Torque Introduction Problem Page and try some on your own.
Physicists Take a New Approach to Unify Quantum Theory and Theory of Relativity
Physicists from the Max Planck Institute and the Perimeter Institute in Canada have developed a new approach to the unification of the general theory of relativity and quantum theory.
Present-day physics cannot describe what happened in the Big Bang. Quantum theory and the theory of relativity fail in this almost infinitely dense and hot primal state of the universe. Only an
all-encompassing theory of quantum gravity which unifies these two fundamental pillars of physics could provide an insight into how the universe began. Scientists from the Max Planck Institute for
Gravitational Physics (Albert Einstein Institute) in Golm/Potsdam and the Perimeter Institute in Canada have made an important discovery along this route. According to their theory, space consists of
tiny “building blocks”. Taking this as their starting point, the scientists arrive at one of the most fundamental equations of cosmology, the Friedmann equation, which describes the universe. This
shows that quantum mechanics and the theory of relativity really can be unified.
For almost a century, the two major theories of physics have coexisted but have been irreconcilable: while Einstein’s General Theory of Relativity describes gravity and thus the world at large,
quantum physics describes the world of atoms and elementary particles. Both theories work extremely well within their own boundaries; however, they break down, as currently formulated, in certain
extreme regions, at extremely tiny distances, the so-called Planck scale, for example. Space and time thus have no meaning in black holes or, most notably, during the Big Bang.
Daniele Oriti from the Albert Einstein Institute uses a fluid to illustrate this situation: “We can describe the behavior of flowing water with the long-known classical theory of hydrodynamics. But
if we advance to smaller and smaller scales and eventually come across individual atoms, it no longer applies. Then we need quantum physics.” Just as a liquid consists of atoms, Oriti imagines space
to be made up of tiny cells or “atoms of space”, and a new theory is required to describe them: quantum gravity.
Continuous space is broken down into elementary cells
In Einstein’s relativity theory, space is a continuum. Oriti now breaks down this space into tiny elementary cells and applies the principles of quantum physics to them, thus to space itself and to
the theory of relativity describing it. This is the unification idea.
A fundamental problem of all approaches to quantum gravity consists in bridging the huge dimensional scales from the space atoms to the dimensions of the universe. This is where Oriti, his colleague
Lorenzo Sindoni and Steffen Gielen, a former postdoc at the AEI who is now a researcher at the Perimeter Institute in Canada, have succeeded. Their approach is based on so-called group field theory.
This is closely related to loop quantum gravity, which the AEI has been developing for some time.
The task now consisted in describing how the space of the universe evolves from the elementary cells. Staying with the idea of fluids: How can the hydrodynamics for the flowing water be derived from
a theory for the atoms?
This extremely demanding mathematical task recently led to a surprising success. “Under special assumptions, space is created from these building blocks, and evolves like an expanding universe,”
explains Oriti. “For the first time, we were thus able to derive the Friedmann equation directly as part of our complete theory of the structure of space,” he adds. This fundamental equation, which
describes the expanding universe, was derived by the Russian mathematician Alexander Friedman in the 1920s on the basis of the General Theory of Relativity. The scientists have therefore succeeded in
bridging the gap from the microworld to the macroworld, and thus from quantum mechanics to the General Theory of Relativity: they show that space emerges as the condensate of these elementary cells
and evolves into a universe which resembles our own.
Quantum gravity could now answer questions regarding the Big Bang
Oriti and his colleagues thus see themselves at the start of a difficult but promising journey. Their current solution is valid only for a homogeneous universe – but our real world is much more
complex. It contains inhomogeneities, such as planets, stars and galaxies. The physicists are currently working on including them in their theory.
And they have planned something really big as their ultimate goal. On the one hand, they want to investigate whether it is possible to describe space even during the Big Bang. A few years ago, former
AEI researcher Martin Bojowald found clues, as part of a simplified version of loop quantum gravity, that time and space can possibly be traced back through the Big Bang. With their theory, Oriti and
his colleagues are hoping to confirm or improve this result.
If it continues to prove successful, the researchers could perhaps use it to explain also the assumed inflationary expansion of the universe shortly after the Big Bang as well, and the nature of the
mysterious dark energy. This energy field causes the universe to expand at an ever-increasing rate.
Oriti’s colleague Lorenzo Sindoni therefore adds: “We will only be able to really understand the evolution of the universe when we have a theory of quantum gravity.” The AEI researchers are in good
company here: Einstein and his successors, who have been searching for this for almost one hundred years.
3 comments:
1. Staying with the idea of fluids, the dense aether model compares the vacuum to condensing supercritical fluid, inside of which the foamy density fluctuations emerge. The energy is spread along
these foam membranes and strings, which then creates the space-time inhomogeneity for us.
Unfortunately for contemporary cosmologists this model reveals too, that the red shift and space-time expansions aren't result of motion of space-time and/or massive bodies in it and it favors
the tired light models of steady state Universe.
1. Cool insane rambling bro
2. Interesting. I would like to know how this is different from this approach.
Math Forum Discussions
Posted: Jan 22, 1994 11:07 AM
Newsgroups: geometry.pre-college
Subject: Games
Organization: Swarthmore College
I am drawn to use games as a way to entice students to practice
skills and explore concepts.
Are there some games you have found particularly useful?
I have created a domino type game for practicing facts, and names
for specific quadrilateral shapes. Each domino is divided into two parts:
one end consists of a quadrilateral shape, like isosceles trapezoid, and the
other end of the domino contains text. The text takes on two forms: NAMES
of specific quadrilateral shapes, or text that describes the properties of
the shape (e.g. exactly two sides parallel, two congruent sides).
I will use this game primarily to help my students refine the
definition of specific shapes, and their names. Hopefully, the recreational
nature of the activity will provide some motivation. Time will tell!
Can you think of other uses for this activity?
Can you suggest changes which would make this game more useful?
Graph Colouring, Part Four
Let's give it a try. Can we colour South America with only four colours? Let's start by stating what all the edges are in the graph of South America:
const int Brazil = 0;
const int FrenchGuiana = 1;
const int Suriname = 2;
const int Guyana = 3;
const int Venezuala = 4;
const int Colombia = 5;
const int Ecuador = 6;
const int Peru = 7;
const int Chile = 8;
const int Bolivia = 9;
const int Paraguay = 10;
const int Uruguay = 11;
const int Argentina = 12;
var SA = new Dictionary<int, int[]>()
{
{Brazil, new[] { FrenchGuiana, Suriname, Guyana, Venezuala, Colombia, Peru, Bolivia, Paraguay, Uruguay, Argentina}},
{FrenchGuiana, new[] { Brazil, Suriname }},
{Suriname, new[] {Brazil, FrenchGuiana, Guyana}},
{Guyana, new[] {Brazil, Suriname, Venezuala}},
{Venezuala, new[] {Brazil, Guyana, Colombia}},
{Colombia, new [] {Brazil, Venezuala, Peru, Ecuador}},
{Ecuador, new[] {Colombia, Peru}},
{Peru, new[] {Brazil, Colombia, Ecuador, Bolivia, Chile}},
{Chile, new[] {Peru, Bolivia, Argentina}},
{Bolivia, new[] {Chile, Peru, Brazil, Paraguay, Argentina}},
{Paraguay, new[] {Bolivia, Brazil, Argentina}},
{Argentina, new[] {Chile, Bolivia, Paraguay, Brazil, Uruguay}},
{Uruguay, new[] {Brazil, Argentina}}
};
We can transform this dictionary mapping nodes to neighbours into a list of edge tuples and build the graph:
var sagraph = new Graph(13, from x in SA.Keys
from y in SA[x]
select Tuple.Create(x, y));
Notice that this illustrates both a strength and a weakness of our earlier design decision to make the graph take a list of edges as its input. The graph could just as easily have taken a dictionary
mapping nodes to neighbours, and then we wouldn't have to do this transformation in the first place! It might have been a bad design decision. Such decisions always have to be made in the context of
understanding how the user is actually going to use the type.
Anyway, now finding a four-colour solution is easy with our solver:
var sasolver = new Solver(sagraph, 4);
var solution = sasolver.Solve();
foreach (var colour in solution)
    Console.Write(colour + " ");
We run this and get 0, 1, 2, 1, 2, 1, 0, 2, 0, 1, 2, 1, 3 for the colours. On the graph, that looks like this:
Which is clearly a legal colouring, and manages to almost do it in three colours; only Argentina has to be the fourth. This of course is not a desirable property of real political maps; it would make
Argentina stand out unnecessarily. But the way the algorithm prioritizes hypothesizing lower colours over higher colours means that it is likely that in many real-world graphs, the higher colours will
be rare with our solver.
Notice that the graph of South America is planar; it can be drawn without any of the edges crossing. In fact, all graphs that correspond to neighbouring regions on a plane or a sphere are planar
graphs. (Do you intuitively see why?) However there is nothing stopping our solver from looking at more exotic graphs. Consider a torus, a doughnut shape. We can represent a torus on this flat screen
by imagining that we "roll up" the square below so that the left edge meets the right edge, making a cylinder. And then bend the cylinder into a circle so that the the top edge meets the bottom edge:
Just to be clear: there are only seven regions on this rather bizarre map. All the regions with the same number that appear to be disjoint are connected because the torus "wraps around" to the
opposite side, if you see what I mean.
Notice how on this map every region touches every other region, and therefore cannot be coloured with fewer than seven colours. The four-colour theorem does not hold on a torus! (The seven-colour
theorem does; you can colour any map that "wraps around on both sides" with seven or fewer colours.)
You can't graph this as a planar graph, so I'm not even going to try. The graph of this thing looks like this mess:
And indeed, if we fed this graph into our graph colourizer and asked for a four-colouring, it would not find one.
Let's call a subset of a graph where every node in the subset is connected to every other node in the subset a "fully connected subset". (In graph theory jargon that's called a clique.) Clearly if
any graph has a fully connected subset of size n then the graph requires at least n colours (and perhaps more). Graphs that have a lot of fully connected subsets are interesting to a lot of people.
Next time we'll look at some particularly interesting graphs that have lots of fully connected subsets, throw our graph colourizer at them, and see what happens.
"Let's call a subset of a graph where every node in the subset is connected to every other node in the subset a "fully connected subset"
Is that not just a clique?
Yes. - Eric
How could a fully connected subset of size n require more than n colors? That doesn't seem to make sense.
I didn't say that a fully connected subset of size n ever requires more than n colours. I said "if any graph has a fully connected subset of size n then the graph requires at least n colours (and
perhaps more)" which is a completely different statement than your statement.
For example, consider the graph A--B--C--D--E--back to A. That graph has five fully connected subgraphs of size two: {A, B}, {B, C}, {C, D}, {D, E} and {E, A}. It has no fully connected subgraphs of
any size larger than two. We therefore know that it requires at least two colours; in fact, it requires three. Knowing the size of the largest clique only gives us a lower bound; the topology of the
rest of the graph is also relevant. - Eric
If you didn't want to write the backtracking algorithm yourself, you could also solve this problem using Microsoft Solver Foundation:
var context = SolverContext.GetContext();
var model = context.CreateModel();
var domain = Domain.IntegerRange(1, 4);
Decision Brazil = new Decision(domain, "Brazil");
Decision FrenchGuinea = new Decision(domain, "FrenchGuinea");
Decision Suriname = new Decision(domain, "Suriname");
Decision Guyana = new Decision(domain, "Guyana");
Decision Venezuela = new Decision(domain, "Venezuela");
Decision Colombia = new Decision(domain, "Colombia");
Decision Ecuador = new Decision(domain, "Ecuador");
Decision Peru = new Decision(domain, "Peru");
Decision Chile = new Decision(domain, "Chile");
Decision Bolivia = new Decision(domain, "Bolivia");
Decision Paraguay = new Decision(domain, "Paraguay");
Decision Uruguay = new Decision(domain, "Uruguay");
Decision Argentina = new Decision(domain, "Argentina");
model.AddDecisions(Brazil, FrenchGuinea, Suriname, Guyana, Venezuela, Colombia, Ecuador, Peru, Chile, Bolivia, Paraguay, Uruguay, Argentina);
model.AddConstraints("colouring",  // constraint-set name required by the API; the original line was lost
Brazil != FrenchGuinea, Brazil != Suriname, Brazil != Guyana, Brazil != Venezuela, Brazil != Colombia, Brazil != Peru, Brazil != Bolivia,
Brazil != Paraguay, Brazil != Uruguay, Brazil != Argentina, FrenchGuinea != Suriname, Suriname != Guyana, Guyana != Venezuela,
Venezuela != Colombia, Colombia != Ecuador, Ecuador != Peru, Peru != Bolivia, Peru != Chile, Chile != Bolivia, Chile != Argentina,
Bolivia != Paraguay, Bolivia != Argentina, Paraguay != Argentina, Argentina != Uruguay);
var solution = context.Solve();
Eric - thanks. I mis-read your original statement - hence the confusion. =)
Everyone makes the "map requiring 7 colours" on the torus look so complicated and ugly, but it isn't really.
Start with a hexagonal tiling of the plane (pardon what follows...each "*" is a tile but it should really be in a monospaced font):
* * * * * * *
* * * * * * * *
* * * * * * *
* * * * * * * *
Now number the tiles in each row from 0 to 6, but stagger each row differently. Each "0" tile in each row should be two and a half tiles to the right of a "0" tile in the row below it:
Now (this is the part where ascii fails me) draw a parallelogram with vertices in the centre of four tiles marked "0" (such as the four leftmost "0" tiles in the diagram above), and identify opposite
edges as before. There's your map. If you want your torus to be constructed from a square instead of a parallelogram you can apply an affine transformation.
...and the commenting software ate the leading spaces in my awful ascii diagrams. I should really know better.
Maybe I missed something but is there any significance to the fact that you draw your graphs such that sometimes you have multiple lines leading out from a single point and sometimes not? Consider
Chile which has 3 lines leading out from two points. Does that mean anything?
It means that I am (1) not very good at using PowerPoint, and (2) too lazy to install Visio. - Eric
Ok, thanks. Wasn't trying to cause trouble, just wondering if I had missed something of significance.
Active Sensing and Data Driven Algorithm Selection
IEEE KIMAS Conference, Boston, October 2003
Roger Brockett
Slides [pdf]
Making use of Hidden Symmetries in Data
IEEE International Symposium on Information Theory, Japan, July 2003
Roger Brockett
It is a cornerstone of theoretical physics that the discovery and exploitation of symmetries is basic to understanding the structure of nature. On the other hand, information theorists have come to
view symmetries as both a blessing and a curse in the design of codes. In this talk we emphasize the positive aspects focusing on the use of symmetries in the algorithmic domain. Our emphasis will be
on problems involving the estimation of random processes, where the symmetries are revealed by the existence of a suitable type of Lie algebra, but we will also show that the existence of symmetries
lies behind a number of well known practical algorithms used in signal processing and linear algebra. Finally, we will discuss the use of approximate symmetries in the context of a problem arising in
neuroscience involving place cell data.
Slides [html]
Subriemannian Geodesics, Oscillations, and a Feedback Regulator Problem
Banach Center Workshop on Geometry and Nonlinear Control, Poland, June 2003
Roger Brockett
Many important engineering systems accomplish their purpose using cyclic processes whose characteristics are under feedback control. Examples involving thermodynamic cycles and electromechanical
energy conversion processes are particularly noteworthy. Likewise, cyclic processes are prevalent in nature and the idea of a pattern generator is widely used to rationalize mechanisms used for
orchestrating movements such as those involved in locomotion and respiration. In this talk, we develop a linkage between the use of cyclic processes and the control of nonholonomic systems,
emphasizing the problem of achieving stable regulation.
Slides [html]
The Weyl group as a symmetry group for subriemannian geodesics
Workshop on Geometry, Dynamics and Mechanics in Honour of the 60th Birthday of J.E. Marsden, Fields Institute, Toronto, August 2002
Roger Brockett
Audio [real audio]
Dynamical Systems and Computational Mechanisms
Royal Academy of Sciences, Belgium, July 2002
Roger Brockett
Developments in fields outside electrical engineering and computer science have raised questions about the possibility of carrying out computation in ways other than those based on digital logic.
Quantum computing and neuro-computing are examples. When formulated mathematically, these new fields relate to dynamical systems and raise questions in signal processing whose resolution seems to
require new methods. Up until now, the statement that computers are dynamical systems of the input/output type has not gotten computer scientists especially excited because it has not yet been shown
to have practical consequence or theoretical power. It is my goal here to try to show that this point of view has both explanatory value and mathematical interest.
Slides-1 [pdf], Slides-2 [pdf], Slides-3 [pdf], Slides-4 [pdf]
Flux help
Hi, I would appreciate some guidance on this. Thanks.
Evaluate the flux of the vector field F=18zi-12j+3yk across that part of the plane 2x+3y+6z=12 which is located in the first octant of the plane 2x+3y+6z=12. Use the unit normal to the plane that points in the positive z-direction.
So far I have found the normal to the plane ... as 2i+3j+6k/7
I was intending on using the eqn ∫∫ F · n dS, but I am not sure what to do with ds = sqrt(...), and I am not sure from where to where I am integrating the flux.
Re: Flux help
You mean "that part of the plane 2x+3y+6z=12 which is located in the first octant". The additional "of the plane 2x+ 3y+ 6z= 12" confused me!
When x and y are 0, that becomes 6z= 12 or z= 2 so one corner of the triangle is (0, 0, 2). When y and z are 0, that becomes 2x= 12 or x= 6 so (6, 0, 0) is another corner of the triangle.
And when x and z are 0, that becomes 3y= 12 or y= 4 so (0, 4, 0) is the third corner. Yes, the unit normal to the plane is (2i+ 3j+ 6k)/7 (note the parentheses!)
The "sqrt" in "sqrt(...)dx" is that "7", the length of the vector.
Another way of looking at surface integrals, which I prefer, is this: We can write any surface, in an xyz coordinate system, as x= f(s,t), y= g(s,t), z= h(s,t) where s and t are the parameters. And we can write that as the vector equation $\vec{r}(s, t)= f(s,t)\vec{i}+ g(s,t)\vec{j}+ h(s,t)\vec{k}$. The two derivatives, $\vec{r}_s= f_s\vec{i}+ g_s\vec{j}+ h_s\vec{k}$ and $\vec{r}_t= f_t\vec{i}+ g_t\vec{j}+ h_t\vec{k}$, are vectors lying in the tangent plane to the surface so that $\vec{r}_s\times \vec{r}_t$ is normal to the surface. Further, we have $\vec{n}dS= \vec{r}_s\times\vec{r}_t\, dt\, ds$.

Here, our surface is the plane 2x+ 3y+ 6z= 12, which we can write as z= 2- (1/3)x- (1/2)y; taking x= 3s, y= 2t gives z= 2- s- t. The "position vector" of a point on the plane would be $\vec{r}= 3s\vec{i}+ 2t\vec{j}+ (2- s- t)\vec{k}$. The derivatives are $\vec{r}_s= 3\vec{i}- \vec{k}$ and $\vec{r}_t= 2\vec{j}- \vec{k}$. The cross product of those two vectors (positive z component) is $2\vec{i}+ 3\vec{j}+ 6\vec{k}$ so that $\vec{n}dS= (2\vec{i}+ 3\vec{j}+ 6\vec{k})\,ds\,dt$.

The integrand vector function is $\vec{F}= 18z\vec{i}- 12\vec{j}+ 3y\vec{k}$ so $\vec{F}\cdot\vec{n}\,dS= (36z- 36+ 18y)\,ds\,dt= (36(2- s- t)- 36+ 36t)\,ds\,dt= 36(1- s)\,ds\,dt$.

Projecting 2x+ 3y+ 6z= 12 into the xy-plane (z= 0) gives 2x+ 3y= 12 or y= 4- (2/3)x while x runs from 0 to 6. Since x= 3s and y= 2t, that means s runs from 0 to 2 and, for each s, t runs from 0 to 2- s. The integral will be
$\int_0^2\int_0^{2- s} 36(1- s)\,dt\, ds$
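If the corrected integrand above is right, the evaluation is quick:
$\int_0^2\int_0^{2-s} 36(1-s)\,dt\,ds = 36\int_0^2 (1-s)(2-s)\,ds = 36\left[2s-\frac{3s^2}{2}+\frac{s^3}{3}\right]_0^2 = 36\cdot\frac{2}{3} = 24.$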
Sparse BLAS
Sparse Basic Linear Algebra Subprograms (BLAS) Library
This page contains software for various libaries developed at NIST for the Sparse Basic Linear Algebra Subprograms (BLAS), which describes kernels operations for sparse vectors and matrices. The
current distribution adheres to the ANSI C interface of the BLAS Technical Forum Standard. Older libraries corresponding to previous designs are also included for archival and historical purposes.
Libraries for BLAS Technical Forum (BLAST) interfaces
• Download ANSI C++ implementation. This is an ANSI C++ implementation of the complete ANSI C specification. The distribution is quite small (less than 14 Kbytes) and it is meant as a starting point for developing optimized and architecture-dependent versions. (C++ was used, rather than C, as it has support for complex arithmetic and templates to facilitate the creation of various precision codes.) The library includes support for all four precision types (single and double precision, real and complex) and Level 1, 2, and 3 operations.
• Download ANSI C implementation. This is an earlier ANSI C implementation which supports only double-precision operations.
Original NIST Sparse BLAS (non-compliant interface)
Link to original NIST Sparse BLAS This is an implementation of a lower-level interface which was done before the BLAST Standard. (Actually, it was developed to help direct research during the BLAST standardization process.) The operations are more general and support specific storage formats. It is much larger, but includes support for compressed-column/row storage formats, and block variants. Although it does not adhere to the BLAST Standard, the libraries are useful in their own right.
The BLAS Technical Forum Standard defines the following sparse matrix and vector operations:
Sparse Vector (Level 1) Operations
(1) r <-- op(x) * y sparse dot-product
(2) y <-- alpha * x + y sparse vector update
(3) x <-- y|x sparse gather
(4) x <-- y|x ; y|x = 0 sparse gather and zero
(5) y|x <-- x sparse scatter
where r is a scalar, x is a sparse vector, y is a dense vector, y|x denotes the elements of y that are indexed by x.
Matrix-Vector (Level 2) and Matrix-Matrix (Level 3) operations
(6) y <-- alpha * op(A) * x + y matrix-vector multiply
(7) C <--- alpha * op(A) * B + C matrix-matrix multiply
(8) x <-- alpha * op(T) ^(-1) * x matrix-vector triangular solve
(9) B <-- alpha * op(T) ^(-1) * B matrix-matrix triangular solve
where A is a sparse matrix, T is a triangular sparse matrix, x and y are dense vectors, B and C are (usually tall and thin) dense matrices, and op(A) is either A, the transpose of A, or the
Hermitian of A.
Unlike their dense-matrix counterparts, the underlying matrix storage format is NOT described by the interface. Rather, sparse matrices must first be constructed before being used in the Level 2 and 3 computational routines.
There are various operations available for sparse matrix construction:
(A) xuscr_begin() point (scalar) construction
(B) xuscr_block_begin() block construction
(C) xuscr_variable_block_begin() variable block construction
(D) xuscr_insert_entry() insert single (i,j) value
(E) xuscr_insert_entries() insert list of (i,j) values
(F) xuscr_insert_col() insert a sparse column of values
(G) xuscr_insert_row() insert a sparse row of values
(H) xuscr_insert_clique() insert a clique of values
(I) xuscr_insert_block() insert a block of values
(J) xuscr_end() terminate matrix construction
(K) usgp() get matrix property
(L) ussp() set matrix property
(M) usds() destroy matrix
For each BLAS routine above (except for (K), (L), and (M)) there is a specification for four floating point types (single precision, double precision, complex single precision, and complex double precision); hence, the above specification yields 79 Sparse BLAS routines.
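As an illustration of this construct-then-compute workflow, the sketch below builds a small double-precision matrix one entry at a time and then applies operation (6). It follows the BLAST naming scheme listed above with the "d" precision prefix (BLAS_duscr_begin, BLAS_duscr_insert_entry, and so on); the header name and exact prototypes should be checked against whichever distribution you download.

    #include "blas_sparse.h"   /* header name used by the reference implementations; check your copy */

    int main(void) {
        double x[3] = {1.0, 1.0, 1.0};
        double y[3] = {0.0, 0.0, 0.0};

        /* (A) begin point (scalar) construction of a 3x3 double-precision matrix */
        blas_sparse_matrix A = BLAS_duscr_begin(3, 3);

        /* (D) insert individual (value, i, j) entries */
        BLAS_duscr_insert_entry(A, 1.0, 0, 0);
        BLAS_duscr_insert_entry(A, 2.0, 1, 1);
        BLAS_duscr_insert_entry(A, 3.0, 2, 2);
        BLAS_duscr_insert_entry(A, -1.0, 0, 2);

        /* (J) terminate construction; A may now be used in Level 2/3 routines */
        BLAS_duscr_end(A);

        /* operation (6): y <-- alpha * A * x + y, with alpha = 1 and no transpose */
        BLAS_dusmv(blas_no_trans, 1.0, A, x, 1, y, 1);

        /* (M) destroy the matrix handle */
        BLAS_usds(A);

        /* y now holds (0, 2, 3): row 0 is 1*1 + (-1)*1 = 0 */
        return 0;
    }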
See the following references for further information on [1] the Sparse BLAS specification, and [2] examples and discussion about the interface from some of the key members who worked on the standard:
(1) I. Duff, M. Heroux, R. Pozo, "An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum," ACM TOMS, Vol. 28, No. 2, June 2002, pp. 239-267.
(2) The BLAS Technical Forum Standard www.netlib.org/blas/blast-forum.
Development Status: Active Maintenance
Privacy Policy
Last Modified: 01/01/2006
Author: Roldan Pozo | {"url":"http://math.nist.gov/spblas/","timestamp":"2014-04-20T05:43:12Z","content_type":null,"content_length":"6920","record_id":"<urn:uuid:daccd6be-48ec-4ae7-a61a-5625ba1e8394>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
14.02 Principles of Macroeconomics
Problem Set 5
Posted: Wednesday, March 21, 2001
Due: Wednesday, April 4, 2001
Please remember to write your TA’s name and section time on the front page or your problem set.
Part 1: True/False Questions: Decide whether each statement is true or false and justify your answer with a short argument. (2 points each, 18 points total)
1. A country cannot reduce its trade deficit through a depreciation. Examining the equality NX= Sp + Sg – I shows that the trade deficit is solely determined through local private saving, government
saving and investment, and none of these are affected by the exchange rate.
2. An expansionary fiscal policy will eventually reduce the trade deficit.
3. A real depreciation would lead to an immediate improvement in the nominal trade balance.
4. Due to the nominal appreciation of the US Dollar, I was able to do more things during my trip to Israel.
5. Technology improvement shifts the AS curve, and leads to lower prices and higher GNP.
6. Under perfect capital mobility, when a country joins a system of fixed exchange rates, it gives up the freedom to choose its interest rate.
7. The increasing productivity and output of another country is a threat to the prosperity of the US.
8. The twin deficits refer to the government's budget deficit and the international trade deficit.
9. Similar to consumption, changes in the growth rate of capital spending are smoother than changes in the growth rate of GNP.
Part 2: Open Economy IS-LM (6 points each, 36 points total)
Consider the following open economy:
NX (Net Exports)=40-0.3*Y-30/E
P=P^*=1, where P is the domestic and P* is the foreign price level
The expected exchange rate E^e = 1, the foreign interest rate i^* = 0.1, and the exchange rate is flexible.
1. Write down and graph the LM relation. Explain what it represents and whether there are any differences relative to the closed economy.
2. Write down and interpret in words the equilibrium condition of the goods market. Are there any differences relative to the closed economy?
3. Derive and graph the open economy IS curve (in the Y-i space). Interpret, and explain any differences relative to the closed economy IS curve.
4. Explain the effects of a fiscal expansion using words and graphs (no algebra): what happens to output, the interest rate and the exchange rate? What happens to investment and net exports? Answer
the same questions for a monetary expansion as well.
5. How can the government decrease interest rates without changing output? What will happen to the exchange rate and net exports?
6. Can the government achieve lower interest rates without changing output and net exports in our model? In reality?
Part 3: Exchange Rates And Expectations (8 points each, 24 points total)
Consider an open IS-LM economy with fixed exchange rates E = 1, and the domestic interest rate (i) and the foreign interest rate (i*) given below:
i=0.15, i*=0.6.
1. What's the exchange rate people expect (E^e) in this market?
2. What will happen if people are right and the Central Bank devalues the currency to the level people think is correct? In particular, what happens to interest rates and output?
3. How does your answer change if, after the devaluation, people start thinking that the Central Bank is 'weak' and believe that it will start printing lots of money?
Part 4: Volatility of Investment (8 points each, 24 points total)
Consider an economy with N firms, each of them composed of one entrepreneur and capital. All firms have the same production function: y[n,t] = A[t] k[n,t]^d (no labor is used).
The real interest rate (r) and the depreciation rate (d) are given as constants. Firms can adjust their capital stock (either up or down) every period.
1. What are the:
(a) GNP of the economy (Y);
(b) Per-period profit function;
(c) Optimal capital of the economy (K*);
(d) Capital/Output ratio (K/Y).
2. If A[t] = A[t-1], find gross and net investment at time t (I[N]).
3. Assume d=0.6, r=4%, d=10%, A[1]=A[2]=A[3]=1.0, A[4]=A[5]=A[6]=1.2, K[0]=K[1]*, I[N0]=0.Find and plot the path of GNP, K and I. | {"url":"http://web.mit.edu/course/14/14.02/www/S01/s01_ps5.htm","timestamp":"2014-04-18T15:41:17Z","content_type":null,"content_length":"30547","record_id":"<urn:uuid:998629aa-4f39-4024-97a0-35ccf9b4e72b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finitely generated subgroup implies cyclic
November 28th 2008, 07:23 AM #1
Super Member
Mar 2006
Finitely generated subgroup implies cyclic
Let $G = \frac { \mathbb {Q} } { \mathbb {Z} }$ under +, so the elements are the equivalence classes $\hat {r} = \{ s \in \mathbb {Q} : s-r \in \mathbb {Z} \}$. Write $r \equiv s \ (mod \ 1 )$ if
$r - s \in \mathbb {Z}$. If H is a finitely generated subgroup of $G = \frac { \mathbb {Q} } { \mathbb {Z} }$, then H is a finite cyclic subgroup. Also determine a generator for H.
proof so far.
Suppose that H is a finitely generated subgroup, so I need to find an element h such that <h> = H. How should I proceed with this? Thanks!
Let $G = \frac { \mathbb {Q} } { \mathbb {Z} }$ under +, so the elements are the equivalence classes $\hat {r} = \{ s \in \mathbb {Q} : s-r \in \mathbb {Z} \}$. Write $r \equiv s \ (mod \ 1 )$ if
$r - s \in \mathbb {Z}$. If H is a finitely generated subgroup of $G = \frac { \mathbb {Q} } { \mathbb {Z} }$, then H is a finite cyclic subgroup. Also determine a generator for H.
a finitely generated subgroup of $G$ is in the form $\frac{K}{\mathbb{Z}},$ where $\mathbb{Z} \subseteq K$ and $K$ is a finitely generated subgroup of $\mathbb{Q}.$ suppose $K=\sum_{j=1}^n r_j\mathbb{Z},$ where $r_j=\frac{a_j}{b_j} \in \mathbb{Q}.$ let $\prod_{j=1}^n b_j=c.$
then $K=\frac{1}{c}\sum_{j=1}^nr_jc \mathbb{Z}.$ since $\sum_{j=1}^nr_jc \mathbb{Z}$ is a subgroup of $\mathbb{Z},$ it's cyclic. thus $K=\frac{b}{c}\mathbb{Z}.$ since $\mathbb{Z} \subseteq K,$ we
must have $c=bm,$ for some integer $m.$ therefore: $K=\frac{1}{m}\mathbb{Z}.$ to finish
the proof show that: $\frac{K}{\mathbb{Z}}=<\frac{1}{m} + \mathbb{Z} >=\{\frac{j}{m}+\mathbb{Z}: \ 0 \leq j < m \}. \ \ \Box$
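For completeness, the last step: since $K=\frac{1}{m}\mathbb{Z},$ every element of $\frac{K}{\mathbb{Z}}$ has the form $\frac{j}{m}+\mathbb{Z}=j\left(\frac{1}{m}+\mathbb{Z}\right)$ for some integer $j$ with $0 \leq j < m,$ so $\frac{K}{\mathbb{Z}}$ is cyclic of order $m$ with generator $\frac{1}{m}+\mathbb{Z}.$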
November 29th 2008, 12:52 AM #2
MHF Contributor
May 2008 | {"url":"http://mathhelpforum.com/advanced-algebra/61999-finitely-generated-subgroup-implies-cyclic.html","timestamp":"2014-04-18T13:29:47Z","content_type":null,"content_length":"39587","record_id":"<urn:uuid:40f36409-08e6-4f51-8a86-fbd77736d716>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra 2 Tutors in Suffolk County, NY
Bayport, NY 11705
Math Tutor--High School/College levels
...I have taught this course at the college and community college levels. In addition, I have successfully tutored high school students preparing for the Algebra 2/Trig New York State Regents exam, a number of whom scored in the 90's. As a retired college Mathematics...
Offering 10+ subjects including algebra 2 | {"url":"http://www.wyzant.com/Suffolk_County_NY_Algebra_2_tutors.aspx","timestamp":"2014-04-24T20:50:15Z","content_type":null,"content_length":"62768","record_id":"<urn:uuid:dd597cc6-e960-4468-abed-205f917abf57>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lower bounds on optimal evaluation
Lower bounds on optimal evaluation
To the Types forum:
We thought the following recent result was interesting enough that it
warranted an announcement in the Types forum. A paper on the subject
is being written as you are reading this, and will be available
shortly from the WWW page http://www.cs.brandeis.edu/~mairson. We do
not think that the result spells the end of the "optimal evaluation"
enterprise, but without doubt, the theorem motivates the need for an
"optimal reevaluation."
Yours sincerely,
Andrea Asperti
University of Bologna
Harry Mairson
Brandeis University
Parallel beta reduction is not elementary recursive
We analyze the inherent complexity of implementing L\'evy's notion of
{\em optimal evaluation} for the $\lambda$-calculus, where similar
redexes are contracted in one step via so-called {\em parallel
$\beta$-reduction.} Optimal evaluation was finally realized by
Lamping, who introduced a beautiful graph reduction technology for
sharing {\em evaluation contexts} dual to the sharing of {\em values}.
His pioneering insights have been modified and improved in subsequent
implementations of optimal reduction.
We prove that the cost of parallel $\beta$-reduction is not bounded by
any Kalmar-elementary recursive function. Not merely do we establish
that the parallel $\beta$-step cannot be a unit-cost operation, we
demonstrate that the time complexity of implementing a sequence of $n$
parallel $\beta$-steps is not bounded as $O(2^n)$, $O(2^{2^n})$,
$O(2^{2^{2^n}})$, or in general, $O(f(n))$ where $f(n)$ is any fixed
stack of 2s with an $n$ on top.
A key insight, essential to the establishment of this nonelementary
lower bound, is that by a straightforward rewriting based on
$\eta$-expansion, any simply-typed $\lambda$-term can be reduced to
normal form in a number of parallel $\beta$-steps that is only linear
in the length of the explicitly-typed term. The result follows from
Statman's theorem that deciding equivalence of typed $\lambda$-terms
is not elementary recursive.
The main theorem gives a lower bound on the work that must be done by
{\em any} technology that implements L\'evy's notion of optimal
reduction. However, in the significant case of Lamping's solution, we
make some important remarks addressing {\em how} work done by
$\beta$-reduction is translated into equivalent work carried out by
his bookkeeping nodes. In particular, we identify the computational
paradigms of {\em superposition} of values and of {\em higher-order
sharing}, appealing to compelling analogies with quantum mechanics and | {"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1997-98/msg00086.html","timestamp":"2014-04-16T21:58:57Z","content_type":null,"content_length":"5063","record_id":"<urn:uuid:1d54ccbe-c2c3-47f9-858c-5f634a76c7df>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
Implicit differentiation - not defined?
June 4th 2012, 12:28 PM #1
Junior Member
May 2012
Implicit differentiation - not defined?
I have the problem:
Find the linear approximation about (-1,1) to the implicit function given by the relation: 2xy-x^8+y^2=-2
My problem is that trying implicit differentiation, I always get a slope which is not defined, because the denominator is (x+y), which is 0 at this point.
Did I do an error doing my calculation? If my calculation was right, can I answer that the tangent line is not defined at (-1,1)?
Re: Implicit differentiation - not defined?
No, there is no error. You can, however, say that the "linear approximation" is the vertical line x= -1.
Another way of looking at it is to treat this as an implicitly defining x as a function of y. Then, differentiating both sides with respect to y you would get x'= 0. That gives x= 0(y-1)- 1 or,
again, x= -1.
Last edited by HallsofIvy; June 4th 2012 at 01:45 PM.
Re: Implicit differentiation - not defined?
Hm, I always used the formula:
x-x[0]= x'/y' (x-x[0])
In that case for this example I get:
what was my mistake? Did I use the wrong formula in order to get the tangent line?
Re: Implicit differentiation - not defined?
the tangent line is x = -1
Re: Implicit differentiation - not defined?
What do x' and y' mean here? (And surely, you don't mean $x- x_0$ on both sides?)
In that case for this example I get:
what was my mistake? Did I use the wrong formula in order to get the tangent line?
I can't answer that until I know what you mean by x' and y'. Normally, I would interpret them as derivatives, but with respect to what variables? If x and y were given as functions of some other
variable, and x' and y' were the derivatives with respect to that third variable, the tangent line would be given by $y- y_0= (y'/x')(x- x_0)$ where the slope is the reciprocal of what you give.
Last edited by HallsofIvy; June 5th 2012 at 05:21 PM.
Re: Implicit differentiation - not defined?
The function in my example is given implicitly:
find the tangent line at the point: (-1,1) - I refer to this point as (x[0], y[0])
hence y'(x) = - (∂f/∂x) / (∂f/∂y)
so y = y[0] - y'(x) (x-x[0])
In this case y=1-0
so y=1
Trying to do the same with:
x=-1- 0*(y-1)
now my stupid question:
Why is x=-1 the tangent line and not y=1?
Re: Implicit differentiation - not defined?
first, $2xy-x^8+y^2=-2$ is not a function ... why?
second, all points on a vertical line have the same x-value, hence, the equation of the vertical line passing thru the point (-1,1) is x = -1
fyi, all points on a horizontal line have the same y-value , hence the equation of a horizontal line would be y = a constant.
Re: Implicit differentiation - not defined?
Now I try with a second one:
Find the linear approximation at (1,-1) for the implicit function y(x) given by the following relation:
than I can calculate:
and for x:
am I right?
Re: Implicit differentiation - not defined?
Re: Implicit differentiation - not defined?
no I didn't, sorry.
The correct problem is:
Re: Implicit differentiation - not defined?
Re: Implicit differentiation - not defined?
Oh I understood my error. I messed up one of the - signs :-)
Thank you!
Re: Implicit differentiation - not defined?
I have another question. Could someone check this for me please:
find the tangent line to the equation:
y^2 + e^(2x) + x ln(y) = 4 through (0,2)
First I take the derivatives:
through the point (0,2) I have:
so my tangent line is:
Is this true? I really hope so, because I am trying this examples since 2 hours :-)
Re: Implicit differentiation - not defined?
This is wrong to start with. f[y]= 2y+ x/y.
No, f[x]= 2e^2x+ ln(y).
through the point (0,2) I have:
No f[y](0, 2)= 4 and f[x](0, 2)= 2+ ln(2). You have "x" and "y" reversed.
so my tangent line is:
Oddly enough, you have again mixed up "x" and "y" so that this is correct!
This, however, is not correct. You should have -((ln(2)+ 2)/4)x+ 2. The x is not just multiplied by the "2/4".
Is this true? I really hope so, because I am trying this examples since 2 hours :-)
You really need to review basic differentiation rules and algebra.
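Collecting the corrections in one place: with $f(x,y)= y^2+ e^{2x}+ x\ln(y)$ we have $f_x= 2e^{2x}+ \ln(y)$ and $f_y= 2y+ \frac{x}{y}$, so at $(0,2)$ the slope is $y'= -\frac{f_x}{f_y}= -\frac{2+\ln(2)}{4}$ and the tangent line is $y= 2- \frac{2+\ln(2)}{4}\,x$.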
Re: Implicit differentiation - not defined?
I have another question:
Find all values for which the tangent line of the following equation is horizontal:
first I get the derivatives with respect to x and y:
Then I use implicit differentiation: y' = - (4x+y)/(x+2y)
As I remember, if the numerator is 0 the tangent line is horizontal, so the tangent line is horizontal where 4x+y = 0, that is for y = -4x - and if I were asked for the vertical tangent line, then the denominator
would have to be 0.
Is this true?
May 2012 | {"url":"http://mathhelpforum.com/calculus/199643-implicit-differentiation-not-defined.html","timestamp":"2014-04-18T12:37:51Z","content_type":null,"content_length":"87216","record_id":"<urn:uuid:2d00d99f-eef6-4410-ab87-6dc96bbab31b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Gene-incidence question/simulation
Re: st: Gene-incidence question/simulation
From Neil Shephard <nshephard@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Gene-incidence question/simulation
Date Mon, 23 Mar 2009 10:27:00 +0000
On Mon, Mar 23, 2009 at 10:15 AM, moleps islon <moleps2@gmail.com> wrote:
> Thanks for the statistical input. I truly appreciate this. However
> what I've done instead in order to get an estimate is to run a
> simulation whereby I select g random patients in my sample and "give
> them" the mutation and then do the usual calculations.
Sorry, but I see no benefit to this at all. How are you estimating
the proportion of your sample to '"give them" the mutation'?
The frequency of the allele will be pivotal to calculating the
penetrance, and from your code all you've done is pick a random sample
of g patients from the total N. This is highly unlikely to reflect
the true frequency of the polymorphism in the population, and all
you'll have is a range of estimates based on varying genotype
frequencies under a dominant model (see comment below).
Further, your code doesn't seem to be explicitly accounting for any
form of genetic model other than a dominant one, and you may wish to
consider recessive, additive and multiplicative models (after all, you
presumably have
no idea about the mode of inheritance).
What organism are you looking at and what marker are you considering?
If it's humans and a SNP, see if you can find the RS# on HapMap where
there will (hopefully) be an estimate of the allele frequency from
their standard populations.
I don't think you can draw any conclusions from the results that you
are obtaining here. You really need to genotype your samples for the
mutation to estimate the allele frequency, determine the frequency of
each genotype in your data set and then you can start deriving the
penetrance and joint probability of developing the two phenotypes.
"The combination of some data and an aching desire for an answer does
not ensure that a reasonable answer can be extracted from a given body
of data." ~ John Tukey (1986), "Sunset salvo". The American
Statistician 40(1).
Email - nshephard@gmail.com
Website - http://slack.ser.man.ac.uk/
Photos - http://www.flickr.com/photos/slackline/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-03/msg01208.html","timestamp":"2014-04-18T01:15:41Z","content_type":null,"content_length":"8141","record_id":"<urn:uuid:8c89b0d0-eeee-4899-bf94-208eed3e9c5e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
RootsWeb: GENEALOGY-DNA-L Re: [DNA] D Faux's cousins
GENEALOGY-DNA-L Archives
Archiver > GENEALOGY-DNA > 2011-12 > 1322933415
From: Obed W Odom <>
Subject: Re: [DNA] D Faux's cousins
Date: Sat, 03 Dec 2011 11:30:15 -0600
References: <mailman.175.1322869475.20702.genealogy-dna@rootsweb.com><C943924D-B236-4D5B-8979-68F9EA2052B9@charter.net><4EDA4AD5.5000901@mail.utexas.edu>
In-Reply-To: <4EDA4AD5.5000901@mail.utexas.edu>
In my previous post (below) I think I overestimated the chance of *both*
siblings having no crossover on any of their 44 autosomes. The number I
gave, (1/2)exp44, was the chance of one of the siblings having no
crossover, so the chance of both siblings having no crossover would be
(1/2)exp88, making the odds of 2 siblings sharing zero segments
(1/2)exp132, or only 1 out of 5445 trillion trillion trillion.
On 12/3/2011 10:14 AM, Obed W Odom wrote:
> The odds of 2 true siblings sharing zero segments would certainly be
> less than (1/2)exp44, or 1 out of 17.6 trillion. This number assumes no
> crossing over between homologous chromosomes. If one assumes that for
> each autosomal chromosome pair there is a 50% chance of having no
> crossover, then the chance of having no crossover on *any* of the 44
> inherited autosomal chromosomes would again be (1/2)exp44, making the
> odds of 2 siblings sharing zero segments (1/2)exp88, or 1 out of 309
> trillion trillion, a truly vanishingly small number.
This thread: | {"url":"http://newsarch.rootsweb.com/th/read/GENEALOGY-DNA/2011-12/1322933415","timestamp":"2014-04-18T20:44:25Z","content_type":null,"content_length":"4775","record_id":"<urn:uuid:a1e81dbf-6785-4487-8b6d-3f34ada0f4d9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Illustration 6.3
Java Security Update: Oracle has updated the security settings needed to run Physlets.
Click here for help on updating Java and setting Java security.
Illustration 6.3: Force and Displacement
Please wait for the animation to completely load.
Two forces that we often use as examples when talking about work are the force of gravity and the elastic force of springs. We know from earlier chapters that the character of the two forces is
different. The gravitational force on an object is always mg, while the spring force is dependent on how much the spring is stretched or compressed from equilibrium. As a consequence, the form for
the work done by each force will be different. Restart.
In general, for a constant force, WORK = F · Δx = F Δx cos(θ), where F is the constant force and Δx is the displacement; in the second expression F and Δx are the magnitudes of those vectors.
The graph shows Fcos(θ) vs. distance for a 1-kg object near the surface of Earth (position is given in meters). By checking the box you make the graph represent the Fcos(θ) vs. distance graph for a
mass on a spring with k = 2 N/m (the equilibrium point of the spring is conveniently set at x = 0 m). You can enter in values for the starting and stopping points for the calculation of the work and
then click the "evaluate area (integral)" button to calculate the work.
Begin by looking at the Fcos(θ) vs. distance graph for gravity (F[y] vs. y). Gravity is a constant force (near the surface of Earth). Therefore, the magnitude of work done by gravity will be | mg Δy
|. So, consider a ball at y = 0 m that drops to y = -2 m. Is the work done by gravity positive or negative? Use the graph to calculate the work. It is indeed positive (a negative force in the y
direction and a negative displacement in the y direction means cos(θ) = 1). This is because the force is in the same direction as the displacement. What about lifting an object up from y = -2 m to y
= 0 m? The work done is negative since the force is in the opposite direction from the displacement [cos(θ) = -1]. We can use | F Δy | because the force does not vary over the displacement. But what
if the force does vary, as in the case of a spring?
Check the check box to see the graph representing a spring force. Enter in values for the starting and stopping points for the calculation of the work and then click the "evaluate area (integral)"
button. Enter in x = 0 m for the starting point and x = 4 m for the ending point, representing the stretching of a spring. Is the magnitude of the work done | F Δx |? Why or why not? The magnitude of
the work is not | F Δx |. In the case of the spring, the magnitude of the work is 0.5*k x^2, which is the area under the force function (it is also the integral of F dx). Note also that the work is
negative: The force and the displacement are in the opposite direction [cos(θ) = -1].
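For the numbers used here (k = 2 N/m, stretching from x = 0 m to x = 4 m), the area calculation gives W = -0.5*k*x^2 = -0.5*(2 N/m)*(4 m)^2 = -16 J: a magnitude of 16 J and a negative sign, for the reasons just described.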
Enter in x = 4 m for the starting point and x = 0 m for the ending point. What happens to the sign of the work done by the spring now?
« previous
next » | {"url":"http://www.compadre.org/Physlets/mechanics/illustration6_3.cfm","timestamp":"2014-04-18T03:49:40Z","content_type":null,"content_length":"22639","record_id":"<urn:uuid:058c7c52-b14a-4e28-b8d1-4db84994a446>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum Dot-Sensitized Solar Cells
Valerie Collins, Fadzai Fungura, Zach Zasada
Solar energy is a very important energy topic in the United States and around the world. Advances in solar cell technology make solar energy more affordable every year. According to the U.S. Energy
Information Administration, the cost of solar photo voltaic capacity has decreased from approximately $300 per watt in 1956 to less than $5 per watt in 2009. Furthermore, with expected advances in
technology U.S. solar photo voltaic generating capacity is projected to increase from 30 megawatts in 2006 to 381 megawatts in 2030. Similarly, there are plans for large increases in the amount of
solar energy being put into use in other countries. According to the U.S. Energy Information Administration, in Japan the government has set a target for 30 percent of all households to have solar
panels installed by 2030. Quantum dot solar cells have the potential to produce high efficiencies with a relatively cheap cost to construct them. Furthermore, quantum dots have the capability of
multiple electrons being excited from one photon of light (in comparison to organic dyes which generally only have one electron excited per single photon)
Quantum Dot Theory
Quantum dots or nanocrystals are very small semiconductors, having a diameter that ranges from 2 to 10 nanometrs. The electrons in a quantum dot behave like particles in a finite spherical square
well. To understand quantum dots more let us first look at a one dimensional particle in an infinite well. Assuming L is the width of the well, then the potential V(x) = 0 if 0< x < a and V(x) = ∞
otherwise. A particle in this potential is free except at x = 0 and x = L (at the boundaries), where it cannot escape because of the infinite potential. The result of the infinite potential at the
boundaries means that we should have nodes there. We end up with sinusoidal standing waves in the well as shown in Figure 1.
The closed formula for the wavelength is λ = 2L/n. Since when inside the well the particle is completely free, the de Broglie formula p = h/λ holds where h is the planck's constant and the energy, E
= p²/2m where p is the momentum and m is the mass of the particle. Combining the equations, we end up with E = n²h²/8mL². Since the energy is a product of specific constants, a particle in a square
well can only have specific energy levels.
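As a rough illustration of the size dependence (using the free-electron mass; a real quantum dot calculation would use the semiconductor's effective mass), the ground-state energy of an electron in a well of width L = 5 nm is E[1] = h²/8mL² = (6.63 x 10^-34 J s)² / (8 x 9.11 x 10^-31 kg x (5 x 10^-9 m)²) ≈ 2.4 x 10^-21 J ≈ 0.015 eV. Because E[n] scales as 1/L², shrinking the well raises and spreads the levels, which is why the size of a dot tunes the energies (and hence the colors) it absorbs and emits.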
In a quantum dot the electron is trapped in a three dimensional finite spherical square well. Since it is 3 dimensional, the wave function of the electron has a radial, polar angle ɵ, and azimuthal
angle ϕ component, ψ(r, ɵ, ϕ). The wave function can be separated into radial and angular components ψ(r, ɵ, ϕ)= R(r)Y(ɵ, ϕ). However, we do not have a dependence on the angular components for our
energy levels, only radial dependence. After separation of variables our radial equation becomes -ħ²/2m(d²u/dr²) + Vu = Eu where u(r) = rR(r)
For a finite spherical well V(r) = -Vo for -a<r<a and V(r) = 0 for |r|>a where 2a is the width of the well and Vo is a positive constant as in Figure 2.
In this case the energy, E<0.
For r<-a, V = 0 and the radial equation becomes -ħ²/2m(d²u/dr²) = Eu and d²u/dr² = -(2mE/ ħ² )u = k²u where k = √(-2mE)/ħ and k is real and positive. The general solution is u(r) = Aexp(-kr) + Bexp
(kr) where A and B are arbitrary constants. Aexp(-kr) blows up as r →-∞ so, the only physically allowable solution is u(r) = Bexp(kr), for r< -a.
For the region -a<r<a, the equation becomes -ħ²/2m(d²u/dr²) - Vou = Eu and d²u/dr² = -q²u where q = √(2m(E+Vo))/ħ. Even though E<0, we have |E|<|Vo|, so E+Vo>0 and q is real and positive. The general solution is
u(r) = Csin(qr) + Dcos(qr), for -a<r<a, C and D are arbitrary constants.
For r>a, V(r) = 0 and the general solution is u(r) = Fexp(-kr) + Gexp(kr) but Gexp(kr) blows up as r →∞ so, the only physically allowable solution is u(r) = Fexp(-kr), for r>a.
The boundary conditions are that u and du/dr are continuous at -a and +a. Since the potential is an even function, and the sine and cosine functions have symmetry, sine is odd and cosine is even, we
can consider the solutions separately (assume the solutions are either odd or even). In this case we can impose the boundary conditions on a and the same would be true for -a since u(-r) = ±u(r)
Looking at the cosine (even) solutions first:
Continuity of u(r) at r = a; Fexp(-ka)= Dcos(qa) (1)
Continuity of du/dr at r = a; -kFexp(-ka)= -qDsin(qa) (2)
Dividing (2) by (1) we get k = q tan(qa)
Since k and q are both functions of energy, k = q tan(qa), is a formula for the allowed energies.
Making the equation look nicer, let z = qa and zo = a√(2mVo)/ħ. Recall, k = √(-2mE)/ħ and q = √(2m(E+Vo))/ħ, so k²+q² = 2mVo/ħ². Thus k²a²+q²a² = 2mVo a²/ħ² = zo². Therefore ka = √( zo² - z²) (3)
Since k = q tan(qa), ka = qa tan(qa) and substituting for z gives, ka = ztanz (4)
Equating (3) and (4) gives, ztan z = √( zo ² -z²), and tan z = √( (zo /z)² -1) (5)
Figure 3 shows a graph that gives the solution to the equation by plotting tan z and √( (zo/z)² -1) on the same grid.
The blue corresponds to tan z and the purple is for Sqrt[(3π/z)²-1], yellow is for Sqrt[(5π/z)²-1], and green is for Sqrt[(6/z)²-1]. The horizontal axis gives us the z values. For zo = 6, the allowed
energies occur at z = 1.35 and 4.01.
Looking at the odd/sine solutions,
Continuity of u(r) at r = a; Fexp(-ka)= Csin(qa) (6)
Continuity of du/dr at r = a; -kFexp(-ka)= qCcos(qa) (7)
Dividing (7) by (6) we get k = -q cot(qa)
Going through the same substitutions as before, the final equation becomes -cot z = √( (zo /z)² -1). Figure 4 shows the solutions
For zo = 6, the allowed energies occur at z = 2.704 and 5.222. Figure 5 shows a graph with both the odd and even solutions
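The two transcendental equations above are easy to solve numerically. The short C program below (an illustrative sketch, not part of the original write-up) brackets each root and bisects; for zo = 6 it returns z ≈ 1.35 and 3.99 for the even case and z ≈ 2.68 and 5.22 for the odd case, in line with the values read off the figures.

    #include <stdio.h>
    #include <math.h>

    /* even solutions: tan z = sqrt((zo/z)^2 - 1); odd solutions: -cot z = sqrt((zo/z)^2 - 1), with zo = 6 */
    static double f_even(double z) { return tan(z) - sqrt(36.0/(z*z) - 1.0); }
    static double f_odd(double z)  { return -1.0/tan(z) - sqrt(36.0/(z*z) - 1.0); }

    /* simple bisection; assumes f changes sign once on [lo, hi] */
    static double bisect(double (*f)(double), double lo, double hi) {
        for (int i = 0; i < 60; i++) {
            double mid = 0.5*(lo + hi);
            if (f(lo)*f(mid) <= 0.0) hi = mid; else lo = mid;
        }
        return 0.5*(lo + hi);
    }

    int main(void) {
        /* brackets chosen inside each branch of tan/cot, away from the poles */
        printf("even: z = %.3f and %.3f\n", bisect(f_even, 0.1, 1.5), bisect(f_even, 3.2, 4.6));
        printf("odd : z = %.3f and %.3f\n", bisect(f_odd, 1.7, 3.1), bisect(f_odd, 4.8, 5.9));
        return 0;
    }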
Creating Quantum Dots
The research on thin-film semi-conductor solar cells at Cornell College has led our physics and chemistry students down some interesting paths: from raspberry juice to organic dyes, we now present
our research purveying the potential energy source of quantum dots as sensitizers in titanium dioxide solar cells. The production of quantum dots is relatively simple and cheap, and the adjustable
absorption spectrum of these nanoparticles is easily controlled. The most difficult part of the research will be attaching the quantum dots to the TiO2 nanoporous layer of the solar cell to absorb
energy from photons and transfer that energy to electrons to create an electric current.
Quantum dots provide an ideal location for this conversion of energy due to their small size (on the order of a few nanometers) and chemical makeup – completely inorganic, and therefore quite stable
(http://pubs.acs.org/doi/full/10.1021/jp802572b?cookieSet=1). Diameters of the quantum dots range from 2-10 nm, which means that the energy levels of an excited electron (excitron) within the QD are
discrete – as opposed to continous, as the energy levels would be in a “bulk” semiconductor. In addition, the size of the quantum dot varies the band-gap, or the difference in energy between an
excited electron within its “hole” and an excited electron outside its hole. This means that energy absorbed by electrons in a quantum dot from high-energy photons can be transferred out of the QD
via the excited electron, and the electron “hole” can be filled with a lower-energy electron. In a solar cell, titanium dioxide provides a pathway for the excited electron to leave the quantum dot,
while the electrolyte (usually an iodide solution) provides a wealth of lower-energy electrons to fill the holes left behind by the excitrons. Thus, the flow of electrons creates a current.
We used cadmium oxide and selenium to create cadmium selenide nanocrystals in a “wet-chemical” reaction, as outlined by UW-Madison, albeit modified to create a higher concentration of the reaction
solution, as shown in Figure 6 and the video clip below.
0.200 g selenium was dissolved in 2.8 mLs trioctylphosphine (TOP) and 7.2 mLs octadecene on a stirring hot plate (0.253 M). 0.300 g cadmium oxide was dissolved in 10 mLs oleic acid and 20 mLs
octadecene at 225°C in a round-bottom flask. The selenium solution was then added to the pre-heated cadmium solution, and 5-10 mLs of the reaction solution was removed and quenched at room
temperature at various intervals (10-30 seconds) until all of the reaction solution was removed from the flask. The growth of the quantum dots can be monitored visibly by the color change of the
reaction solution, which remains stable post-quenching. To purify the quantum dots, all of the aliquots were recombined and the nanocrystals were precipitated with 100% (200 proof) ethanol (1 part QD
solution to 2 parts ethanol). The precipitate was centrifuged, the solvent removed, and then the quantum dots were redissolved in a small amount of toluene (as little as needed to re-dissolve the
dots, usually 1-2 mLs). This process was repeated, and the toluene solution was filtered through a paper filter to remove large debris.
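(As a check on the stated concentration: 0.200 g of selenium at roughly 79 g/mol is about 2.5 mmol, and 2.5 mmol in the 10 mL of TOP plus octadecene gives approximately 0.25 M, consistent with the 0.253 M quoted above.)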
To characterize the quantum dots, a UV-vis spectrometer was used to identify the absorption of the toluene-dissolved quantum dots. Figure 7 shows the absorption spectra of 5 different intervals of
the quantum dot reaction, having been purified and analyzed separately – the change in peak absorption corresponds to a change in color as well as change in size of the quantum dots.
Figure 9 shows the purified quantum dots. It is very important to use pure (100%) ethanol when purifying the dots. 95% ethanol is not effective, as shown in Figure 8.
There is no apparent change in absorption nor contaminants of the quantum dots in toluene, having been purified with ethanol and toluene once, twice, and three times. Thus, I feel it is necessary to
purify the quantum dots simply by precipitation with ethanol, re-dissolution in toluene, and finally paper filtration.
Figure 10 shows a mixture of quantum dots purified with ethanol and dissolved in toluene, with the absorption range spanning 300 to 575 nm. This shows that a mixture of different sizes of quantum
dots used in a solar cell could be more efficient than a solar cell containing one size of quantum dots.
This is because we may be able to match the absorption spectra of the quantum dots and the emission spectra of the sun. The sun is composed of different wavelengths and frequencies of light. The sun
can be approximated to behave like a black body at 5777K. The sun emits mostly in the visible spectrum (400 to 700nm) with a maximum emission around 500nm and also infrared (near infrared 700 to
1000nm and short wave infrared 1000 to 3000nm) as shown in Figure 11.
Creating Solar Cells
How Does A Quantum Dot Solar Cell Work?
In general this type of solar cell works by a photon of light traveling into the cell and striking a quantum dot particle which in turn raises the energy of some of the electrons in the quantum dots.
These excited electrons get injected into the titanium dioxide and travel through it to the conducting surface of the electrode. While the electrons are traveling to the conducting surface of the
electrode they leave holes in the quantum dots that need to be filled by other electrons. To fill these holes the quantum dots take electrons from the electrolyte. The electron depleted electrolyte
in turn takes electrons from the counter electrode. This process creates a voltage across the cell and induces a current. Figure 12 below shows a diagram of a quantum dot solar cell.
Parts of a Solar Cell
There are many different parts that make up a solar cell. Furthermore, there are many parts of a cell that can be altered to affect how well the cell performs. One of the main components of the solar
cell is the electrode.
The electrode is the first part of the cell we deal with in constructing a solar cell. The electrode is a glass slide that has a thin conducting layer on one side. The thin conducting layer is made
of tin oxide. The electrode is special because it has to be able to conduct electricity and at the same time be transparent enough so that light can travel through it. The electrodes must be cleaned
before titanium dioxide can be attached to them. We do this by rinsing the conducting side of the electrode, once in deionized water, once in methanol, and then with ethanol. From here we bake the
electrode in the oven at 600º C for an hour to bake off any organic materials that may have held on through the rinsing. Once the electrodes cool they are ready to have the titanium dioxide attached
to them.
Titanium dioxide (TiO[2])
The titanium dioxide is a very important part of the solar cell. It holds the quantum dots in place and gives a pathway for the electrons to travel from the quantum dots to the conducting layer of
the electrode. Furthermore, the titanium dioxide has to have a large surface area so that it can hold a large number of the quantum dots in a smaller area. We use a titanium dioxide mixture that was
created by students who worked on the solar cell project this past summer. We tried three different methods for attaching the titanium dioxide to the conducting surface of the electrode.
Screen Printing
The first method we attempted was a screen printing method. This method works by placing a very finely meshed screen on top of the electrode and doctor blading the titanium dioxide paste through the
screen onto the conducting side of the electrode. We quickly abandoned this method because of various problems ranging from the titanium dioxide pooling to the titanium dioxide smearing.
Doctor Blade Method
The second method we used was known as doctor blading. For this method we taped off a small rectangle so that it covers the majority of one half the electrode. We then placed a few drops of the
titanium dioxide in the non-tapped area and used a glass slide to smooth out the titanium dioxide until it appeared to be uniform over the entire exposed surface.
New Doctor Blade Method
The current and preferred method we use for depositing the titanium dioxide is the new doctor blade method. For this method we place two electrodes edge to edge and tape the top half of one electrode
and the bottom half of the other as illustrated in Figure 13. We then smooth out the titanium dioxide on the electrodes in the same way as we did for the original doctor blade method. We prefer this
method because it produces two cells that have similar titanium dioxide depositions which aids in making the cells reproducible.
For all three methods of attaching the titanium dioxide the electrode then gets baked in the oven at 450ºC for an hour. Once the electrode cools to room temperature the electrode is then ready for
the quantum dots to be attached.
Quantum dots
The quantum dots are the most important part of the solar cell. The quantum dots are the part of the cell that adsorbs the incoming photons of light and send out the usable energy. We tried three
different methods to attach the quantum dots to the titanium dioxide. For the first method we placed the electrodes with the baked titanium dioxide in a Petri dish and deposit a few drops of the
quantum dot solution on top of the electrode. We then allowed the quantum dots to adhere over night. For a second method we tried mixing the quantum dots with the titanium dioxide paste. We then took
this paste and spread it over already deposited titanium dioxide. For the third method of attaching quantum dots we took an electrode with titanium dioxide deposited on it and soaked it in
3-mercaptopropanoic acid for a few hours. We then soaked the cells in quantum dots dissolved in toluene for a few hours. Once the Quantum dots are attached we need to then prepare the counter
Counter Electrode
The counter electrode is very similar to the electrode; however, a layer of silver nitrate must be added to the conducting side of the counter electrode. The silver nitrate catalyzes the
recombination of electrons into the electrolyte from the counter electrode. To add this layer we place a few drops of the liquid silver nitrate on the conducting side of the counter electrode and
spread them around evenly. We then bake the counter electrode in the oven for 50 minutes at 375º C. Once the counter electrode has cooled the cell is ready to be sealed.
Sealing the Cell
To seal the cell we tried two different methods. For the fist method we cut L-shaped strips of stretched Parafilm and placed them around the titanium dioxide. We made sure to leave two small holes
for adding electrolyte later. An example of this is shown in Figure 14. We then took the cell and placed it on top of a heat plate set on low heat (1 or 2) and then placed a weight on top of
the cell. We left the cell there for approximately three minutes and then allowed the cells to cool to room temperature. The current method we use for sealing the cells makes use of silicone caulk.
This method works best when pared with the new doctor blade method for depositing the titanium dioxide. We start by clamping the electrode and counter electrode together in their final resting
positions. We then smear the silicone caulk around the area where the electrodes meet making sure to leave two holes so that electrolyte can be added later. An example of this is shown in Figure 15.
Once the cell is sealed it is ready to have leads attached to it and electrolyte added.
Attaching Leads and Adding Electrolyte
To attach the leads we use silver epoxy to attach the 40 gauge wires to the electrode and counter electrode. Once the silver epoxy sets the cell is ready to have the electrolyte added. We simply drop
in electrolyte with a dropper in one of the holes we previously left in the seal and the capillary effect takes over and spreads the electrolyte out over the cell. The electrolyte uses a redox
reaction between triiodide and iodide to transfer the electrons from the counter electrode into the quantum dots. When the photon of light enters the cell and excites an electron in the quantum dot
the quantum dot needs an electron. The quantum dot then strip an electron from the iodide (oxidizes it) which turns the iodide into a triiodide. The triiodide then travels to the counter electrode
and takes an electron from the counter electrode (reduces it) which turns the triiodide back into an iodide. Once the cell has the leads attached and electrolyte added it is ready for tests to be
performed on it.
Currently I have only performed two different kinds of test on the cells. The first test that I use on the cells makes use of the digital multimeter to measure the voltage across the cells when there
is a large amount of light shining on them from a modified projector. I then measure the voltage across the cells when there is only room lighting on the cells and I find the difference of these two
measurements. If there is a high enough voltage across the cells I will then perform a second test which is taking an IV Curve for the cells. To qualify for a second test I compare the difference in
voltages across the cells with quantum dots attached to the difference in voltage across cells that only have titanium dioxide. If the difference in voltage across the cells with quantum dots is
higher than that of the titanium dioxide cells then I run the IV Curve test on them.
IV Curve
An IV Curve is a graph of all of the possible current and voltage outputs for a cell (I usually only look at IV Curves for when the current and voltage are greater than or equal to zero.) From an IV
Curve I can gather a large amount of diagnostic information about performance of the cells.
Short Circuit Current and Open Circuit Voltage
A cell produces its maximum current when there is no resistance in the circuit, i.e., when there is a short circuit between its positive and negative electrodes. This maximum current is known as the
short circuit current and is abbreviated I[sc]. When the cell is shorted, the voltage across the circuit is zero. Conversely, the maximum voltage occurs when there is a break in the circuit. This is
called the open circuit voltage (V[oc]). Under this condition the resistance is infinitely high and there is no current, since the circuit is incomplete. To make an actual IV Curve we use a program
to record the current and voltage of these two points along with a large number of intermediate values. This results in a graph similar to Figure 18.
Max Power
From the graph of values we can calculate the max power out of our cells. We first find the product of the current and voltage at every point. Where this product is a maximum is the max power for the
An example of this is given in Figure 19.
Fill Factor
The fill factor is a measure of the quality of a cell. It compares the theoretical power of the cell as derived from the short circuit current and open circuit voltage to the actual max power output
by the cell. Figure 20 gives an example of how to calculate the fill factor for a given IV Curve.
Efficiency is the main measurement for the performance of a cell. The cells efficiency is the cells ability to convert light into electricity. To measure the cells efficiency we take the cells out
into direct sunlight and use a pyroheliometer to measure the power going into the cell. Furthermore, we measure the cells short circuit current in the direct sunlight. With the cells short circuit
current and area we can calculate the power output by the cell. We then take the power output of the cell and divide it by the power going into the cell and multiply by 100. This gives us the
efficiency as a percentage value.
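To make these quantities concrete, the short C program below computes the maximum power point, fill factor, and efficiency from one IV sweep. The current and voltage values, the cell area, and the irradiance are made-up numbers for illustration only, and the efficiency here is computed from the maximum power point (the usual definition) rather than from the short circuit current alone.

    #include <stdio.h>

    int main(void) {
        /* one IV sweep: voltage in volts, current in milliamps (illustrative values only) */
        double V[] = {0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.33};
        double I[] = {0.90, 0.88, 0.85, 0.80, 0.70, 0.52, 0.25, 0.00};
        int n = (int)(sizeof(V)/sizeof(V[0]));

        double Isc = I[0];        /* current at V = 0 (short circuit) */
        double Voc = V[n - 1];    /* voltage at I = 0 (open circuit)  */

        /* maximum power point: largest product of current and voltage */
        double Pmax = 0.0;
        for (int k = 0; k < n; k++) {
            double P = V[k] * I[k];     /* volts x milliamps = milliwatts */
            if (P > Pmax) Pmax = P;
        }

        double FF = Pmax / (Isc * Voc);   /* fill factor */

        /* efficiency: power out over power in, for an assumed active area and
           an assumed irradiance reading of roughly one sun */
        double area_cm2 = 1.0;
        double irradiance_mW_per_cm2 = 100.0;
        double P_in = irradiance_mW_per_cm2 * area_cm2;   /* mW */
        double efficiency = 100.0 * Pmax / P_in;          /* percent */

        printf("Isc = %.2f mA, Voc = %.2f V\n", Isc, Voc);
        printf("Pmax = %.3f mW, FF = %.2f, efficiency = %.2f %%\n", Pmax, FF, efficiency);
        return 0;
    }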
We were able to get results out of one of the cells we created. For the cell that was soaked in 3-mercaptopropanoic acid we got a substantial change in voltage when we performed the DMM test that
could not be accounted for by the titanium dioxide only. We also took an IV curve for this cell as shown in Figure 21. The new doctor blade method and sealing method show lots of promise and have
worked better than expected so far.
Future Plans
While we have not thoroughly tested the effects of quantum dots in the solar cells, one method may have functioned well enough to investigate further. A plain titanium dioxide solar cell was immersed
in 3-mercaptopropionic acid (MPA) overnight, then immersed in the quantum dot-toluene solution for 3 hours. The MPA acted as a ligand, or glue, to which the QDs attached to the TiO[2] layer, and the
solar cell gave a measurable voltage when exposed to light. Further research will include reproducing the MPA-capped QD solar cell, as well as investigating other ligands to bind the QDs, such as
pyridine and cystiene. Further investigation should also be made into the quantum dot synthesis itself, such as percent yield, fluorescence spectroscopy of the quantum dots (to determine actual
U.S. Energy Administration
The Encyclopedia of Alternative Energy and Sustainable Living
National Renewable Energy Laboratory National Center for Photovoltaics
National Instruments | {"url":"http://www.cornellcollege.edu/physics/courses/phy312/Student-Projects/QD-solar-cells/QD-solar-cells.html","timestamp":"2014-04-18T23:58:12Z","content_type":null,"content_length":"58203","record_id":"<urn:uuid:c1c9e01e-b5e7-47a5-a0d0-688d316a55e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
PDL::Graphics::TriD::Rout - Helper routines for Three-dimensional graphics
This module is for miscellaneous PP-defined utility routines for the PDL::Graphics::TriD module. Currently, there are
pp_def( 'combcoords', GenericTypes => [F,D], DefaultFlow => 1, Pars => 'x(); y(); z(); float [o]coords(tri=3);', Code => ' $coords(tri => 0) = $x(); $coords(tri => 1) = $y(); $coords(tri => 2) = $z(); ', Doc => <<EOT
=for ref
Combine three coordinates into a single piddle.
Combine x, y and z to a single piddle the first dimension of which is 3. This routine does dataflow automatically.
Repulsive potential for molecule-like constructs.
repulse uses a hash table of cubes to quickly calculate a repulsive force that vanishes at infinity for many objects. For use by the module PDL::Graphics::TriD::MathGraph. For definition of the
potential, see the actual function.
Attractive potential for molecule-like constructs.
attract is used to calculate an attractive force for many objects, of which some attract each other (in a way like molecular bonds). For use by the module PDL::Graphics::TriD::MathGraph. For
definition of the potential, see the actual function.
This is the interface for the pp routine contour_segments_internal - it takes 3 piddles as input
$c is a contour value (or a list of contour values)
$data is an [m,n] array of values at each point
$points is a list of [3,m,n] points, it should be a grid monotonically increasing with m and n.
contour_segments returns a reference to a Perl array of line segments associated with each value of $c. It does not (yet) handle missing data values.
The data array represents samples of some field observed on the surface described by points. For each contour value we look for intersections on the line segments joining points of the data. When
an intersection is found we look to the adjoining line segments for the other end(s) of the line segment(s). So suppose we find an intersection on an x-segment. We first look down to the left
y-segment, then to the right y-segment and finally across to the next x-segment. Once we find one in a box (two on a point) we can quit because there can only be one. After we are done with a
given x-segment, we look to the leftover possibilities for the adjoining y-segment. Thus the contours are built as a collection of line segments rather than a set of closed polygons.
Copyright (C) 2000 James P. Edwards Copyright (C) 1997 Tuomas J. Lukka. All rights reserved. There is no warranty. You are allowed to redistribute this software / documentation under certain
conditions. For details, see the file COPYING in the PDL distribution. If this file is separated from the PDL distribution, the copyright notice should be included in the file. | {"url":"http://search.cpan.org/dist/PDL/Graphics/TriD/Rout/rout.pd","timestamp":"2014-04-20T06:07:06Z","content_type":null,"content_length":"14221","record_id":"<urn:uuid:715ea45e-5310-42a6-8a11-fab46c1f0a65>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
How can it be that ∞ always has an ) in
General Question
How can it be that ∞ always has an ) in interval notation?
How can it be that a set could include everything up to BUT NOT INCLUDING infinity?
Example of the math problem that is befuddling me:
[-3, ∞)
-3 [――――►
8 Answers
Because you can’t include everything. It never ends, how can it be fully contained?
Well. The parenthesis is used to denote that you approach the number, as opposed to the bracket denoting that you reach that number.
No matter what number you have, no matter how high it is, you will never reach “infinity”. You can only move towards it. You can reach -3. You can reach 5,000. You can reach 7,849,503,845,298. You
can’t reach infinity.
After saying “reach” so much, it doesn’t even look like a word any more.
@Sarcasm OOOOOOOOOooooooooooooohhhhhhhhhhhhhhh. Ok. Now I get it. So much better. GA.
Also note that ∞ is not itself a member of the set of real numbers.
Remember to stick to definitions. You live and die by them in mathematics. Recall that imaginary numbers have their own definition. They are the set of all real multiples of i, the sqrt(-1). So, no,
∞ is not a member of that set either.
The appearance of ∞ in interval notation is a shorthand way of saying include all reals above the real number beforehand (a, ∞) or all reals below the one afterward (-∞, b).
The definition of a closed interval is that it contains its endpoints. As these intervals have no endpoint on the infinite side to contain, they are regarded as open on that side (the example you gave, [-3, ∞), is
closed on the left side and open on the right side).
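Put a little more formally (this note is added here and is not part of any of the answers above), the interval notation is shorthand for a set of real numbers:
$[-3, \infty) = \{\, x \in \mathbb{R} : x \ge -3 \,\}$
Since $\infty$ is not a real number, there is no element "at infinity" that could be included, which is why the side with $\infty$ always gets a parenthesis rather than a bracket.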
What a great, friendly interface, full of colors, which makes the Algebrator software an easy program to work with. It's also so easy to work on that you don't have to interrupt your stream of thought every
time you need to interact with the program.
Tom Sandy, NE
I was having problems learning quadratic equations, until I purchased your software. Now I know how to do not only do quadratics, but I also learned with the step by step examples how to do other
more difficult equations and inequalities. Great product!
M.V., Texas
I really needed a way to get help with my homework when I wasn't able to speak with my teacher. Algebrator really solved my problem :)
Tom Sandy, NE | {"url":"http://www.sofsource.com/math-homework/algebra-with-pizzazz-word-prob.html","timestamp":"2014-04-17T06:46:14Z","content_type":null,"content_length":"21364","record_id":"<urn:uuid:071e2048-f0d9-4693-aaa2-5dd31821694d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
Treatment of Missing Data Part I
David C. Howell
This is a two part document. For the second part go to http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/Mixed-Models-for-Repeated-Measures2.html
When we have a design in which we have both random and fixed variables, we have what is often called a mixed model. Mixed models have begun to play an important role in statistical analysis and offer
many advantages over more traditional analyses. At the same time they are more complex and the syntax for software analysis is not always easy to set up. I will break this paper up into two papers
because there are a number of designs and design issues to consider. This document will deal with the use of what are called mixed models (or linear mixed models, or hierarchical linear models, or
many other things) for the analysis of what we normally think of as a simple repeated measures analysis of variance. Future documents will deal with mixed models to handle single-subject design
(particularly multiple baseline designs) and nested designs.
A large portion of this document has benefited from Chapter 15 in Maxwell & Delaney (2004) Designing Experiments and Analyzing Data. They have one of the clearest discussions that I know. I am going
a step beyond their example by including a between-groups factor as well as a within-subjects (repeated measures) factor. For now my purpose is to show the relationship between mixed models and the
analysis of variance. The relationship is far from perfect, but it gives us a known place to start. More importantly, it allows us to see what we gain and what we lose by going to mixed models. In
some ways I am going through the Maxwell & Delaney chapter backwards, because I am going to focus primarily on the use of the repeated command in SAS Proc Mixed. I am doing that because it fits
better with the transition from ANOVA to mixed models.
My motivation for this document came from a question asked by Rikard Wicksell at Karolinska University in Sweden. He had a randomized clinical trial with two treatment groups and measurements at pre,
post, 3 months, and 6 months. His problem is that some of his data were missing. He considered a wide range of possible solutions, including "last trial carried forward," mean substitution, and
listwise deletion. In some ways listwise deletion appealed most, but it would mean the loss of too much data. One of the nice things about mixed models is that we can use all of the data we have. If
a score is missing, it is just missing. It has no effect on other scores from that same patient.
Another advantage of mixed models is that we don't have to be consistent about time. For example, and it does not apply in this particular example, if one subject had a follow-up test at 4 months
while another had their follow-up test at 6 months, we simply enter 4 (or 6) as the time of follow-up. We don't have to worry that they couldn't be tested at the same intervals.
A third advantage of these models is that we do not have to assume sphericity or compound symmetry in the model. We can do so if we want, but we can also allow the model to select its own set of
covariances or use covariance patterns that we supply. I will start by assuming sphericity because I want to show the parallels between the output from mixed models and the output from a standard
repeated measures analysis of variance. I will then delete a few scores and show what effect that has on the analysis. I will compare the standard analysis of variance model with a mixed model.
Finally I will use Expectation Maximization (EM) and Multiple Imputation (MI) to impute missing values and then feed the newly complete data back into a repeated measures ANOVA to see how those
results compare. (If you want to read about those procedures, I have a web page on them at Missing.html).
The Data
I have created data to have a number of characteristics. There are two groups - a Control group and a Treatment group, measured at 4 times. These times are labeled as 1 (pretest), 2 (one month
posttest), 3 (3 months follow-up), and 4 (6 months follow-up). I created the treatment group to show a sharp drop at post-test and then sustain that drop (with slight regression) at 3 and 6 months.
The Control group declines slowly over the 4 intervals but does not reach the low level of the Treatment group. There are noticeable individual differences in the Control group, and some subjects
show a steeper slope than others. In the Treatment group there are individual differences in level but the slopes are not all that much different from one another. You might think of this as a study
of depression, where the dependent variable is a depression score (e.g. Beck Depression Inventory) and the treatment is drug versus no drug. If the drug worked about as well for all subjects the
slopes would be comparable and negative across time. For the control group we would expect some subjects to get better on their own and some to stay depressed, which would lead to differences in
slope for that group. These facts are important because when we get to the random coefficient mixed model the individual differences will show up as variances in intercept, and any slope differences
will show up as a significant variance in the slopes. For the standard ANOVA, and for mixed models using the Repeated command, the differences in level show up as a Subject effect and we assume that
the slopes are comparable across subjects.
The program and data used below are available at the following links. I explain below the differences between the data files.
http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/Wicksell.sas
http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/WicksellWide.dat
http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/WicksellLong.dat
http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/WicksellWideMiss.dat
http://www.uvm.edu/~dhowell/
Many of the printouts that follow were generated using SAS Proc Mixed, but I give the SPSS commands as well. (I also give syntax for R, but I warn you that running this problem under R, even if you
have Pinheiro & Bates (2000), is very difficult. I only give these commands for one analysis, but they are relatively easy to modify for related analyses.)
The data follow. Notice that to set this up for ANOVA (Proc GLM) we read in the data one subject at a time. (You can see this in the data shown. Files like this are often said to be in "wide format"
for reasons that are easy to see.) This will become important because we will not do that for mixed models. There we will use the "long" format.
│group│subj│time1│time2│time3│time4│
│1 │1 │296 │175 │187 │192 │
│1 │2 │376 │329 │236 │76 │
│1 │3 │309 │238 │150 │123 │
│1 │4 │222 │60 │82 │85 │
│1 │5 │150 │271 │250 │216 │
│1 │6 │316 │291 │238 │144 │
│1 │7 │321 │364 │270 │308 │
│1 │8 │447 │402 │294 │216 │
│1 │9 │220 │70 │95 │87 │
│1 │10 │375 │335 │334 │79 │
│1 │11 │310 │300 │253 │140 │
│1 │12 │310 │245 │200 │120 │
│2 │13 │282 │186 │225 │134 │
│2 │14 │317 │31 │85 │120 │
│2 │15 │362 │104 │144 │114 │
│2 │16 │338 │132 │91 │77 │
│2 │17 │263 │94 │141 │142 │
│2 │18 │138 │38 │16 │95 │
│2 │19 │329 │62 │62 │6 │
│2 │20 │292 │139 │104 │184 │
│2 │21 │275 │94 │135 │137 │
│2 │22 │150 │48 │20 │85 │
│2 │23 │319 │68 │67 │12 │
│2 │24 │300 │138 │114 │174 │
A plot of the data follows:
The cell means and standard errors follow:
----------------------------------------- group=Control ---------------------------------
Variable N Mean Std Dev Minimum Maximum
time1 12 304.3333333 79.0642240 150.0000000 447.0000000
time2 12 256.6666667 107.8503452 60.0000000 402.0000000
time3 12 215.7500000 76.5044562 82.0000000 334.0000000
time4 12 148.8333333 71.2866599 76.0000000 308.0000000
xbar 12 231.3958333 67.9581638 112.2500000 339.7500000
---------------------------------------- group=Treatment ----------------------------------
Variable N Mean Std Dev Minimum Maximum
time1 12 280.4166667 69.6112038 138.0000000 362.0000000
time2 12 94.5000000 47.5652662 31.0000000 186.0000000
time3 12 100.3333333 57.9754389 16.0000000 225.0000000
time4 12 106.6666667 55.7939934 6.0000000 184.0000000
xbar 12 145.4791667 42.9036259 71.7500000 206.7500000
The results of a standard repeated measures analysis of variance with no missing data and using SAS Proc GLM follow. You would obtain the same results using the SPSS Univariate procedure. Because I
will ask for a polynomial trend analysis, I have told it to recode the levels as 0, 1, 3, 6 instead of 1, 2, 3, 4. I did not need to do this, but it seemed truer to the experimental design. It does
not affect the standard summary table. (I give the entire data entry parts of the program here, but will leave it out in future code.)
Options nodate nonumber nocenter formdlim = '-';
libname lib 'C:\Users\Dave\Documents\Webs\StatPages\More_Stuff\MixedModelsRepeated';
Title 'Analysis of Wicksell complete data';
data lib.WicksellWide;
infile 'C:\Users\Dave\Documents\Webs\StatPages\More_Stuff\MixedModelsRepeated\WicksellWide.dat' firstobs = 1;
input group subj time1 time2 time3 time4;
xbar = (time1+time2+time3+time4)/4;
Proc Format;
Value group
1 = 'Control'
2 = 'Treatment';
Proc Means data = lib.WicksellWide;
Format group group.;
var time1 -- time4 xbar;
by group;
Title 'Proc GLM with Complete Data';
proc GLM ;
class group;
model time1 time2 time3 time4 = group/ nouni;
repeated time 4 (0 1 3 6) polynomial /summary printm;
Proc GLM with Complete Data
The GLM Procedure
Repeated Measures Analysis of Variance
Tests of Hypotheses for Between Subjects Effects
Source DF Type III SS Mean Square F Value Pr > F
group 1 177160.1667 177160.1667 13.71 0.0012
Error 22 284197.4583 12918.0663
Proc GLM with Complete Data
The GLM Procedure
Repeated Measures Analysis of Variance
Univariate Tests of Hypotheses for Within Subject Effects
Adj Pr > F
Source DF Type III SS Mean Square F Value Pr > F G - G H - F
time 3 373802.7083 124600.9028 45.14 <.0001 <.0001 <.0001
time*group 3 74654.2500 24884.7500 9.01 <.0001 0.0003 0.0001
Error(time) 66 182201.0417 2760.6218
Greenhouse-Geisser Epsilon 0.7297
Huynh-Feldt Epsilon 0.8503
Proc GLM with Complete Data
Analysis of Variance of Contrast Variables
time_N represents the nth degree polynomial contrast for time
Contrast Variable: time_1
Source DF Type III SS Mean Square F Value Pr > F
Mean 1 250491.4603 250491.4603 54.27 <.0001
group 1 2730.0179 2730.0179 0.59 0.4500
Error 22 101545.1885 4615.6904
Contrast Variable: time_2
Source DF Type III SS Mean Square F Value Pr > F
Mean 1 69488.21645 69488.21645 35.37 <.0001
group 1 42468.55032 42468.55032 21.62 0.0001
Error 22 43224.50595 1964.75027
Contrast Variable: time_3
Source DF Type III SS Mean Square F Value Pr > F
Mean 1 53823.03157 53823.03157 31.63 <.0001
group 1 29455.68182 29455.68182 17.31 0.0004
Here we see that each of the effects in the overall analysis is significant. We don't care very much about the group effect because we expected both groups to start off equal at pre-test. What is
important is the interaction, and it is significant at p = .0001. Clearly the drug treatment is having a differential effect on the two groups, which is what we wanted to see. The fact that the
Control group seems to be dropping in the number of symptoms over time is to be expected and not exciting, although we could look at these simple effects if we wanted to. We would just run two
analyses, one on each group. I would not suggest pooling the variances to calculate F, though that would be possible.
In the printout above I have included tests on linear, quadratic, and cubic trend that will be important later. However you have to read this differently than you might otherwise expect. The first
test for the linear component shows an F of 54.27 for "mean" and an F of 0.59 for "group." Any other software that I have used would replace "mean" with "Time" and "group" with "Group × Time." In
other words we have a significant linear trend over time, but the linear × group contrast is not significant. I don't know why they label them that way. (Well, I guess I do, but it's not the way that
I would do it.) I should also note that my syntax specified the intervals for time, so that SAS is not assuming equally spaced intervals. The fact that the linear trend was not significant for the
interaction means that both groups are showing about the same linear trend. But notice that there is a significant interaction for the quadratic.
Mixed Model
The use of mixed models represents a substantial difference from the traditional analysis of variance. For balanced designs (which roughly translates to equal sample sizes) the results will come out
to be the same, assuming that we set the analysis up appropriately. But the actual statistical approach is quite different and ANOVA and mixed models will lead to different results whenever the data
are not balanced or whenever we try to use different, and often more logical, covariance structures.
First a bit of theory. Within Proc Mixed the repeated command plays a very important role in that it allows you to specify different covariance structures, which is something that you cannot do under
Proc GLM. You should recall that in Proc GLM we assume that the covariance matrix meets our sphericity assumption and we go from there. In other words the calculations are carried out with the
covariance matrix forced to sphericity. If that is not a valid assumption we are in trouble. Of course there are corrections due to Greenhouse and Geisser and Hyunh and Feldt, but they are not
optimal solutions.
But what does compound symmetry, or sphericity, really represent? (The assumption is really about sphericity, but when speaking of mixed models most writers refer to compound symmetry, which is
actually a bit more restrictive.) Most people know that compound symmetry means that the pattern of covariances or correlations is constant across trials. In other words, the correlation between
trial 1 and trial 2 is equal to the correlation between trial 1 and trial 4 or trial 3 and trial 4, etc. But a more direct way to think about compound symmetry is to say that it requires that all
subjects in each group change in the same way over trials. In other words the slopes of the lines regressing the dependent variable on time are the same for all subjects. Put that way it is easy to
see that compound symmetry can really be an unrealistic assumption. If some of your subjects improve but others don't, you do not have compound symmetry and you make an error if you use a solution
that assumes that you do. Fortunately Proc Mixed allows you to specify some other pattern for those covariances.
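To make the structure concrete (this display is an addition and is not part of the original text): with four measurement times, compound symmetry forces the covariance matrix of the repeated measures to have the form
$\Sigma_{CS} = \begin{pmatrix} \sigma_e^2+\sigma_\pi^2 & \sigma_\pi^2 & \sigma_\pi^2 & \sigma_\pi^2 \\ \sigma_\pi^2 & \sigma_e^2+\sigma_\pi^2 & \sigma_\pi^2 & \sigma_\pi^2 \\ \sigma_\pi^2 & \sigma_\pi^2 & \sigma_e^2+\sigma_\pi^2 & \sigma_\pi^2 \\ \sigma_\pi^2 & \sigma_\pi^2 & \sigma_\pi^2 & \sigma_e^2+\sigma_\pi^2 \end{pmatrix}$
so every pair of trials shares the same covariance $\sigma_\pi^2$ and therefore the same correlation $\sigma_\pi^2/(\sigma_\pi^2+\sigma_e^2)$.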
We can also get around the sphericity assumption using the MANOVA output from Proc GLM, but that too has its problems. Both standard univariate GLM and MANOVA GLM will insist on complete data. If a
subject is missing even one piece of data, that subject is discarded. That is a problem because with a few missing observations we can lose a great deal of data and degrees of freedom.
Proc Mixed with repeated is different. Instead of using a least squares solution, which requires complete data, it uses a maximum likelihood solution, which does not make that assumption. (We will
actually use a Restricted Maximum Likelihood (REML) solution.) When we have balanced data both least squares and REML will produce the same solution if we specify a covariance matrix with compound
symmetry. But even with balanced data if we specify some other covariance matrix the solutions will differ. At first I am going to force sphericity by adding type = cs (which stands for compound
symmetry) to the repeated statement. I will later relax that structure.
The first analysis below uses exactly the same data as for Proc GLM, though they are entered differently. Here data are entered in what is called "long form," as opposed to the "wide form" used for
Proc GLM. This means that instead of having one line of data for each subject, we have one line of data for each observation. So with four measurement times we will have four lines of data for that subject.
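For readers who would rather do the same wide-to-long conversion in R than in SAS, here is a small sketch (it is not Howell's code; it assumes WicksellWide.dat has no header row and the six columns named in the SAS input statement above):

# Read the wide file and stack it so there is one row per subject per measurement time.
wide <- read.table("WicksellWide.dat",
                   col.names = c("group", "subj", "time1", "time2", "time3", "time4"))
long <- reshape(wide, direction = "long",
                varying = c("time1", "time2", "time3", "time4"), v.names = "dv",
                timevar = "time", times = c(0, 1, 3, 6), idvar = "subj")
long <- long[order(long$subj, long$time), ]
head(long)   # columns: group, subj, time, dv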
Because we have a completely balanced design (equal sample sizes and no missing data) and because the time intervals are constant, the results of this analysis will come out exactly the same as those
for Proc GLM so long as I specify type = cs. The data follow. I have used "card" input rather than reading a file just to give an alternative approach.
/*This will put output in C:\Documents and Settings\David Howell until I learn how to change it */
ods listing;
ods html;
data wicklong;
input subj time group dv;
timecont = time;
│1 0 1.00 296.00 1 1 1.00 175.00 1 3 1.00 187.00 1 6 1.00 192.00│9 0 1.00 220.00 9 1 1.00 70.00 9 3 1.00 95.00 9 6 1.00 87.00 10 │17 0 2.00 263.00 17 1 2.00 94.00 17 3 2.00 141.00 17 6 2.00 │
│2 0 1.00 376.00 2 1 1.00 329.00 2 3 1.00 236.00 2 6 1.00 76.00 │0 1.00 375.00 10 1 1.00 335.00 10 3 1.00 334.00 10 6 1.00 79.00 │142.00 18 0 2.00 138.00 18 1 2.00 38.00 18 3 2.00 16.00 18 6 │
│3 0 1.00 309.00 3 1 1.00 238.00 3 3 1.00 150.00 3 6 1.00 123.00│11 0 1.00 310.00 11 1 1.00 300.00 11 3 1.00 253.00 11 6 1.00 │2.00 95.00 19 0 2.00 329.00 19 1 2.00 62.00 19 3 2.00 62.00 19 6│
│4 0 1.00 222.00 4 1 1.00 60.00 4 3 1.00 82.00 4 6 1.00 85.00 5 │140.00 12 0 1.00 310.00 12 1 1.00 245.00 12 3 1.00 200.00 12 6 │2.00 6.00 20 0 2.00 292.00 20 1 2.00 139.00 20 3 2.00 104.00 20 │
│0 1.00 150.00 5 1 1.00 271.00 5 3 1.00 250.00 5 6 1.00 216.00 6│1.00 120.00 13 0 2.00 282.00 13 1 2.00 186.00 13 3 2.00 225.00 │6 2.00 184.00 21 0 2.00 275.00 21 1 2.00 94.00 21 3 2.00 135.00 │
│0 1.00 316.00 6 1 1.00 291.00 6 3 1.00 238.00 6 6 1.00 144.00 7│13 6 2.00 134.00 14 0 2.00 317.00 14 1 2.00 31.00 14 3 2.00 │21 6 2.00 137.00 22 0 2.00 150.00 22 1 2.00 48.00 22 3 2.00 │
│0 1.00 321.00 7 1 1.00 364.00 7 3 1.00 270.00 7 6 1.00 308.00 8│85.00 14 6 2.00 120.00 15 0 2.00 362.00 15 1 2.00 104.00 15 3 │20.00 22 6 2.00 85.00 23 0 2.00 319.00 23 1 2.00 68.00 23 3 2.00│
│0 1.00 447.00 8 1 1.00 402.00 8 3 1.00 294.00 8 6 1.00 216.00 │2.00 144.00 15 6 2.00 114.00 16 0 2.00 338.00 16 1 2.00 132.00 │67.00 23 6 2.00 12.00 24 0 2.00 300.00 24 1 2.00 138.00 24 3 │
│ │16 3 2.00 91.00 16 6 2.00 77.00 │2.00 114.00 24 6 2.00 174.00 │
;
/* The following lines plot the data */
Symbol1 I = join v = none r = 12;
Proc gplot data = wicklong;
Plot dv*time = subj/ nolegend;
By group;
Run;
/* This is the main proc mixed procedure. */
proc mixed data = wicklong;
class group subj time;
model dv = group time group*time;
repeated /subject = subj type = cs rcorr;
run;
I have put the data in three columns to save space, but the real syntax statements would have 48 lines of data.
The first set of commands plots the results of each individual subject broken down by groups. Earlier we saw the group means over time. Now we can see how each of the subjects stands relative the
means of his or her group. In the ideal world the lines would start out at the same point on the Y axis (i.e. have a common intercept) and move in parallel (i.e. have a common slope). That isn't
quite what happens here, but whether those are chance variations or systematic ones is something that we will look at later. We can see in the Control group that a few subjects decline linearly over
time and a few other subjects, especially those with lower scores decline at first and then increase during follow-up.
Plots (Group 1 = Control, Group 2 = Treatment)
For Proc Mixed we need to specify that group, time, and subject are class variables. (See the syntax above.) This will cause SAS to treat them as factors (nominal or ordinal variables) instead of as
continuous variables. The model statement tells the program that we want to treat group and time as a factorial design and generate the main effects and the interaction. (I have not appended a "/
solution" to the end of the model statement because I don't want to talk about the parameter estimates of treatment effects at this point, but most people would put it there.) The repeated command
tells SAS to treat this as a repeated measures design, that the subject variable is named "subj", and that we want to treat the covariance matrix as exhibiting compound symmetry, even though in the
data that I created we don't appear to come close to meeting that assumption. The specification "rcorr" will ask for the estimated correlation matrix. (we could use "r" instead of "rcorr," but that
would produce a covariance matrix, which is harder to interpret.)
The results of this analysis follow, and you can see that they very much resemble our analysis of variance approach using Proc GLM.
Proc Mixed with complete data.
The Mixed Procedure
Estimated R Correlation Matrix for subject 1
Row Col1 Col2 Col3 Col4
1 1.0000 0.4791 0.4791 0.4791
2 0.4791 1.0000 0.4791 0.4791
3 0.4791 0.4791 1.0000 0.4791
4 0.4791 0.4791 0.4791 1.0000
Covariance Parameter Estimates
Cov Parm Subject Estimate
CS subject 2539.36
Residual 2760.62
Fit Statistics
-2 Res Log Likelihood 1000.8
AIC (smaller is better) 1004.8
AICC (smaller is better) 1004.9
BIC (smaller is better) 1007.2
Null Model Likelihood Ratio Test
DF Chi-Square Pr > ChiSq
1 23.45 <.0001
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
group 1 22 13.71 0.0012
time 3 66 45.14 <.0001
group*time 3 66 9.01 <.0001
On this printout we see the estimated correlations between times. These are not the actual correlations, which appear below, but the estimates that come from an assumption of compound symmetry. That
assumption says that the correlations have to be equal, and what we have here are basically average correlations. The actual correlations, averaged over the two groups using Fisher's transformation, are:
Estimated R Correlation Matrix for subj 1
Row Col1 Col2 Col3 Col4
1 1.0000 0.5695 0.5351 -0.01683
2 0.5695 1.0000 0.8612 0.4456
3 0.5351 0.8612 1.0000 0.4202
4 -0.01683 0.4456 0.4202 1.0000
Notice that they are quite different from the ones assuming compound symmetry, and that they don't look at all as if they fit that assumption. We will deal with this problem later. (I don't have a
clue why the heading refers to "subject 1." It just does!)
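"Averaged over the two groups using Fisher's transformation" simply means converting each group's correlation to Fisher's z, averaging the z values, and transforming back. A two-line illustration (added here; the two correlations passed in are made-up values, not numbers from the output):

fisher_avg <- function(r1, r2) tanh(mean(atanh(c(r1, r2))))   # r -> z, average, z -> r
fisher_avg(0.62, 0.51)    # e.g., one trial pair's correlation in each of the two groups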
There are also two covariance parameters. Remember that there are two sources of random effects in this design. There is our normal σ^2, which reflects random noise. In addition we are treating our
subjects as a random sample, and there is thus random variance among subjects. Here I get to play a bit with expected mean squares. You may recall that the expected mean squares for the error term
for the between-subject effect is E(MS[subj w/in grps]) = σ[e]^2 + aσ[π]^2 and our estimate of σ[e]^2, taken from the GLM analysis, is MS[residual], which is 2760.6218. The letter "a" stands for the
number of measurement times = 4, and MS[subj w/in grps] = 12918.0663, again from the GLM analysis. Therefore our estimate of σ[π]^2 = (12918.0663 - 2760.6218)/4 = 2539.36. These two estimates are our
random part of the model and are given in the section headed Covariance Parameter Estimates. I don't see a situation in this example in which we would wish to make use of these values, but in other
mixed designs they are useful.
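A quick numeric check of the last few sentences (added here; the two mean squares are copied from the Proc GLM output above):

ms_subj_within_grps <- 12918.0663    # between-subjects error MS from Proc GLM
ms_residual         <-  2760.6218    # within-subjects error MS from Proc GLM
(ms_subj_within_grps - ms_residual) / 4    # 2539.36, the CS covariance parameter above
2539.36 / (2539.36 + 2760.62)              # 0.4791, the constant correlation in the estimated R matrix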
You may notice one odd thing in the data. Instead of entering time as 1,2, 3, & 4, I entered it as 0, 1, 3, and 6. If this were a standard ANOVA it wouldn't make any difference, and in fact it
doesn't make any difference here, but when we come to looking at intercepts and slopes, it will be very important how we designated the 0 point. We could have centered time by subtracting the mean
time from each entry, which would mean that the intercept is at the mean time. I have chosen to make 0 represent the pretest, which seems a logical place to find the intercept. I will say more about
this later.
Missing Data
I have just spent considerable time discussing a balanced design where all of the data are available. Now I want to delete some of the data and redo the analysis. This is one of the areas where mixed
designs have an important advantage. I am going to delete scores pretty much at random, except that I want to show a pattern of different observations over time. It is easiest to see what I have done
if we look at data in the wide form, so the earlier table is presented below with '.' representing missing observations. It is important to notice that data are missing completely at random, not on
the basis of other observations.
Group Subj Time0 Time1 Time3 Time6
1 5 150 . 250 216
1 8 447 402 . 216
1 11 310 300 253 .
Group Subj Time0 Time1 Time3 Time6
2 15 362 104 . .
2 19 329 . . 6
2 20 292 139 104 .
2 23 319 68 67 .
If we treat this as a standard repeated measures analysis of variance, using Proc GLM, we have a problem. Of the 24 cases, only 17 of them have complete data. That means that our analysis will be
based on only those 17 cases. Aside from a serious loss of power, there are other problems with this state of affairs. Suppose that I suspected that people who are less depressed are less likely to
return for a follow-up session and thus have missing data. To build that into the example I could deliberately have deleted data from those who scored low on depression to begin with, though I kept
their pretest scores. (I did not actually do this here.) Further suppose that people low in depression respond to treatment (or non-treatment) in different ways from those who are more depressed. By
deleting whole cases I will have deleted low depression subjects and that will result in biased estimates of what we would have found if those original data points had not been missing. This is
certainly not a desirable result.
To expand slightly on the previous paragraph, if we use Proc GLM, or a comparable procedure in other software, we have to assume that data are missing completely at random, normally abbreviated
MCAR. (See Howell, 2008.) If the data are not missing completely at random, then the results would be biased. But if I can find a way to keep as much data as possible, and if people with low pretest
scores are missing at one or more measurement times, the pretest score will essentially serve as a covariate to predict missingness. This means that I only have to assume that data are missing at
random (MAR) rather than MCAR. That is a gain worth having. MCAR is quite rare in experimental research, but MAR is much more common. Using a mixed model approach requires only that data are MAR and
allows me to retain considerable degrees of freedom. (That argument has been challenged by Overall & Tonidandel (2007), but in this particular example the data actually are essentially MCAR. I will
come back to this issue later.)
Proc GLM results
The output from analyzing these data using Proc GLM follows. I give these results just for purposes of comparison, and I have omitted much of the printout.
Analysis of Wicksell missing data
The GLM Procedure
Repeated Measures Analysis of Variance
Tests of Hypotheses for Between Subjects Effects
Source DF Type III SS Mean Square F Value Pr > F
group 1 92917.9414 92917.9414 6.57 0.0216
Error 15 212237.4410 14149.1627
The GLM Procedure
Repeated Measures Analysis of Variance
Univariate Tests of Hypotheses for Within Subject Effects
Adj Pr > F
Source DF Type III SS Mean Square F Value Pr > F G - G H - F
time 3 238578.7081 79526.2360 32.42 <.0001 <.0001 <.0001
time*group 3 37996.4728 12665.4909 5.16 0.0037 0.0092 0.0048
Error(time) 45 110370.8507 2452.6856
Greenhouse-Geisser Epsilon 0.7386
Huynh-Feldt Epsilon 0.9300
Notice that we still have a group effect and a time effect, but the F for our interaction has been reduced by about half, and that is what we care most about. (In a previous version I made it drop to
nonsignificant, but I relented here.) Also notice the big drop in degrees of freedom due to the fact that we now only have 17 subjects.
Proc Mixed
Now we move to the results using Proc Mixed. I need to modify the data file by putting it in its long form and replacing missing observations with a period, but that means that I just altered 9
lines out of 96 (10% of the data) instead of 7 out of 24 (29%). The syntax would look exactly the same as it did earlier. The presence of "time" on the repeated statement is not necessary if I have
included missing data by using a period, but it is needed if I just remove the observation completely. (At least that is the way I read the manual.) The results follow, again with much of the
printout deleted:
Proc Mixed data = lib.wicklongMiss;
class group time subject;
model dv = group time group*time /solution;
repeated time /subject = subject type = cs rcorr;
Estimated R Correlation Matrix for subject 1
Row Col1 Col2 Col3 Col4
1 1.0000 0.4640 0.4640 0.4640
2 0.4640 1.0000 0.4640 0.4640
3 0.4640 0.4640 1.0000 0.4640
4 0.4640 0.4640 0.4640 1.0000
Covariance Parameter Estimates
Cov Parm Subject Estimate
CS subject 2558.27
Residual 2954.66
Fit Statistics
-2 Res Log Likelihood 905.4
AIC (smaller is better) 909.4
AICC (smaller is better) 909.6
BIC (smaller is better) 911.8
Null Model Likelihood Ratio Test
DF Chi-Square Pr > ChiSq
1 19.21 <.0001
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
group 1 22 16.53 0.0005
time 3 57 32.45 <.0001
group*time 3 57 6.09 0.0011
This is a much nicer solution, not only because we have retained our significance levels, but because it is based on considerably more data and is not reliant on an assumption that the data are
missing completely at random. Again you see a fixed pattern of correlations between trials which results from my specifying compound symmetry for the analysis.
Other Covariance Structures
To this point all of our analyses have been based on an assumption of compound symmetry. (The assumption is really about sphericity, but the two are close and Proc Mixed refers to the solution as
type = cs.) But if you look at the correlation matrix given earlier it is quite clear that correlations further apart in time are distinctly lower than correlations close in time, which sounds like a
reasonable result. Also if you looked at Mauchly's test of sphericity (not shown) it is significant with p = .012. While this is not a great test, it should give us pause. We really ought to do
something about sphericity.
The first thing that we could do about sphericity is to specify that the model will make no assumptions whatsoever about the form of the covariance matrix. To do this I will ask for an unstructured
matrix. This is accomplished by including type = un in the repeated statement. This will force SAS to estimate all of the variances and covariances and use them in its solution. The problem with this
is that there are 10 things to be estimated and therefore we will lose degrees of freedom for our tests. But I will go ahead anyway. For this analysis I will continue to use the data set with missing
data, though I could have used the complete data had I wished. I will include a request that SAS use procedures due to Hotelling-Lawley-McKeon (hlm) and Hotelling-Lawley-Pillai-Samson (hlps) which do
a better job of estimating the degrees of freedom for our denominators. This is recommended for an unstructured model. The results are shown below.
Results using unstructured matrix
Proc Mixed data = lib.WicksellLongMiss;
class group time subject;
model dv = group time group*time ;
repeated time /subject = subject type = un hlm hlps rcorr;
Estimated R Correlation Matrix for subject 1
Row Col1 Col2 Col3 Col4
1 1.0000 0.5858 0.5424 -0.02740
2 0.5858 1.0000 0.8581 0.3896
3 0.5424 0.8581 1.0000 0.3971
4 -0.02740 0.3896 0.3971 1.0000
Covariance Parameter Estimates
Cov Parm Subject Estimate
UN(1,1) subject 5548.42
UN(2,1) subject 3686.76
UN(2,2) subject 7139.94
UN(3,1) subject 2877.46
UN(3,2) subject 5163.81
UN(3,3) subject 5072.14
UN(4,1) subject -129.84
UN(4,2) subject 2094.43
UN(4,3) subject 1799.21
UN(4,4) subject 4048.07
Fit Statistics
-2 Res Log Likelihood 883.7
AIC (smaller is better) 903.7
AICC (smaller is better) 906.9
BIC (smaller is better) 915.5
Same analysis but specifying an unstructured covariance matrix.
The Mixed Procedure
Null Model Likelihood Ratio Test
DF Chi-Square Pr > ChiSq
9 40.92 <.0001
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
group 1 22 17.95 0.0003
time 3 22 28.44 <.0001
group*time 3 22 6.80 0.0021
Type 3 Hotelling-Lawley-McKeon Statistics
Num Den
Effect DF DF F Value Pr > F
time 3 20 25.85 <.0001
group*time 3 20 6.18 0.0038
Same analysis but specifying an unstructured covariance matrix.
The Mixed Procedure
Type 3 Hotelling-Lawley-Pillai-Samson
Num Den
Effect DF DF F Value Pr > F
time 3 20 25.85 <.0001
group*time 3 20 6.18 0.0038
Notice the matrix of correlations. From pretest to the 6 month follow-up the correlation with pretest scores has dropped from .46 to -.03, and this pattern is consistent. That certainly doesn't
inspire confidence in compound symmetry.
The Fs have not changed very much from the previous model, but the degrees of freedom for within-subject terms have dropped from 57 to 22, which is a huge drop. That results from the fact that the
model had to make additional estimates of covariances. Finally, the hlm and hlps statistics further reduce the degrees of freedom to 20, but the effects are still significant. This would make me feel
pretty good about the study if the data had been real data.
But we have gone from one extreme to another. We estimated two covariance parameters when we used type = cs and 10 covariance parameters when we used type = un. (Put another way, with the
unstructured solution we threw up our hands and said to the program "You figure it out! We don't know what's going on.") There is a middle ground (in fact there are many). We probably do know at least
something about what those correlations should look like. Often we would expect correlations to decrease as the trials in question are further removed from each other. They might not decrease as fast
as our data suggest, but they should probably decrease. An autoregressive model, which we will see next, assumes that correlations between any two times depend on both the correlation at the previous
time and an error component. To put that differently, your score at time 3 depends on your score at time 2 and error. (This is a first order autoregression model. A second order model would have a
score depend on the two previous times plus error.) In effect an AR(1) model assumes that if the correlation between Time 1 and Time 2 is .51, then the correlation between Time 1 and Time 3 has an
expected value of .51^2 = .26 and between Time 1 and Time 4 has an expected value of .51^3 = .13. Our data look reasonably close to that. (Remember that these are expected values of r, not the
actual obtained correlations.) The solution using a first order autoregressive model follows.
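As a concrete illustration of the AR(1) pattern (added here, and using the hypothetical correlation of .51 from the paragraph above rather than a fitted value), the implied correlation between two measurement occasions is just the lag-1 correlation raised to the number of steps separating them:

rho <- 0.51
ar1 <- rho ^ abs(outer(1:4, 1:4, "-"))   # 4 x 4 AR(1) correlation matrix
round(ar1, 2)                            # off-diagonal bands: .51, .26, .13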
Title 'Same analysis but specifying an autoregressive covariance matrix.';
Proc Mixed data = lib.WicksellLongMiss;
class group subject time;
model dv = group time group*time;
repeated time /subject = subject type = AR(1) rcorr;
Same analysis but specifying an autoregressive covariance matrix.
Estimated R Correlation Matrix for subject 1
Row Col1 Col2 Col3 Col4
1 1.0000 0.6182 0.3822 0.2363
2 0.6182 1.0000 0.6182 0.3822
3 0.3822 0.6182 1.0000 0.6182
4 0.2363 0.3822 0.6182 1.0000
Covariance Parameter Estimates
Cov Parm Subject Estimate
AR(1) subject 0.6182
Residual 5350.25
Fit Statistics
-2 Res Log Likelihood 895.1
AIC (smaller is better) 899.1
AICC (smaller is better) 899.2
BIC (smaller is better) 901.4
Null Model Likelihood Ratio Test
DF Chi-Square Pr > ChiSq
1 29.55 <.0001
Type 3 Tests of Fixed Effects
Num Den
Effect DF DF F Value Pr > F
group 1 22 17.32 0.0004
time 3 57 30.82 <.0001
group*time 3 57 7.72 <.0002
Notice the pattern of correlations. The .6182 as the correlation between adjacent trials is essentially an average of the correlations between adjacent trials in the unstructured case. The .3822 is
just .6182^2 and .2363 = .6182^3. Notice that tests on within-subject effects are back up to 57 df, which is certainly nice, and our results are still significant. This is a far nicer solution than
we had using Proc GLM.
Now we have three solutions, but which should we choose? One aid in choosing is to look at the "Fit Statistics" that are printed out with each solution. These statistics take into account both how
well the model fits the data and how many estimates it took to get there. Put loosely, we would probably be happier with a pretty good fit based on few parameter estimates than with a slightly better
fit based on many parameter estimates. If you look at the three models we have fit for the unbalanced design you will see that the AIC criterion for the type = cs model was 909.4, which dropped to
903.7 when we relaxed the assumption of compound symmetry. A smaller AIC value is better, so we should prefer the second model. Then when we aimed for a middle ground, by specifying the pattern or
correlations but not making SAS estimate 10 separate correlations, AIC dropped again to 899.1. That model fit better, and the fact that it did so by only estimating a variance and one correlation
leads us to prefer that model.
SPSS Mixed
You can accomplish the same thing using SPSS if you prefer. I will not discuss the syntax here, but the commands are given below. You can modify this syntax by replacing CS with UN or AR(1) if you
wish. (A word of warning. For some reason SPSS has changed the way it reads missing data. In the past you could just put in a period and SPSS knew that was missing. It no longer does so. You need to
put in something like -99 and tell it that -99 is the code for missing. While I'm at it, they changed something else. In the past it distinguished one value from another by looking for "white space."
Thus if there were a tab, a space, 3 spaces, a space and a tab, or whatever, it knew that it had read one variable and was moving on to the next. NOT ANYMORE! I can't imagine why they did it, but if
you put two spaces in your data file to keep numbers lined up vertically, it assumes that the you have skipped a variable. Very annoying. Just use one space or one tab between entries.)
MIXED dv BY Group Time
/CRITERIA = CIN(95) MXITER(100) MXSTEP(5) SCORING(1)
SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE)
PCONVERGE(0.000001, ABSOLUTE)
/FIXED = Group Time Group*Time | SSTYPE(3)
/METHOD = REML
/PRINT = DESCRIPTIVES SOLUTION
/REPEATED = Time | SUBJECT(Subject) COVTYPE(CS)
/EMMEANS = TABLES(Group)
/EMMEANS = TABLES(Time)
/EMMEANS = TABLES(Group*Time) .
Analyses Using R
The following commands will run the same analysis using the R program (or using S-PLUS). The results will not be exactly the same, but they are very close. Lines beginning with # are comments.
# Analysis of Wicksell Data with missing values
library(lme4)                                    # lmer() lives here
data <- read.table(file.choose(), header = T)    # WicksellLongMiss.dat
# (if missing values are coded as "." in the file, add na.strings = "." to read.table)
data$Time    <- factor(data$Time)
data$Group   <- factor(data$Group)
data$Subject <- factor(data$Subject)
# Random intercept for each subject (compare with the SAS compound-symmetry solution).
model2 <- lmer(dv ~ Time + Group + Time:Group + (1|Subject), data = data)
# Now use the following model which allows for subjects to have different slopes over time.
# Compare this with the SAS model with an unstructured correlation matrix.
model3 <- lmer(dv ~ Time + Group + Time:Group + (Time|Subject), data = data)
# The following uses the nlme package, which was also written by Bates but
# has largely been replaced by the newer lme4 package used above.
library(nlme)                                    # lme() lives here
data <- read.table(file.choose(), header = T)    # WicksellLongMiss.dat again
data$Time    <- factor(data$Time)
data$Group   <- factor(data$Group)
data$Subject <- factor(data$Subject)
model1 <- lme(dv ~ Time + Group + Time:Group, random = ~1 | Subject,
              data = data, na.action = na.omit)
# This model is very close to the one produced by SAS using compound symmetry,
# when it comes to F values, and the log likelihood is the same. But the AIC
# and BIC are quite different. The StDev for the Random Effects are the same
# when squared. The coefficients are different because R uses the first level
# as the base, whereas SAS uses the last.
# http://www.bodowinter.com/tutorial/bw_LME_tutorial2.pdf
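Not part of Howell's original commands, but a natural follow-up to the AIC discussion earlier: once the models above have been fit, they can be compared in R much as the SAS models were.

anova(model2, model3)   # lme4 refits with ML and reports AIC, BIC, and a likelihood-ratio test
AIC(model1)             # AIC for the nlme random-intercept model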
Where do we go now?
This document is sufficiently long that I am going to create a new one to handle this next question. In that document we will look at other ways of doing much the same thing. The reason why I move to
alternative models, even though they do the same thing, is that the logic of those models will make it easier for you to move to what are often called single-case designs or multiple baseline designs
when we have finished with what is much like a traditional analysis of variance approach to what we often think of as traditional analysis of variance designs.
For the second part go to http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/Mixed-Models-for-Repeated-Measures2.html
Guerin, L., and W.W. Stroup. 2000. A simulation study to evaluate PROC MIXED analysis of repeated measures data. p. 170-203. In Proc. 12th Kansas State Univ. Conf. on Applied Statistics in
Agriculture. Kansas State Univ., Manhattan.
Howell, D.C. (2008) The analysis of variance. In Osborne, J. I., Best practices in Quantitative Methods. Sage.
Little, R. C., Milliken, G. A., Stroup, W. W., Wolfinger, R. D., & Schabenberger, O. (2006). SAS for Mixed Models. Cary. NC. SAS Institute Inc.
Maxwell, S. E. & Delaney, H. D. (2004) Designing Experiments and Analyzing Data: A Model Comparison Approach, 2nd edition. Belmont, CA. Wadsworth.
Overall, J. E., Ahn, C., Shivakumar, C., & Kalburgi, Y. (1999). Problematic formulations of SAS Proc.Mixed models for repeated measurements. Journal of Biopharmaceutical Statistics, 9, 189-216.
Overall, J. E. & Tonidandel, S. (2002) Measuring change in controlled longitudinal studies. British Journal of Mathematical and Statistical Psychology, 55, 109-124.
Overall, J. E. & Tonidandel, S. (2007) Analysis of data from a controlled repeated measurements design with baseline-dependent dropouts. Methodology, 3, 58-66.
Pinheiro, J. C. & Bates, D. M. (2000). Mixed-effects Models in S and S-Plus. Springer.
Some good references on the web are:
The following is a good reference for people with questions about using SAS in general.
Downloadable Papers on Multilevel Models
Good coverage of alternative covariance structures
The main reference for SAS Proc Mixed is
Little, R.C., Milliken, G.A., Stroup, W.W., Wolfinger, R.D., & Schabenberger, O. (2006) SAS for mixed models, Cary, NC SAS Institute Inc.
See also
Maxwell, S. E. & Delaney, H. D. (2004). Designing Experiments and Analyzing Data (2nd edition). Lawrence Erlbaum Associates.
The classic reference for R is Pinheiro, J. C. & Bates, D. M. (2000) Mixed-effects models in S and S-Plus. New York: Springer.
Return to Dave Howell's Statistical Home Page
Last revised 12/15/2012 | {"url":"http://www.uvm.edu/~dhowell/StatPages/More_Stuff/Mixed-Models-Repeated/Mixed-Models-for-Repeated-Measures1.html","timestamp":"2014-04-19T07:40:26Z","content_type":null,"content_length":"61866","record_id":"<urn:uuid:9fcb3eac-f112-4374-aab4-e2eedbb184f0>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Oakland, CA 94618
Effective Tutor for Math, Computer Science, and German
...I have been teaching math and computer science as a tutor and lecturer in multiple countries at high schools, boarding schools, and universities. I am proficient in teaching math at any high
school, college, or university level.
Algebra 1 is the first math class...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Emeryville_algebra_tutors.aspx","timestamp":"2014-04-19T22:27:40Z","content_type":null,"content_length":"59088","record_id":"<urn:uuid:cd426cef-0a8d-45a6-a822-d13e70077510>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
First order logic : definitions
What is the difference between terms and atoms? I have read lots of different definitions, and then, when I think I've understood, I find an example where both are used without any distinction (for
orderings, for instance).
I'm not a logician, so I'm unsure of how standardized such terminology is.
I'll give your thread a bump by noting that the current Wikipedia article on first order logic does not define the term "atom". It does define terms where the adjective "atomic" appears, such as
"atomic formulas". The fact the adjective "atomic" appears in terminology doesn't require that the definition of the noun "atom" must be established.
Perhaps if you quote or cite some of the material that confuses you, another forum member can sort it out. | {"url":"http://www.physicsforums.com/showthread.php?p=4203023","timestamp":"2014-04-17T15:40:06Z","content_type":null,"content_length":"23912","record_id":"<urn:uuid:27ea4ca4-1aaf-4bff-9c44-d0c5d6d97fd5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Eigenvalue degeneracies of parameterized Hermitian matrices
Replies: 1 Last Post: Oct 10, 2011 4:40 PM
Eigenvalue degeneracies of parameterized Hermitian matrices
Posted: Oct 10, 2011 9:58 AM
If H(x,y) is an NxN parameterized Hermitian matrix, where x,y are real parameters,
what is the maximum number of points in the xy-plane at which the two smallest
eigenvalues are degenerate, if m (<N) entries of H(x,y) are
(a) linear in x and y?
(b) quadratic in x and y?
Problem from a job interview
September 14th 2009, 06:50 PM #1
Junior Member
Nov 2008
Problem from a job interview
3 girls and 2 boys are in a room. One more child is added, then a nurse picks a kid at random and it is a boy.
What is the probability that the added child is a boy?
This problem is either too hard or too easy.
Can you tell me what is the correct answer?
Thank you.
I think it is 1/2, because the second part does not affect the first part.
It's fairly easy.
Assuming it is equally likely for the added child to be a boy or to be a girl.
Probability that nurse adds a boy = 1/2
Probability that nurse adds a girl = 1/2
Since we haven't been told which child was added (that is why we are assigning probabilities above), we have to account for both the possibilities of nurse adding a boy and nurse adding a girl.
By basic application of Bayes Theorem and the law of total probability, we can find that
Probability of selecting a boy = (Prob of selecting a boy given that added child was a girl)*(Probability of adding a girl child) + (Prob of selecting a boy given that added child was a boy)*
(Probability of adding a boy child)
After having calculated the probability of selecting a boy, we proceed to calculating the probability of added child being a boy given picked child is boy.
Note that the situation is the inverse of the one in which we calculated the probability of selecting a boy.
Using the conditional probability formula,
Probability of the boy being the added child given the picked child was a boy = Probability of the boy being the added child and the boy being the picked child / Probability of picking a boy
The Probability of the boy being the added child and the boy being the picked child can be found by multiplying the probability of picking the boy given the added child was a boy and the
probability of adding a boy child.
I hope the method is clear!
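Working the numbers through exactly as the method above describes (this computation is added here and was not part of the original thread):

p_add_boy  <- 1/2
p_add_girl <- 1/2
p_pick_boy_given_added_boy  <- 3/6    # the room would then hold 3 boys and 3 girls
p_pick_boy_given_added_girl <- 2/6    # the room would then hold 2 boys and 4 girls
p_pick_boy <- p_add_boy * p_pick_boy_given_added_boy +
              p_add_girl * p_pick_boy_given_added_girl         # = 5/12
(p_add_boy * p_pick_boy_given_added_boy) / p_pick_boy          # = 3/5 = 0.6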
Approximate Gaussian Integration using Expectation Propagation
John Cunningham, Philipp Hennig and Simon Lacoste-Julien
arXiv 2011.
While Gaussian probability densities are omnipresent in applied mathematics, Gaussian cumulative probabilities are hard to calculate in any but the univariate case. We offer here an empirical study
of the utility of Expectation Propagation (EP) as an approximate integration method for this problem. For rectangular integration regions, the approximation is highly accurate. We also extend the
derivations to the more general case of polyhedral integration regions. However, we find that in this polyhedral case, EP's answer, though often accurate, can be almost arbitrarily wrong. These
unexpected results elucidate an interesting and non-obvious feature of EP not yet studied in detail, both for the problem of Gaussian probabilities and for EP more generally. | {"url":"http://eprints.pascal-network.org/archive/00008872/","timestamp":"2014-04-18T21:06:23Z","content_type":null,"content_length":"6855","record_id":"<urn:uuid:43b504c9-af54-4e28-a7ec-c8b25656ec31>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two questions (one just checking formula)
October 4th 2006, 11:06 AM
Two questions (one just checking formula)
To solve this question:
Suppose you deposit $100 in an account that earns 0.5% each month. You make no withdrawals from the account and deposit no more money into the account. How much money will you have in the account
after 4 years?
Do I use A = P(1 + rt) to solve this question? If so, is my answer right:
A = P(1 + rt)
A = 100[1 + (.5 x 48)]
A = 100[1 + 24]
A = 100[25]
A = 2500
Second question: (I don't even know where to begin & I have to show my work!)
A projectile is fired directly upward with a muzzle velocity of 860 feet per second from a height of 7 feet above the ground.
a. Determine a function for the height of the projectile "t" seconds after it's released.
b. How long does it take the projectile to reach a height of 100 feet on its way up?
c. How long is the projectile in the air?
October 4th 2006, 12:10 PM
To solve this question:
Suppose you deposit $100 in an account that earns 0.5% each month. You make no withdrawals from the account and deposit no more money into the account. How much money will you have in the account
after 4 years?
Do I use A = P(1 + rt) to solve this question? If so, is my answer right:
A = P(1 + rt)
A = 100[1 + (.5 x 48)]
A = 100[1 + 24]
A = 100[25]
A = 2500
No the formula is given below.
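The formula the reply refers to did not survive in this copy of the thread; presumably it is the monthly compound-interest formula A = P(1 + r)^n, with r = 0.005 (0.5% per month) and n = 48 months. A quick check of the value that gives:

P <- 100; r <- 0.005; n <- 48
P * (1 + r)^n    # about 127.05, rather than the 2500 obtained from the simple-interest formula above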
October 4th 2006, 12:31 PM
Second question: (I don't even know where to begin & I have to show my work!)
A projectile is fired directly upward with a muzzle velocity of 860 feet per second from a height of 7 feet above the ground.
a. Determine a function for the height of the projectile "t" seconds after it's released.
b. How long does it take the projectile to reach a height of 100 feet on its way up?
c. How long is the projectile in the air?
The standard function is a quadratic: s(t) = -(1/2)g t^2 + v t + h, where
v is initial velocity which is 860 feet per second which is going to be positive since it is going up.
h is initial height which is 7 feet.
g is the deceleration from gravity, which is 32 feet per second per second.
The object reaches 100 feet when s(t)=100 for some value t.
Important: when you solve this using the quadratic formula you will get two values for t. Select the smaller one because the problem says "...going up...".
That is equivalent to asking for the amount of time until it comes back down.
s(t)=0 for some positive t value.
Apply the quadratic formula.
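A worked check of parts (b) and (c), added here for illustration (it assumes the height function s(t) = -16t^2 + 860t + 7 that follows from the definitions above):

s <- function(t) -16*t^2 + 860*t + 7
# (b) reach 100 ft on the way up: the smaller positive root of s(t) = 100
polyroot(c(7 - 100, 860, -16))   # roughly 0.11 s going up (the other root, about 53.6 s, is on the way down)
# (c) total time in the air: the positive root of s(t) = 0
polyroot(c(7, 860, -16))         # roughly 53.8 s (the tiny negative root is not physical)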
October 4th 2006, 03:39 PM
What he said, except with 870 turned into 860. :) | {"url":"http://mathhelpforum.com/math-topics/6128-two-questions-one-just-checking-formula-print.html","timestamp":"2014-04-19T17:41:01Z","content_type":null,"content_length":"9029","record_id":"<urn:uuid:b23b3a71-f5cc-44a5-9959-1eedff41b204>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of 0 bit in ASCII text files
N=8A+B, B<8, means there are A whole bytes and B odd bits, so either A or A+1 MSBs included.
See what gaps you can fill in here:
Prob that this includes A+1 MSB's = ....?; if A+1 MSBs, prob that all N bits are 0 is ...?
Prob that this includes A MSB's = ....?; if A MSBs, prob that all N bits are 0 is ...?
Adding this up, prob that all N bits are 0 is ...?
if A+1 MSBs, prob that all N bits are 0 is ...?
That is not easy for me Sir.
Analysis: Since the N bits are drawn consecutively from ASCII, there is only 1 character (out of 2^256) whose bits are all 0. So only 1 MSB. Thus, the Prob = 1/2^256.
Others seem to follow the same conception, or have I misunderstood your point?
1) Pr[0] in ASCII (assume each character appears with same ratio) equals = 1/8+1/2=5/8. Is it OK?
2) Successive 7 bits are drawn at random from ASCII bits (e.g. no bias of character distribution), what is Pr[0] in the 7 bits?
Successive 4 bits are drawn (same condition with above), what is Pr[0] in the 4 bits?
So, say, Successive N bits are drawn (same condition) , what is Pr[0] in the N bits?
Do you think it is same case? remember you explained that N/8*2^-N + (1-N/8)*2^-(N+1) . Does the formula apply to the case of 2).
Shed some light on this, please.
Binomial expansion comparison with legendre polynomial
April 15th 2012, 08:12 PM
I've been working on this question which asks to show that
$P_{n}(x)=\frac{1}{2^{n}\,n!}\frac{d^{n}}{dx^{n}}\left(x^{2}-1\right)^{n}$
So first taking the n derivatives of the binomial expansions of (x^2-1)^n
$\frac{d^{n}}{dx^{n}}\ldots=\sum_{k=0}^{n}(-1)^{k}\frac{n!}{k!\,(n-k)!}(2n-2k)(2n-2k-1)\cdots(2n-2k-n+1)\,x^{2n-2k}$
and comparing it with
$=\frac{1}{2^{n}}\sum_{m=0}^{n/2}(-1)^{m}\frac{(2n-2m)!}{m!\,(n-m)!\,(n-2m)!}\,x^{n-2m}$
I'm having trouble with the final part,
It's clear that there's a factor of 1/n!2^n difference between them but also
the Pn(x) series has m = 0...n/2 and powers up to x^n, whereas the n'th derivative series has k = 0...n and powers up to x^{2n}.
How can you rewrite one in terms of the other so they both have the same sum limits?
I've tried setting k = 2s in the n'th derivative series and a bunch of other similar changes, but none will change the n'th powers of x.
The reason I noticed this was because the last terms of the series aren't the same:
the first series has (-n)! on the bottom of its last term, which would mean 1/infinity, right?
and the second
$\frac{1}{2^{n}}\,(-1)^{n/2}\,\frac{n!}{n!\,(\frac{n}{2})!\,0!}\,x^{0}$
Have I made a mistake early on or is there a clever way to combine the two series? | {"url":"http://mathhelpforum.com/differential-equations/197356-binomial-expansion-comparison-legendre-polynomial-print.html","timestamp":"2014-04-21T09:10:31Z","content_type":null,"content_length":"7144","record_id":"<urn:uuid:088b60c1-8ba9-41c2-9541-89f9237f5857>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00627-ip-10-147-4-33.ec2.internal.warc.gz"} |
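One standard way to reconcile the two expressions (this step is not in the original post): differentiating $x^{2n-2k}$ a total of $n$ times gives
$\frac{d^{n}}{dx^{n}}x^{2n-2k}=\frac{(2n-2k)!}{(n-2k)!}\,x^{n-2k}$ when $2n-2k\ge n$, and gives $0$ when $2n-2k<n$, i.e. when $k>n/2$.
So the terms with $k>\lfloor n/2\rfloor$ vanish, the surviving powers are $x^{n-2k}$ rather than $x^{2n-2k}$, and relabelling $k$ as $m$ gives
$\frac{1}{2^{n}n!}\frac{d^{n}}{dx^{n}}\left(x^{2}-1\right)^{n}=\frac{1}{2^{n}}\sum_{m=0}^{\lfloor n/2\rfloor}(-1)^{m}\frac{(2n-2m)!}{m!\,(n-m)!\,(n-2m)!}\,x^{n-2m}$,
which is the quoted series for $P_{n}(x)$; the worrying $(-n)!$ never appears because those terms have already dropped out.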
The inverse fallacy: an account of deviations from Bayes's theorem and the additivity principle
Villejoubert, Gaelle and Mandel, David R. (2002) The inverse fallacy: an account of deviations from Bayes's theorem and the additivity principle. Memory & Cognition, 30(2), pp. 171-178. ISSN (print)
In judging posterior probabilities, people often answer with the inverse conditional probability, a tendency named the inverse fallacy. Participants (N = 45) were given a series of probability
problems that entailed estimating both p(H|D) and p(~H|D). The findings revealed that deviations of participants' estimates from Bayesian calculations and from the additivity principle could be
predicted by the corresponding deviations of the inverse probabilities from these relevant normative benchmarks. Methodological and theoretical implications of the distinction between inverse fallacy
and base-rate neglect and the generalization of the study of additivity to conditional probabilities are discussed.
Abstract: This thesis has three interrelated goals:
The main goal is an analysis of Czech clitics, units of grammar on the borderline between morphology and syntax with rather peculiar ordering properties both relative to the whole clause and to each
other. We examine the actual set of clitics, their rather rigid ordering properties, and finally the properties of so-called clitic climbing. The analysis evaluates previous research, but it also
provides new insights, especially in the position of the clitic cluster and in the constraints on clitic climbing. We show that many of the constraints regarding position of the clitic cluster
suggested in previous research do not hold. We also argue that cases when clitics do not follow the first constituent are in fact not exceptions in clitic placement but instead unusual frontings.
The second goal is the development of a framework within Higher Order Grammar (HOG) supporting a transparent and modular treatment of word order. Unlike previous versions of HOG, we work with signs
(containing phonological, syntactic and potentially other information) as actual objects of the grammar. Apart from that, we build on the simplicity and elegance of the pre-formal part of the
linearization framework within Head-driven Phrase Structure Grammar.
Finally, the third objective is to test the result of the second goal by applying it to the results of the first goal.
Codimension zero immersions
Given an immersion of the n-1-sphere into a (closed) n-manifold, when does it extend to an immersion of the n-disk?
Remark: If the sphere had dimension k smaller than n-1, then such an immersion would exist if and only if the corresponding map from the k-sphere to the Stiefel manifold is 0-homotopic. This is the
Hirsch-Smale Theorem and in fact an example of an h-principle. However the case k=n-1 is exactly the exceptional case which does NOT obey an h-principle. Easy examples (Figure 8.1. in the book by
Eliashberg-Mishachev) show that there exist immersions of the circle in the plane which have a formal extension but not a genuine extension to the 2-disk. So, is there anything known about sufficient
conditions for extendability?
gt.geometric-topology dg.differential-geometry ds.dynamical-systems at.algebraic-topology
4 I'm probably ignorant, but can you say why this question is tagged ds.dynamical-systems? – Willie Wong Jun 16 '11 at 16:02
Related: mathoverflow.net/questions/57215/… , mathoverflow.net/questions/43743/… and the work of Koschorke on singularities of bundle morphisms. But I think the question is a subtle one. – Mark
Grant Jun 16 '11 at 16:14
@MG: Wasn't Koschorke's work about codimension one immersions, which then reduces to the study of vector bundle monomorphisms? – user15817 Jun 16 '11 at 16:43
@unknown (google): Koschorke's work is quite general, but I'm not sure it applies to this problem. See Chapter 1.3 of "Vector fields and other vector bundle morphisms—a singularity approach",
where a complete obstruction to a map being homotopic to an immersion is constructed. It is an element in a certain normal bordism group. – Mark Grant Jun 17 '11 at 16:02
3 Answers
This is subtle, even for $n=2$. In this case, clearly the problem reduces to $S^2$ or $\mathbb{R}^2$ since every surface has one of these as a universal cover. Samuel Blank found a
criterion to determine if a curve in $\mathbb{R}^2$ bounds an immersed disk. An exposition has been given by Valentin Poenaru, and the criterion has been extended to $S^2$ by Frisch.
There is also a bit of discussion in these papers about the higher dimensional problem.
Smale-Hirsch is not just a theorem about existence of immersions. It's a theorem about the homotopy-type of the space of all immersions.
Given an immersion $$S^{n-1} \to \mathbb R^n$$
you get a bundle monomorphism
$$TS^{n-1} \to \mathbb R^n$$
There's a cute trick that shows the space of all such bundle monomorphisms has the homotopy-type of $Maps(S^{n-1}, SO_n)$. Here's how it goes. Given a bundle monomorphism $f : TS^{n-1} \to \mathbb R^n$, the associated map $G(f) : S^{n-1} \to SO_n$ is defined as follows. Given $p \in S^{n-1}$ and $v \in \mathbb R^n$, let $v_\perp \in \mathbb R$ and $v_{||} \in T_pS^{n-1}$ be the orthogonal component and tangent-space orthogonal projection of $v$, and set $G(f)(p)(v) = f(p)(v_{||}) + v_{\perp}f(p)^+$, where $f(p)^+$ is the unit vector normal to $f(p)(T_pS^{n-1})$ chosen so that $G(f) \in SO_n$, i.e. that it is not orientation-reversing. You can reverse this construction as well, to go from maps $S^{n-1} \to SO_n$ to bundle immersions $TS^{n-1} \to \mathbb R^n$.
It's basically by design, a homotopy of $G(f)$ can be re-interpreted as a $1$-parameter family of immersions $S^{n-1} \to \mathbb R^n$ equipped with a normal vector field.
Perhaps you can't extend this 1-parameter family to an immersion $S^{n-1} \times [0,1] \to \mathbb R^n$. Is that the key issue?
I'm not so sure whether this works. The problem might be that a homotopy of immersions of S does not necessarily yield an immersion of Sx[0,1]. One needs that the derivative in [0,1]
-direction is linearly independent of the derivatives in the S-direction. – user15817 Jun 16 '11 at 23:46
There's a key difference. A map $X \to V_{n,j}$ may not lift to a map $X \to V_{n,j+1}$. But a map $X \to V_{n,n-1} \equiv SO_n$ always lifts to a map $X \to V_{n,n} \equiv O_n$. Let me
edit my answer a bit to make the key step less "insiderish". – Ryan Budney Jun 17 '11 at 0:23
I don't think constructing the 1-parameter family of immersions with a normal vector field is the problem. But I changed my answer. It's only a partial response to your question, not
really everything you were looking for. What do you mean by "formal extension" -- is that the 1-parameter family of immersions with normal vector field? – Ryan Budney Jun 17 '11 at 1:13
Smale-Hirsch describes the homotopy types of these spaces of immersions, as Ryan says, so it gives a test for whether a given immersion of $S^{n-1}$ in an $n$-manifold is homotopic through immersions to one that extends to an immersion of $D^n$. But it does not answer the question of whether a given immersion can be so extended. You might think that the restriction map from the space of immersions of $D^n$ to the space of immersions of $S^{n-1}$ is a fibration, but it's not. It is if the disk has positive codimension, and this is a key step in proving Smale-Hirsch. But it's false in codim $0$. – Tom Goodwillie Jun 17 '11 at 2:07
@ RB : "formal immersion" means a vector bundle monomorphism TM--->TN which does not necessarily come from an immersion M--->N. If dim(M)<dim(N), then every formal immersion is homotopic
to an immersion, but for dim(M)=dim(N) this is not always true. – user15817 Jun 18 '11 at 11:26
Christian Pappas gave a Morse-theoretic method for constructing all extensions of a codimension 1 immersion $f:\partial N\to W$ to an immersion $F:N\to W$ with $F|_{\partial N}=f$.
Multidimensional array, extracting the Nth dimension
Hi all
I have an array of doubles A which has size [6,8,88].
I want to plot the vector A(1,1,:), but I get the following error:
Error using plot. Data may not have more than 2 Dimensions.
I thought that the operation A(1,1,:) would return a simple vector of length 88, as suggested by the tutorial page here,
but it actually has size [1,1,88], which is causing the error.
I've thought of using a loop to extract each element, but this doesn't suit my purpose very well, as I would also like to plot A(i,j,:) for arbitrary i and j, and also perform other operations on these vectors.
Is there a simple way to access the vector A(i,j,:), and have it return a true vector?
I've reproduced the problem with a simpler example, which gives the same error.
A = zeros(3,3,4)
plot(1:4, A(1,1,:))
Sorry for simple question, I'm fairly new at this.
Many thanks for your help
3 Answers
Accepted answer
plot(1:4, squeeze(A(1,1,:)))
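% squeeze(A(1,1,:)) drops the singleton dimensions, turning the 1-by-1-by-4 slice into a 4-by-1 column vector, which plot accepts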
You might need to transpose the squeezed A, I can't remember :P
2 Comments
This solutions seems to suit my purpose best, many thanks for very fast reply. This forum is amazing!
A big part of getting a fast, useful reply was that you formulated your question clearly and specifically.
You could use the permute command:
A_plot = permute(A(1,1,:),[3 1 2]);
to shuffle the 3rd dimension to become the first dimension.
You could use the reshape command:
A_plot = reshape(A(1,1,:),88,1);
to create reshape to 88x1. More robust, in case the 3rd dimension is not always 88 in length, would be
A_plot = reshape(A(1,1,:),[],1);
which infers the length of the first dimension from the number of elements.
A simple analysis of thermodynamic properties for classical plasmas: I. theory
Penfold, R., Nordholm, S. and Nichols, N. (2005) A simple analysis of thermodynamic properties for classical plasmas: I. theory. Journal of Statistical Mechanics: Theory and Experiment, 2005. P06009.
ISSN 1742-5468
To link to this article DOI: 10.1088/1742-5468/2005/06/P06009
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation,
the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree
within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture,
comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical
methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not
uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
How do i deal with this separable differential equation?
June 19th 2010, 01:45 PM #1
How do i deal with this separable differential equation?
I need to solve this separable differential equation
$4y^3\frac{dy}{dx}=x^{-2}~~~~~,y(1) = 1$
We have $4y^3~dy = \frac{1}{x^2}dx$
$y^4 = -\frac{1}{x}$
But what is $\bigg(-\frac{1}{x}\bigg)^{1/4}$?
The correct answer is $y^4 = - \frac{1}{x} + C$. Then you substitute y(1) = 1 to get the value of C. Then you make y the subject (if that's what the question expects).
This is confused,
Don't you want the answer in terms of y?
You want y to be the subject usually. ie: y = f(x).
Find C using the way Mr F said: $1 = -\frac{1}{1}+C$
Once the value of C is known you can take the 4th root (obviously substitute in the value of C calculated in the previous step) : $y = \sqrt[4]{-\frac{1}{x}+C}$
As you said you cannot take the 4th root of a negative number so you will have domain restrictions in your answer since $-\frac{1}{x} +C \geq 0$
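Completing the arithmetic (not written out above): substituting $y(1)=1$ gives $1=-1+C$, so $C=2$ and $y=\sqrt[4]{2-\frac{1}{x}}$, which is defined where $2-\frac{1}{x}\ge 0$; on the branch containing the initial condition this means $x\ge\frac{1}{2}$.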
Incidence (geometry)
In geometry, the relations of incidence are those such as 'lies on' between points and lines (as in 'point P lies on line L'), and 'intersects' (as in 'line L1 intersects line L2', in three-dimensional space). That is, they are the binary relations describing how subsets meet. The propositions of incidence stated in terms of them are statements such as 'any two lines in a plane meet'. This is true in a projective plane, though not true in Euclidean space of two dimensions, where lines may be parallel.
Historically, projective geometry was introduced in order to make the propositions of incidence true (without exceptions such as are caused by parallels). From the point of view of synthetic geometry it was considered that projective geometry should be developed using such propositions as axioms. This turns out to make a major difference only for the projective plane (for reasons to do with Desargues' theorem).
The modern approach is to define projective space starting from linear algebra and homogeneous coordinates. Then the propositions of incidence are derived from the following basic result on vector spaces: given subspaces U and V of a vector space W, the dimension of their intersection is at least dim U + dim V - dim W. Bearing in mind that the dimension of the projective space P(W) associated to W is dim W - 1, but that we require an intersection of subspaces of dimension at least 1 to register in projective space (the subspace {0} being common to all subspaces of W), we get the basic proposition of incidence in this form: linear subspaces L and M of projective space P meet provided dim L + dim M is at least dim P.
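As a quick illustration (an added example, not from the original article): two distinct lines L and M in a projective plane P have dim L + dim M = 1 + 1 = 2, which is at least dim P = 2, so they must meet. In the underlying vector space W with dim W = 3, the corresponding two-dimensional subspaces U and V satisfy dim(U ∩ V) >= 2 + 2 - 3 = 1, and that one-dimensional intersection is exactly the common projective point.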
See also: Incidence matrix, incidence algebra, angle of incidence | {"url":"http://www.fact-index.com/i/in/incidence__geometry_.html","timestamp":"2014-04-21T04:31:57Z","content_type":null,"content_length":"5032","record_id":"<urn:uuid:ec37f180-f842-44d1-b915-85e54a0dcfca>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
* operator for vector-matrix multiplication
11-25-2013, 07:00 PM
* operator for vector-matrix multiplication
I've been having issues with transformations using the * operator between a mat4 and a vec4. I have isolated all matrix transformations in a 'mul' function and discovered that these two
implementations give different results (as seen from the result of the transformations on the scene):
Code :
vec4 mul1(mat4 matrix, vec4 vector)
return matrix * vector; // Doesn't do the expected transformation
vec4 mul2(mat4 matrix, vec4 vector)
return vector * transpose(matrix); // Does the expected transformation
It was my understanding that OpenGL should consider the vector as a column vector in the first case and as a row vector in the second case. Because of this, I was expecting those two functions to
be equivalent (apart from the row/column-ness of the resulting vector, but that should be ignored...?). I even confirmed this using Maple and didn't see anything to indicate that the * operator
behaves differently in the GLSL spec.
This is in a #version 140 shader using the latest drivers for my nvidia card.
Any idea what I am missing here? | {"url":"https://www.opengl.org/discussion_boards/printthread.php?t=183303&pp=10&page=1","timestamp":"2014-04-17T07:24:21Z","content_type":null,"content_length":"4592","record_id":"<urn:uuid:e11d612c-7216-431a-9c26-f1c6abde889e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
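For reference (this note is not part of the original post): per the GLSL spec, matrix * vector treats the vector as a column and vector * matrix treats it as a row, so the two functions above should produce the same four components, since in matrix notation (M v)^T = v^T M^T, which is exactly vector * transpose(matrix). When the two forms behave differently in practice, the usual suspect is how the matrix data reached the shader (for example, row-major data uploaded without setting the transpose flag), rather than the * operator itself.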
Understanding Products of Inertia
We covered this topic in Dynamics yesterday and the text does a less than average job on it. I am confused about the argument from symmetry. Does it mean that if an item is symmetrical with respect to all the coordinate planes, its product of inertia is 0, even when it is not bisected by the planes of the given axes? We looked at an example in class dealing with three pipes joined by 90 degree elbows. Two of them lay on the given x and y axes. The third, however, ran in the z direction below the x-y plane and was off the given z axis. We used the parallel axis theorem, which I understand. It's just that the pipe in the z direction had a zero product of inertia, and I thought that it would have some nonzero product of inertia since it was not centered on the point of rotation. Any help clearing this up would be greatly appreciated. Thanks!
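A short refresher that may help frame the question (standard dynamics background, not quoted from the text being discussed): the product of inertia about the x and y axes is I_xy = ∫ xy dm. It is zero whenever the mass is distributed symmetrically about the x-z plane or about the y-z plane, because contributions from +y and -y (or from +x and -x) cancel in pairs; the body does not need to be centered on the axis of rotation for this to happen. The parallel-axis theorem for products of inertia is I_xy = I_x'y' (centroidal) + m*xbar*ybar, where xbar and ybar locate the body's centroid relative to the given axes. So a straight pipe that is symmetric about its own centroidal planes has I_x'y' = 0, and its transfer term m*xbar*ybar is also zero whenever either centroidal coordinate xbar or ybar is zero; an offset purely in the z direction (or along a single one of the x or y directions) therefore still gives I_xy = 0, which may be why the third pipe contributed nothing even though it is not centered on the rotation point.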
CMU Theory Lunch
Theory Lunch is an informal seminar run by the Algorithms and Complexity Theory Group. The seminar meets on Wednesdays, noon till 1pm, in GHC 6115 (unless otherwise specified). It is open to faculty,
graduate and undergraduate students interested in theoretical aspects of Computer Science.
The meetings have various forms: talks on recently completed results, joint reading of an interesting paper, presentations of current work in progress, exciting open problems, etc. | {"url":"http://www.cs.cmu.edu/~theorylunch/Fall2009.html","timestamp":"2014-04-20T19:42:17Z","content_type":null,"content_length":"14691","record_id":"<urn:uuid:8c9bf76e-b24a-4a8b-9c51-c3c2b0f5a294>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
Villa Park, IL Math Tutor
Find a Villa Park, IL Math Tutor
...I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the Glenview area the past four years and have
tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands ...
20 Subjects: including logic, geometry, trigonometry, precalculus
...I also have experience programming in SAS for larger data analysis problems. Besides stats, I have a lot of knowledge about subjects from algebra through calculus. No matter what subject I'm
tutoring, I share my love of learning with the student.
5 Subjects: including algebra 1, prealgebra, statistics, probability
My name is Anthony L. and I recently graduated from the University of Miami (FL). I have a degree in Biology with a minor in Psychology. My specialties include biology, psychology, chemistry,
history, and math. I have experience writing scientific papers and was recently published in the Journal of Comparative Biochemistry and Physiology.
27 Subjects: including algebra 2, chemistry, prealgebra, reading
...Through high school and while obtaining my undergraduate degree in Biology, I worked full time. This required a great deal of self control, effective time management, and diversified learning
strategies. My background is in Biology, Chemistry and Math.
36 Subjects: including algebra 1, algebra 2, biology, chemistry
...I love statistics and science and am familiar with college admissions processes and earned a 5.5/6 on the writing section of the GRE. Thus, I am good at deconstructing subjects and topics into
their constituent parts and delivering information in an effective and digestible way. Furthermore, I am familiar with working with students of all age groups and academic and intellectual
35 Subjects: including SPSS, statistics, geometry, probability
B. Jayaraman, "Semantics of EqL," IEEE Transactions on Software Engineering, vol. 14, no. 4, pp. 472-480, April, 1988.
We present the formal semantics of a novel language, called EqL, for first-order functional and Horn logic programming. An EqL program is a set of conditional pattern-directed rules, where the
conditions are expressed as a conjunction of equations. The programming paradigm provided by this language may be called equational programming. The declarative semantics of equations is given in
terms of their complete set of solutions, and the operational semantics for solving equations is an extension of reduction, called object refinement. The correctness of the operational semantics is
established through soundness and completeness theorems. Examples are given to illustrate the language and its semantics.
Index Terms:
Correctness theorems, denotational semantics, equational programming, equation solving, functional programming, logic programming, object refinement, reduction semantics.
B. Jayaraman, "Semantics of EqL," IEEE Transactions on Software Engineering, vol. 14, no. 4, pp. 472-480, April 1988, doi:10.1109/32.4670
Murder Mystery: A Project for Logarithms
[Note added on 2-7-14: This is one of my most-read posts. If it helps you design a good project for your class, that's great. Please do not use it as is. Make adjustments for the age of your students
(mine are in college), and fill in the details where I've put notes in brackets. Make sure you try each part out yourself before using this with students! To do otherwise is to ask for trouble.]
There've been some great discussions on teaching logarithms recently. At JD's blog, I mentioned a project I do in my classes, and nyates asked for the details. Here it is - just enough hook to get students working pretty well in groups. (One class decided we were CSI at CCC.) :^) But in my opinion, they're still mostly following the formula. If you have any ideas for improving this, I'd love to hear them.
Years ago, I shared my office with a chemistry teacher. When she complained that we math teachers often did a bad job in Intermediate Algebra courses with the (vital to chemistry) topic of
logarithms, I decided to try to do it better. It was right at the end of the course, which always means neglect, so I moved it up a bit somehow. I also wanted to pull the students in more, so I made
this project up.
This project uses Newton's Law of Cooling, though I don't mention that. (I often get students coming in the second day with that information. They've searched online, and found it. Good and bad - I
like the research skill and initiative, but they want to use the formula they've found, instead of reasoning it out.) I work with them to figure out:
body.temp = air.temp + excess.temp.at.time0 * b ^ t
, where body.temp is a function of time, air.temp is constant, excess.temp.at.time0 is how much hotter the body was than the air when first measured, b < 1 (exponential decay), and t is measured in
hours or minutes. The only log work necessary is solving c = b^t for t (take the log of both sides, so t = log(c)/log(b)), so this doesn't get them practicing the other log properties.
To help them remember when to use logs, I tell them that historically logs helped people multiply, divide, and find roots, but that now calculators do all that, so
"the purpose of logarithms is to get the variable out of the exponent.
" (What do you think? Is that problematic?)
I have them get in groups, and give them maybe half an hour each day to work on the assignments. They've worked with exponential functions, and I start this project as we begin working on logarithms.
I play the theme music from Gilligan's Island to start class on the day we begin, and I tell them...
"Our whole class has gone on a cruise together and been shipwrecked. There's plenty to eat and drink, so no one's too stressed. But then, our classmate John Doe is murdered! (He was so quiet, you
may not remember him.)"
Then I give them this:
Shipwreck and Murder
You’ve all been shipwrecked on a tropical island - a wonderful place, with bananas, coconuts, fish, and a pretty constant temperature of 80°. Your classmate, John Doe, has been murdered. You know
there’s no one else on the island - it was one of your classmates that did it! None of you will sleep peacefully in your flimsy grass huts until the murderer is discovered.
You have watches, thermometers, and other simple tools, but no experts on murder investigations. The day John was murdered, everyone was walking around the island by two’s, noting its features, in
hopes that would help you all figure out where you are. It turns out that everyone walked by the spot where John's body was found, and recorded the time when they were at a nearby spot (where they
could see a volcano on the next island). Figuring out the time of death would likely narrow down the suspects to four or less.
Angel had a hunch that knowing the body temperature would help determine the time of death. So, at 1 pm, she checked the temperature of John’s corpse. It was 96.1°. Then at 2 pm, it was 91.7°.
Finding the murderer will be our goal. It might take us a few days. (I hope you don’t mind some sleepless nights...)
One more clue… this comes from Daniel, “I read a lot of murder mysteries. In one of them, this detective says, ‘A dead body cools off just like a hot cup of joe.’ I don’t know if that helps or not…”
~ ~ ~ ~ ~ ~ ~ ~
[Change Angel and Daniel to the names of students in your class.]
After they've read this, and asked me questions about fingerprints, how he was killed ("looks like a coconut to the head"), and a few other distractions, I ask them to do the following assignment.
Assignment 1
[Do each of these assignments in groups of 3 or 4. If you want, you can turn in one copy per group. It will be important later to be able to describe how your understanding of the problem changed
over time, so each person should keep neat notes on the case.]
We want to think about how a hot cup of coffee cools off.
1. What would be a reasonable starting temperature?
2. After about how long would it be cold?
3. About what temperature is it when it’s cold? (Why?)
4. Now let time be the x-axis (t-axis) and temperature (T) be the y-axis, and (on graph paper) graph temperature versus time for a cup of coffee, using what you know from common sense. Does a straight line graph make sense for this?
~ ~ ~ ~ ~ ~ ~ ~
I get a few volunteers to promise to actually measure the temperature of a cooling cup of hot water, and point out that old fashioned mercury thermometers will break and other body temp thermometers
might break - they'll need a lab thermometer or a cooking thermometer. No matter how many people promise to do this, I know I might have no data by the next day, so I've saved data from an old class.
Assignment 2
Sarah says “We need some numbers here.” And she boils some coffee up over a campfire and measures its temperature with a thermometer Jessica provides.
Here’s what she gets:
The coffee starts out at 176 degrees, and cools off like this…
Min Degrees
1. If she measured the coffee at 2 hours and 3 hours, what temperature would it be?
2. Graph this data, and connect the points with a smooth curve.
3. So we can conclude that this graph has what line as an asymptote?
4. Give an example of a function with this asymptote.
~ ~ ~ ~ ~ ~ ~ ~
Assignment 3
John Doe’s dead body was lying near the viewing spot for the volcano, and it turns out that there was only one path going by that spot. So, after checking with each other, and remembering who passed
whom, you all agree that the murderer was most likely one of the people at that spot right before or after the time of death.
Below are the times that each pair walked by the volcano viewing spot:
Colleen & Mouang 11:15am
Tiana & Armoriana 11:38
etc... (all class members listed)
Paolo & Vithaya 1:24pm
When you figure out the time of death, you’ll know the 4 most likely suspects.
Time of Death:
[Note: Before typing up this list in assignment 3, I've figured out the time of death, checked with the class to find out whether anyone objects to playing the killer, and made sure no one who'd be
uncomfortable with it will be a suspect. The idea is that the time of death will be between two of the times listed, and the 4 people listed for those 2 times are the suspects. To find the time of
death, they'll solve the equation body.temp = air.temp + excess.temp.at.time0 * b ^ t for the time that the body was 98.6 degrees, which will be the time of death. If we've let the first temperature
measured be at time 0, then the time we get from this equation will be negative. That's a nice switch, since some of them think story problems can never have negative answers.]
[Note added on 2-7-14: If you are a teacher planning to use this, the idea is to use your own students' names here. Make sure none is walking by at the exact time of death, so that the pair just
before and the pair just after are both suspect, and have pairs walk by every 5 to 15 minutes.]
~ ~ ~ ~ ~ ~ ~ ~
Assignment 4: The Next Day
Of course, all 4 suspects swear they're innocent. The next day, Rasha finds me dead, left lying right in the clearing.
My body temperature is 93.1°, and it’s 9:46am. You check at 10:16am, and it’s 87.2°.
Here’s everyone’s alibis:
• Colleen, Tiana, Robyn, Pardeep, Maureen, Jianfei, and Tayyaba were all swimming together from 8am to 9:30am.
• Brandon, Adeyinka, Cristina, Ash, Cookie, and Paolo were all looking for clams together from 8:30am to 9:30am.
• Mouang, Armoriana, JoAnn, Adam, Denisha, Edith, and Angel were all gathering coconuts together from 9am until they heard Kevin’s screams.
• Daniel, Josue, Natalie, Danielle, Dwight, and Vithaya were hiking from 9:30 until they heard the screams.
Please find the killer before someone else is murdered!
[Note: Students often like to accuse me of being the killer, so I get killed next. I tell them I think it's because I knew too much. The list of names puts each of the 4 suspects in a different
group. The one without an alibi is the serial killer.]
I do this project in Intermediate Algebra (the community college equivalent of a high school Algebra II course, done in one semester) and in Pre-Calc. In Pre-Calc, I often end with an assignment to
each write up their closing arguments as prosecuting attorney, explaining to the jury how we know the time of death, and why that means the prime suspect is the killer. A lot of them have fun with
that assignment.
I give them a problem like this on their next test, and it's not great. I wish I had collected data on the success rate on that question. It's probably less than half of the students. Worse than other types of problems.
[Notes added on 2-7-14:
1. Well, I'm not sure when it shifted, but they do better on this test problem these days.
2. If you plan to use this, you will need time to fix up the two lists of names. Work the problems out ahead of time, so you know how it will play out in class.]
Please comment, critique, and suggest improvements.
56 comments:
1. John Doe's initials make me awfully suspicious...
I'm going to play with this a bit. I want to see if I can modify it so that I can squeeze it in over a series of several regular lessons (with the abundance of extra time that I always have).
But, at first blush, this is great. I am just frustrated trying to figure how to work it in.
2. Thanks for sharing! We just finished a quick unit on logs in precalc - I'm going to try and find a way to use this when we get back from spring break.
3. I'm glad you both like it. Tell me how it goes. I'm thinking it will be easy nowadays to get that theme music. When I first did this, it was pre-internet. My mom found a tape at the library of TV
theme songs.
4. This is a great lesson. I just wrote about it on my blog (mathforlove.com). Thanks for the great blog!
Thanks for this post--I just linked to it on my blog (it's called "math for love" if you'd ever like to check it out). I've really enjoyed reading your blog since I discovered it recently. I'm
very curious though: are you suggesting that people didn't do well at actually absorbing the mathematics of logarithms after this lesson? I would imagine it would work really effectively.
6. I'm impressed you can work with them to make that formula make sense, without needing calculus. I guess I can kind of imagine how it would go, but maybe you can say something, or give a link?
7. That's a nice activity; I'll give it a try with my class when we get to logarithms next month. Thanks for sharing this, Sue!
8. Hey Sue - I just wrote an incredibly long comment that got eaten by the computer. I'll try again, but quickly:
1) This lesson seems really fun!
2) In assignment 3 they have to solve the equation bodytemp = airtemp + (tempattimeofdeath-airtemp)*b^t for t. When and how do they learn how to solve this equation? Was that taught previous to
the whole project, or during the project, and is the body-temp scenario used to explain the way the solution works or just to motivate the question?
3) Since you asked, I do think "the purpose of logs is to get the variable out of the exponent" is problematic. Logs have lots of other uses too (in stats, to re-express skewed data to increase
symmetry, for example; in all branches of math, to turn problems about multiplication into problems about addition; etc). More importantly, I think it sends the wrong message to treat a
fundamental object like it has a single purpose. Ideas come into being to solve specific problems, but once they exist we can use them for whatever uses our imagination and resourcefulness can
put them to. Students should get to see this in order to understand math as a creative and evolving field. I think it's empowering for them to see it this way because it invites them into the
history of math rather than being mere consumers of the products of that history. Am I taking your question too seriously?
9. Thanks, Ben. You are definitely not taking my question too seriously.
I don't have a good sense of when folks need to use logarithms for other purposes besides getting the variable out of the exponent. (Except, uh yeah, the chemistry work my former colleague
pointed to, that got me started on this whole project.)
I think I have a problem with trying to please the students. They want easy answers, easy procedures, and I think I've provided lots of things like that in the past.
[For example, when we're working on factoring problems, I'd say "I think I can..." while writing the two sets of parentheses, and putting in the x's that multiply to x^2. I was just silly enough
about it to be memorable, and at least one student uses it as a cue to remind her how to start. That one's probably not a problem, but if it seems so to you, please do say so.]
>When and how do they learn how to solve this equation?
Some before, and some during. It's definitely taught more than discovered. I have them take the log of both sides, once they get it down to c=b^t. The ones who've looked up Newton's law of
cooling often do it differently. No problem when they get it right, but harder to figure the partial credit on wrong answers.
>is the body-temp scenario used to explain the way the solution works or just to motivate the question?
Not sure what you're asking here. We definitely talk about how temperature change works. I'm thinking of the Eric Mazur post now, and temperature change does not seem like one of the topics
students would have misconceptions about. But maybe they do...
I do talk about how we say our coffee got cold, but it's no colder than the air.
Sorry about the system eating your comment. That's the second one recently. I wonder what the deal is. I have no idea what the character limit is for comments.
10. Hi Dan and David (if you're still here). I'm glad I found your comments.
Dan, I do think they pretty much try to memorize the procedure. It might be something that will take a long time to sink in, no matter what.
David, good question. I try to get them to experiment with hot water and thermometers. (In one class early on, I felt responsible for a number of mercury spills, when a bunch of students did this
with fever thermometers.)
Even if they don't, it seems reasonable that the liquid won't get colder than the air, and that the air temp will be an asymptote. That naturally leads to this equation, more or less.
Does that make sense? (If it doesn't, let's talk, and I can write another post about the formula part. My email is suevanhattum on hotmail.)
11. >temperature change does not seem like one of the topics students would have misconceptions about
Re-reading this months later, and I can't believe I said that!
The students do have some big misconceptions. They want the temperature change to be linear - straight line down to air temp, and then zap, no more change. I should ask them on a test after the
project to show a graph of temperature change (no numbers, just shape).
12. So glad to find this while searching today! I have a pre-cal class that I try to do projects for every unit we cover, and I was stymied for our exponential/logarithmic unit. Thanks!! I'm really
looking forward to seeing how this goes.
13. Hi Dawn, I'd love to hear how it goes.
14. Thanks so much for sharing this! I used parts of your project and other things I've found to create my own murder mystery. I'm excited to use it in my classroom!
15. Ashley, I'd love to see what you've put together. My email is suevanhattum on the hot email system.
1. Sue,
Just found this project and am hoping to use it in Pre-cal. Did Ashley ever email you her variation on a theme? Would love to see what she has done.
2. I don't think she did. But it's so long ago I wouldn't remember it. (I searched on 'ashley' in my email.)
16. So glad to find this site...I teach GT students and our Alg I we use CMP2 from MSU that is inquiry-based, so the temp cooling they learn in a unit called "Growing, Growing, Growing" where we
actually cooled hot chocolate (and cider and tea...and noticed they cooled at different rates but the shape of the curve was the same, and then graphed first and second differences...cool
calculus concepts at algebra I!) Anyway, I'm going to try this this coming week to see if it makes more sense of logs for them. As for the "too simple" question, I wanted to also relate some
astrophysical data b/c I know that was when I used logs a lot in my own educational past...am looking for a Star Wars kind of project to come to me---anyone seen anything like that?
17. I haven't. Please let me know what you come up with!
18. Just saw this post and I am really looking forward to trying to use it with my class. I plan on trying to extend it a little with my Precalculus class, but haven't figured out quite how to do
that yet. If you have any suggestions or if you have made any modifications since you posted it please let me know. Thanks, Courtney.
19. Hi Courtney, I haven't modified it any. If you come up with any extensions, I'm interested!
We're in the beginning stages of it right now.
20. Where are the answers!!!!
21. Hi anon, I don't usually post 'answers'. If you'd like help understanding how this works, feel free to email me (suevanhattum on hotmail).
22. What is supposed to be done in Assignment 4? How do we find the time the murder occurred?
23. This project should come after a unit on exponential functions, and using logarithms to solve them.
Then it's helpful to know that dead bodies (and coffee) cool off like an exponential decay function, using the surrounding temperature as the asymptote (so that what decays exponentially is the
'extra heat').
In my class, we use y=a*b^t+c as a template, and find the values for a, b, and c. (There are other ways to work with this...) We let the first given time be t=0, our second data point comes from
the other time, and we also know that the body was 98.6 degrees just before dying.
I hope this is helpful.
24. So glad I found your post! I plan to start this in 2 days. Very fitting that the class just watched the pilot episode of LOST to parallel Lord of the Flies. Might be kinda cool to link up with an
English teacher for this unit too! Since there is a volcano near by.. it might be fun to use the Richter Scale for an earthquake near by.
25. I also wondered if you happen to have a rubric for grading the project?
26. I don't have a rubric. I test them on a similar problem on the test. (And they do better on this type of problem than they do on population growth problems. I think they've learned this type, but
not the bigger concept.)
27. Just worked through all of the math. I thought it would be fun to extend the lesson a bit and I am having 2 suspects in the end. One went hunting and killed a boar. The class will find out that
the boar was killed in the same minute as the teacher, so it rules out that suspect. I wanted it to be more challenging than just looking for the missing person from a list of alibis.
28. Nice!
(I think this post gets more hits than anything else on my blog. Maybe others will use your alternate ending too.)
29. some one should post the answer so i can see if im right.
30. My teacher gave me a similar problem but i dont really understand it how do you do assignment 2?
31. Please ask your teacher for help. To offer the right help involves understanding which parts the student already understands, and which parts she doesn't. I can't help you without seeing what
makes sense to you and what doesn't.
32. I am going to try this problem starting next week in my high school pre-calculus classes. I am introducing logarithms and was looking for an experiment to use next week. What I think I will do,
though, is microwave cups of water and use the chemistry teacher's thermometers to collect data. I will not give them the scenario yet. We will collect and graph the data as a classroom exercise
as preparation for speaking about logarithms. I will combine assignments 1 and 2 due to limited time with an A/B day schedule.
Very exciting!!!
33. Michelle, I'd love to hear how it goes.
1. Sue,
In the equation, bodytemp = airtemp + (tempattimeofdeath-airtemp)*b^t what does the b of the b^t represent?
34. The rate of exponential decay (sort of). If you prefer to use e^rt, that works too. I think this representation is easier. If you lost a quarter of your excess heat each hour (and you measure
time in hours), b would equal .75. So it's actually the percentage of excess heat retained.
35. OK...I think I will use e^rt so that the students use "e" to calculate the answer. If I use e^rt, how would we determine rate (r) and when we solve for t, it will be in hours, right?
36. When I wrote "I think this representation is easier" above, I meant my representation. I'm curious why you prefer to use e.
Here's a bit more about how it goes now, cobbled together in part from a comment of mine above:
We use y=a*b^t+c as a template, and find the values for a, b, and c.
We know the temperature will eventually cool of to be the same as the air temp, so that's the asymptote, or the shift, and must = c.
We always start with two time and temp pairs as data. We let the first given time be t=0, our second data point comes from the other time (measured as minutes or hours since the first time).
Plugging in these two data points to the template lets us find a and then b. (If we didn't use t=0, it would still be possible to find a and b from the two data points, but much harder.)
We also know that the body was 98.6 degrees just before dying. Now that we have a proper function, we can find the time of death from this.
37. I wanted to use "e" because we had just introduced it and it would be a way to apply "e" but I see your point. I solved the equation and found that t=-.44 which is a fraction of an hour right?
When multiplied by 60, the answer is -26 which means 26 minutes before the 1st temp of the corpse was taken. Is that right?
38. I don't know without re-doing it. Try plotting your points as a graph, using desmos, if that will help you make sense of it. It sounds about right...
39. I'm working through this project. Too fun! Next step is to find the Gilligan's Island soundtrack. I wanted to check my result for Assignment 4...the death of the teacher. I got 38.618 minutes.
Not feeling like this is correct. Thanks for your help!
40. I just edited the post to point to a soundtrack on youtube. Thanks for pointing that out.
Email me with the steps you took, and I'll point to your mistake. (suevanhattum on hotmail)
41. I found my mistake. Don't you know it was in the arithmetic, not in the mathematics. Thanks for this fun activity!
42. Sue,
I am wondering if you happen to have even more information about this project, or any sort of additional materials you use. This looks like a fun project and I would like to start it next week
but have very limited time to put things together this week. My email is aderosa@mancosre6.edu. Thank you!
43. I tried to email you and it bounced. Try emailing me directly (mathanthologyeditor on gmail).
44. Hey Sue,looks like a great project, do you have an answer key for this you can post?
45. Nope. The teachers can figure out the math. And I don't want it searchable for students. You are welcome to email me with particular questions.
46. My daughter got this project for her 8th grade math class. The teacher gave only the 3 pairs that you have mentioned in this for assignment 3. The time of murder she got is not very close to any
of the three values and she thinks the in between values for the other pairs should be there to find the true murderer. The time (given), closest to the value she got is about 1 hr before the
murder. Are the 3 times given in your problem enough to find the murderer?
47. No, that's why I said I list the rest of the class in between. The way I do it is to put students in pairs, and have one pair walk by every few (say 4 to 15) minutes. I make sure none walks by at
exactly the right time, so there are two pairs who are suspects. The second murder splits everyone up, so only one person is a suspect for both murders.
Did the teacher use the names I gave here, instead of the kids in the class?
48. It was exactly what you posted here. The time she got was roughly an hour after the second pair. She said that the 3rd pair went by after the body was discovered and so they can't be suspects. So
it was very confusing to find 2 pairs of suspects. They were supposed to make a poster for the puzzle. She did everything in the poster without writing the suspects or the murderer. Just wrote
the titles, suspects and murderer and wanted to ask the teacher about this part before completing it. When she went to ask her, the teacher has asked her to turn it in for a lower grade as she
didn't complete it. Never answered her question too. I didn't check her answers, but I am pretty sure she got the answers right.
49. I don't understand. Email me to discuss answer if you want. (mathanthologyeditor on gmail)
50. This comment has been removed by the author.
51. What do you think, Nelson? (And if you're not old enough to drink coffee, ask someone who is. Or google it. Or google the time McDonald's got sued because someone got burned by their coffee. Lots
of options. No need to ask here.) You could buy a cup of coffee to see how long it takes to get cold. Or you could heat up some water, and check that.
Sorry if I sound snippy. I am concerned because I've also gotten questions in email that seem like people don't want to think. I'm not sure yet what I want to do about it...
52. This comment has been removed by the author.
53. Are you telling me you have no access to a way to heat water up? You can at least test the time, can't you?
Comments with links unrelated to the topic at hand will not be accepted. (I'm moderating comments because some spammers made it past the word verification.) | {"url":"http://mathmamawrites.blogspot.com/2010/04/murder-mystery-project-for-logarithms.html?showComment=1270442550023","timestamp":"2014-04-16T13:22:22Z","content_type":null,"content_length":"238880","record_id":"<urn:uuid:a0895182-5fd8-421e-8c89-5df552756d08>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numbers in bar graphs
I spent far too much time today figuring out how to put numbers in bar graphs. i.e. how to transform this:
into this:
When reading papers, I often wish numbers were included like this, and so resolved a while ago to always do so myself. Plotting the numbers allows someone else to replot them when doing some sort of
a comparison. (Without needing to implement your algorithm, or ask you for your doubtlessly long-lost data, or (ugh) physically measuring the lengths of your bars.) However, now seeing how tedious
this can be, I understand why it is rarely done.
In any case, I wrote a function add_bar_nums.m that you can run after doing a bar plot that will add the numbers like above. There are options for rotating the text, changing the display format, and so on.
2 Responses to Numbers in bar graphs
1. Just a different viewpoint. If those values are not so different, like in your example, why do you want to show a graph anyway? Why not just report numbers (e.g. in a table)? On the other hand,
when values are so apparently different, a graph may not need numbers to be put on the bars. Anyway, your function add_bar_nums.m will be helpful as we can decide whether to put numbers or not
2. I agree that graph is very boring! I’m not sure why the limits are set so poorly, either…
This entry was posted in Uncategorized and tagged data, graphs, visualization. Bookmark the permalink. | {"url":"http://justindomke.wordpress.com/2009/05/22/numbers-in-bar-graphs/","timestamp":"2014-04-16T04:11:28Z","content_type":null,"content_length":"52013","record_id":"<urn:uuid:86bcd8cc-22ad-4086-ac1f-9a5fefe44522>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
General Information
Math 60330
Basic Geometry and Topology
Fall Semester, 2010
General Information
Text: There is no required textbook. Here are notes for the point set topology part of the course (covering the first fives lectures). See the syllabus for information about additional references.
There are some books for the course on the reference shelf of the math library. Syllabus: Grades: Grades will be based on a total of 450 points, distributed as follows:
Midterm Exam - 100 points;
Final Exam - 150 points;
Quizzes - 100 points;
Homework - 100 points;
Exams: The midterm as well as the final exam will both consist of two parts: a take-home and an in-class part, each worth half of the total points. The in-class part, similar to the quizzes, is about
definitions and theorems, possibly requiring some basic calculations, but involving no proofs. The take-home part, similar to homework problems, will involve more extensive calculations and proofs.
Homework: Homework is an integral part of the course. You are permitted, in fact encouraged, to work together and help one another with homework, although what you turn in should be written by you.
Providing detailed arguments in your homework is important, since learning how to write mathematics in a rigorous and yet concise and readable way is an essential part of graduate school in
mathematics. Homework problems will be posted on the course web-page and will be due on a to be determined day each week. After the due date I will provide a detailed solution to each homework set.
Pre/post lecture preparation: As with most math classes, the material in one class often depends heavily on stuff covered in previous classes. Hence it is a good idea to look over your notes before
the next class. I'm very happy to discuss questions concerning the material of previous classes at the beginning of each class. Quizzes: There will be weekly quizzes on a to be determined day of the
week testing basic understanding of definitions and theorems.
Course website: www.nd.edu/~stolz/Math60330(F2010)/ | {"url":"http://www3.nd.edu/~stolz/Math60330(F2010)/general.html","timestamp":"2014-04-19T14:51:27Z","content_type":null,"content_length":"2904","record_id":"<urn:uuid:8f36872c-6356-4459-9827-917e09bf0ead>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
11 search hits
Measure and Integration on Lipschitz-Manifolds (2007)
Joachim Naumann Christian G. Simader
The first part of this paper is concerned with various definitions of a k-dimensional Lipschitz-manifold and a discussion of the equivalence of these definitions. The second part is then devoted
to the geometrically intrinsic construction of a sigma-algebra L of subsets of the manifold and a measure on L.
Integral point sets over Z_n^m (2007)
Axel Kohnert Sascha Kurz
There are many papers studying properties of point sets in the Euclidean space or on integer grids, with pairwise integral or rational distances. In this article we consider the distances or
coordinates of the point sets which instead of being integers are elements of Z_n, and study the properties of the resulting combinatorial structures.
There are integral heptagons, no three points on a line, no four on a circle (2007)
Tobias Kreisel Sascha Kurz
We give two configurations of seven points in the plane, no three points in a line, no four points on a circle with pairwise integral distances. This answers a famous question of Paul Erdös.
On the minimum diameter of plane integral point sets (2007)
Sascha Kurz Alfred Wassermann
Since ancient times mathematicians consider geometrical objects with integral side lengths. We consider plane integral point sets P, which are sets of n points in the plane with pairwise integral
distances where not all the points are collinear. The largest occurring distance is called its diameter. Naturally the question about the minimum possible diameter d(2,n) of a plane integral
point set consisting of n points arises. We give some new exact values and describe state-of-the-art algorithms to obtain them. It turns out that plane integral point sets with minimum diameter
consist very likely of subsets with many collinear points. For this special kind of point sets we prove a lower bound for d(2,n) achieving the known upper bound n^{c_2loglog n} up to a constant
in the exponent.
Lotsize optimization leading to a p-median problem with cardinalities (2007)
Constantin Gaul Sascha Kurz Jörg Rambau
We consider the problem of approximating the branch and size dependent demand of a fashion discounter with many branches by a distributing process being based on the branch delivery restricted to
integral multiples of lots from a small set of available lot-types. We propose a formalized model which arises from a practical cooperation with an industry partner. Besides an integer linear
programming formulation and a primal heuristic for this problem we also consider a more abstract version which we relate to several other classical optimization problems like the p-median
problem, the facility location problem or the matching problem.
Inclusion-maximal integral point sets over finite fields (2007)
Michael Kiermaier Sascha Kurz
We consider integral point sets in affine planes over finite fields. Here an integral point set is a set of points in $GF(q)^2$ where the formally defined Euclidean distance of every pair of
points is an element of $GF(q)$. From another point of view we consider point sets over $GF(q)^2$ with few and prescribed directions. So this is related to Rédei's work. Another motivation comes
from the field of ordinary integral point sets in Euclidean spaces. In this article we study the spectrum of integral point sets over $GF(q)^2$ which are maximal with respect to inclusion. We
give some theoretical results, constructions, conjectures, and some numerical data.
Integral point sets over finite fields (2007)
Sascha Kurz
We consider point sets in the affine plane GF(q)^2 where each Euclidean distance of two points is an element of GF(q). These sets are called integral point sets and were originally defined in
m-dimensional Euclidean spaces. We determine their maximal cardinality I(GF(q),2). For arbitrary commutative rings R instead of GF(q) or for further restrictions as no three points on a line or
no four points on a circle we give partial results. Additionally we study the geometric structure of the examples with maximum cardinality.
Enumeration of integral tetrahedra (2007)
Sascha Kurz
We determine the numbers of integral tetrahedra with diameter d up to isomorphism for all d<=1000 via computer enumeration. Therefore we give an algorithm that enumerates the integral tetrahedra
with diameter at most d in O(d^5) time and an algorithm that can check the canonicity of a given integral tetrahedron with at most 6 integer comparisons. For the number of isomorphism classes of
integral 4x4 matrices with diameter d fulfilling the triangle inequalities we derive an exact formula.
Nilmanifolds: complex structures, geometry and deformations (2007)
Sönke Rollenske
We consider nilmanifolds with left-invariant complex structure and prove that in the generic case small deformations of such structures are again left-invariant. The relation between nilmanifolds
and iterated principal holomorphic torus bundles is clarified and we give criteria under which deformations in the large are again of such type. As an application we obtain a fairly complete
picture in dimension three. We show by example that the Frölicher spectral sequence of a nilmanifold may be arbitrarily non degenerate thereby answering a question mentioned in the book of
Griffiths and Harris. On our way we prove Serre Duality for Lie algebra Dolbeault cohomology and classify complex structures on nilpotent Lie algebras with small commutator subalgebra. MS Subject
classification: 32G05; (32G08, 17B30, 53C30, 32C10)
Two at One Blow: Optimized Dynamic Dispatch Planning for Gelbe Engel and Freight Elevators (2007)
Jörg Rambau Cornelius Schwarz
We model two different dynamic dispatch planning problems: the dynamic dispatching of Gelbe Engel (ADAC's roadside assistance vehicles) and the control of freight elevators in a distribution warehouse of
Herlitz PBS AG. We use a reoptimization policy that controls the system by solving static snapshot problems. For the snapshot problems that arise
we compare two modelling approaches (a flow model versus a tour model), of which only one is suitable for real-time use. The method for the dynamic dispatching of Gelbe Engel is at ADAC in | {"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/all/start/0/rows/10/institutefq/Mathematik/yearfq/2007","timestamp":"2014-04-20T23:49:09Z","content_type":null,"content_length":"39753","record_id":"<urn:uuid:be0816a7-3d2e-4fdb-8586-0af5d8ea6788>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Graph Theory: Finding The Shortest Path (Part 2)
Introduction to Graph Theory: Finding The Shortest Path (Part 2) (Posted on February 23^rd, 2013)
A couple of weeks ago I did an introduction to graph theory using Dijkstra's algorithm. I received some awesome feedback so this week I want to take it a step further and build on that using the A*
algorithm. The big difference between A* and Dijkstra's algorithm is the addition of a heuristic function. This heuristic function allows us to give a best guess of which path to take rather than
always taking the path that is the shortest distance from the source node. If you're a gamer or someone looking to get into game development this algorithm is the basis of how path finding is done in
a lot of games.
Dijkstra's Algorithm Refresher
Given a source node in a graph we examine all connected edges for the shortest distance from that node. We then mark the source node as visited so we don't have to check the edges again. Based off
our edge examinations we then go to the next closest node that hasn't been visited. We repeat this process until we reach our destination node. Then we check the path we took to get there and print
that out.
If you'd like a more detailed explanation definitely check out my blog post on the algorithm.
The Heuristic Function
The goal of this function is to give us a dynamic way to pick a path to our target node. An example of this would be planning a route home from work. You may normally take the highway but as you get
closer to the highway you realize that there is an accident blocking the road. So instead of taking the highway home you use back roads which get you home faster. You took a guess that the highway
would be the quickest way home but after checking the path it turned out you need take an alternate route. The heuristic function is used to give a best guess of the path to take.
There are several different ways to come up with a heuristic function but it is important to make sure that the function is admissible (never overestimates the cost of reaching the goal). A common
heuristic function is the Chebyshev Distance which allows us to move diagonally along a grid rather than just up, down, and side to side like the Manhattan Distance. If you're making something where
diagonal movement isn't allowed then definitely use the Manhattan Distance. The Chebyshev Distance states that the distance between two nodes is the greatest of their differences along any coordinate
dimension. The code for this calculation is easy to implement.
chebyshev_distance = [(current.x - target.x).abs, (current.y - target.y).abs].max
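For comparison (not used anywhere else in this post), the Manhattan version just adds the two differences instead of taking the max:
manhattan_distance = (current.x - target.x).abs + (current.y - target.y).abs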
The Setup
Just need one gem for this one. PriorityQueue will allow us to have a heap where we can change the priority and have the tree automatically rebalance which is nice.
gem install PriorityQueue
Creating the Graph
Let's start by creating a grid of a specified size. This grid will be our graph and will be a square for simplicity. For A* each node in the graph will need to store some extra information about it
so let's just create a Node class that will store this information.
class Node
  def initialize(x, y)
    @x = x
    @y = y
    @obstacle = false
    @g_score = Float::INFINITY
  end

  def x()
    return @x
  end

  def y()
    return @y
  end

  def set_obstacle()
    @obstacle = true
  end

  def obstacle()
    return @obstacle
  end

  def set_g_score(score)
    @g_score = score
  end

  def g_score()
    return @g_score
  end

  def to_s
    return "(" + @x.to_s + ", " + @y.to_s + ", " + @obstacle.to_s + ")"
  end
end
The x and y variables will give us our coordinate position, the obstacle variable will tell us if we can pass through this position or not, and the g_score variable will be the best distance from the
source node to this node. The g_score for each node starts at infinity since no paths have been calculated yet.
Next we need to create a grid out of these Nodes. So let's create a graph class and insert a grid of a size we choose.
class Graph
  def initialize(size)
    @size = size-1 # Last index in 0 based array
    @grid = []
    for y in 0..@size
      row = []
      for x in 0..@size
        row.push(Node.new(x, y))
      end
      @grid.push(row)
    end
  end

  def to_s
    return @grid.inspect
  end
end
Our graph takes a variable called size and creates a square of nodes of that size. Here's basically how our graph looks in memory if we make a size 8 graph.
Something to keep in mind is that the ordered pair (0,0) is at the top left. So as Y increases you actually go down instead of up. This just makes array access easier. Otherwise you'll need to
convert the x and y variables every time to map to the bottom of the graph which is definitely doable. So just remember (0,0) is at the top left and our positive y axis is pointed down.
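If you'd rather think of (0,0) as the bottom left, the conversion is just a flip of y. A tiny helper along these lines would do it (purely illustrative, it isn't used anywhere in the code below):
def node_at(x, y_from_bottom)
  @grid[@size - y_from_bottom][x] # flip y so 0 means the bottom row
end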
Implementing A*
Before we start here is a great graphic from Wikipedia that shows how the algorithm works.
The algorithm will take in the starting x and y position as well as the finishing x and y position. We can then convert those coordinates into the actual nodes they are referencing. We can also use
this time to initialize our variables that will be keeping track of our nodes.
def shortest_path(start_x, start_y, finish_x, finish_y)
  start = @grid[start_y][start_x]
  finish = @grid[finish_y][finish_x]
  visited = Set.new # The set of nodes already evaluated
  previous = {} # Previous node in optimal path from source
  previous[start] = 0
  f_score = PriorityQueue.new
Since we'll never visit the same node twice we want to make sure we don't waste time evaluating the same node more than once so we create a set of visited nodes for easy O(1) checking. We'll also
want to keep track of how we got to a node. We keep track of nodes the same way as we do in Dijkstra's algorithm. A* improves on Dijkstra's by having an f_score. The f_score variable will be a
min-heap (priority queue) of a node's distance from the source + the heuristic calculation. Since Set and PriorityQueue are not implemented by default in Ruby let's go ahead and require them at the
top of our file.
require 'priority_queue'
require 'set'
A node at most can have 8 other touching nodes (up-left, up, up-right, right, down-right, down, down-left, and left). In order to evaluate these 8 nodes that could be possibly touching the node we're
looking at I'm going to create an array of all possible combinations.
# All possible ways to go in a node
dx = [1, 1, 0, -1, -1, -1, 0, 1]
dy = [0, 1, 1, 1, 0, -1, -1, -1]
Using this array we'll always check the node to the right first. So if our starting node is (0,0) then the next node we evaluate is (0 + dx[0], 0 + dy[0]) = (1,0). This will be easier to see when we
actually use it for the code. Right now we're just initializing it.
The final initialization step will be to set our start node's g_score to 0 and calculate the f_score. To calculate the f_score we'll need to implement the heuristic function (Chebyshev Distance) we
talked about earlier. The code so far looks like this:
def shortest_path(start_x, start_y, finish_x, finish_y)
  def heuristic(current, target)
    return [(current.x - target.x).abs, (current.y - target.y).abs].max
  end

  start = @grid[start_y][start_x]
  finish = @grid[finish_y][finish_x]
  visited = Set.new # The set of nodes already evaluated
  previous = {} # Previous node in optimal path from source
  previous[start] = 0
  f_score = PriorityQueue.new
  # All possible ways to go in a node
  dx = [1, 1, 0, -1, -1, -1, 0, 1]
  dy = [0, 1, 1, 1, 0, -1, -1, -1]
  start.set_g_score(0) # Cost from start along best known path
  f_score[start] = start.g_score + heuristic(start, finish) # Estimated total cost from start to finish
Now that everything is setup let's start churning through nodes that need to be evaluated.
while !f_score.empty?
  current = f_score.delete_min_return_key # Node with smallest f_score
  visited.add(current)
This just grabs the lowest f_score from our priority queue and saves the node off to the current variable and adds it to the visited set so we know not to check it again.
Next we want to evaluate all the nodes that touch the current node. This is where our dx and dy arrays come in to play.
# Examine all directions for the next path to take
for direction in 0..7
  new_x = current.x + dx[direction]
  new_y = current.y + dy[direction]
  if new_x < 0 or new_x > @size or new_y < 0 or new_y > @size # Check for out of bounds
    next # Try next configuration
  end
The direction variable will be our index in the dx and dy array. We check to see if a node exists in each of the 8 possible directions. If we get a negative number or the value is greater than the
size of the grid than we know we're on the edge of the grid. Since we can't check nodes outside of the grid we just say next which takes us to the next iteration of the for loop.
Once we get past the out of bounds check we know that the node exists in our grid. We'll call this node neighbor since it's adjacent to our current node. We then check if we've visited this node
before, if it's set to be evaluated later, or if it's an obstacle. In any of the cases we don't want to evaluate any node more than once, unless it's an obstacle which we don't want to evaluate at
all, so we skip that node and go to the next node.
# Examine all directions for the next path to take
for direction in 0..7
  new_x = current.x + dx[direction]
  new_y = current.y + dy[direction]
  if new_x < 0 or new_x > @size or new_y < 0 or new_y > @size # Check for out of bounds
    next # Try next configuration
  end
  neighbor = @grid[new_y][new_x]
  # Check if we've been to a node or if it is an obstacle
  if visited.include? neighbor or f_score.has_key? neighbor or neighbor.obstacle
    next
  end
When navigating the grid it's important to remember that not all paths are created equal. Think about a square with a side length of 10. The edge distance on any side of the square is going to be 10.
However, if we want to go diagonally across the square we have to bust out some Pythagoras. 10² + 10² = 200. When we take the square root of 200 we get about 14.1. One optimization that a lot of games
make is rounding this number to 14 so you don't have to worry about floating point calculations. Let's go ahead and model that for our A* implementation. All diagonals will have a distance of 14 and
all non-diagonals will have a distance of 10.
#dx = [1, 1, 0, -1, -1, -1, 0, 1]
#dy = [0, 1, 1, 1, 0, -1, -1, -1]
# Examine all directions for the next path to take
for direction in 0..7
  new_x = current.x + dx[direction]
  new_y = current.y + dy[direction]
  if new_x < 0 or new_x > @size or new_y < 0 or new_y > @size # Check for out of bounds
    next # Try next configuration
  end
  neighbor = @grid[new_y][new_x]
  # Check if we've been to a node or if it is an obstacle
  if visited.include? neighbor or f_score.has_key? neighbor or neighbor.obstacle
    next
  end
  if direction % 2 == 1
    tentative_g_score = current.g_score + 14 # traveled so far + distance to next node diagonal
  else
    tentative_g_score = current.g_score + 10 # traveled so far + distance to next node vertical or horizontal
  end
Our direction variable is essentially the index for our dx and dy arrays. All odd indices are diagonal paths. You can tell this by the fact that there is a 1 or -1 in both the dx and dy array at the
same position. So when we're on an odd index we set our tentative_g_score to the current node's g_score + 14 which is our diagonal distance.
We're almost done! We just need to relax. Our relax function will essentially be the same as our Dijkstra's implementation.
# Examine all directions for the next path to take
for direction in 0..7
  new_x = current.x + dx[direction]
  new_y = current.y + dy[direction]
  if new_x < 0 or new_x > @size or new_y < 0 or new_y > @size # Check for out of bounds
    next # Try next configuration
  end
  neighbor = @grid[new_y][new_x]
  # Check if we've been to a node or if it is an obstacle
  if visited.include? neighbor or f_score.has_key? neighbor or neighbor.obstacle
    next
  end
  if direction % 2 == 1
    tentative_g_score = current.g_score + 14 # traveled so far + distance to next node diagonal
  else
    tentative_g_score = current.g_score + 10 # traveled so far + distance to next node vertical or horizontal
  end
  # If there is a new shortest path update our priority queue (relax)
  if tentative_g_score < neighbor.g_score
    neighbor.set_g_score(tentative_g_score)
    previous[neighbor] = current
    f_score[neighbor] = neighbor.g_score + heuristic(neighbor, finish)
  end
end
We're basically checking to see if our new path is going to be shorter than the previous path we found. If it is let's update how we get there, the distance to get there, and let's add the node to
our priority queue to be evaluated in the future.
After this all that's left to do is simply print out the path. We can use the same procedure as we used for Dijkstra's. That is, when we get the current node we then check to see if the current variable
is the same node as our finish variable. If it is print out the path and return. I also created a print_path function so that I could see the path graphically. Here's the necessary code to traverse
the grid from the finish node back to the start node:
def print_path(path)
  for y in 0..@size
    for x in 0..@size
      if @grid[y][x].obstacle
        print "X "
      elsif path.include? @grid[y][x]
        print "- "
      else
        print "0 "
      end
    end
    print "\n"
  end
end

if current == finish
  path = Set.new
  while previous[current]
    path.add(current)
    current = previous[current]
  end
  print_path(path)
  return "Path found"
end
At the end of the shortest_path function we can also return "Failed to find path" if no path can be found. Now we have all we need to test our code. Let's try some configurations.
g = Graph.new(8)
# Wall of obstacles matching the X's in the output below
for y in 0..6
  g.set_obstacle(1, y)
end
puts g.shortest_path(0,0,4,2)
- X 0 0 0 0 0 0
- X 0 0 0 0 0 0
- X 0 0 - 0 0 0
- X 0 - 0 0 0 0
- X - 0 0 0 0 0
- X - 0 0 0 0 0
- X - 0 0 0 0 0
0 - 0 0 0 0 0 0
Path found
g = Graph.new(8)
# Diagonal wall of obstacles matching the X's in the output below
for y in 1..6
  g.set_obstacle(y, y)
  g.set_obstacle(y + 1, y) if y < 6
end
puts g.shortest_path(0,2,6,3)
0 - - - 0 0 0 0
- X X 0 - 0 0 0
- 0 X X 0 - 0 0
0 0 0 X X 0 - 0
0 0 0 0 X X 0 0
0 0 0 0 0 X X 0
0 0 0 0 0 0 X 0
Path found
Pretty cool right? I've posted the final source code below for reference.
Final Source Code
require 'priority_queue'
require 'set'

class Node
  def initialize(x, y)
    @x = x
    @y = y
    @obstacle = false
    @g_score = Float::INFINITY
  end

  def x()
    return @x
  end

  def y()
    return @y
  end

  def set_obstacle()
    @obstacle = true
  end

  def obstacle()
    return @obstacle
  end

  def set_g_score(score)
    @g_score = score
  end

  def g_score()
    return @g_score
  end

  def to_s
    return "(" + @x.to_s + ", " + @y.to_s + ", " + @obstacle.to_s + ")"
  end
end

class Graph
  def initialize(size)
    @size = size-1 # Last index in 0 based array
    @grid = []
    for y in 0..@size
      row = []
      for x in 0..@size
        row.push(Node.new(x, y))
      end
      @grid.push(row)
    end
  end

  def set_obstacle(x, y)
    @grid[y][x].set_obstacle()
  end

  def shortest_path(start_x, start_y, finish_x, finish_y)
    def heuristic(current, target)
      return [(current.x - target.x).abs, (current.y - target.y).abs].max
    end

    start = @grid[start_y][start_x]
    finish = @grid[finish_y][finish_x]
    visited = Set.new # The set of nodes already evaluated
    previous = {} # Previous node in optimal path from source
    previous[start] = 0
    f_score = PriorityQueue.new
    # All possible ways to go in a node
    dx = [1, 1, 0, -1, -1, -1, 0, 1]
    dy = [0, 1, 1, 1, 0, -1, -1, -1]
    start.set_g_score(0) # Cost from start along best known path
    f_score[start] = start.g_score + heuristic(start, finish) # Estimated total cost from start to finish

    while !f_score.empty?
      current = f_score.delete_min_return_key # Node with smallest f_score
      visited.add(current)

      if current == finish
        path = Set.new
        while previous[current]
          path.add(current)
          current = previous[current]
        end
        print_path(path)
        return "Path found"
      end

      # Examine all directions for the next path to take
      for direction in 0..7
        new_x = current.x + dx[direction]
        new_y = current.y + dy[direction]
        if new_x < 0 or new_x > @size or new_y < 0 or new_y > @size # Check for out of bounds
          next # Try next configuration
        end
        neighbor = @grid[new_y][new_x]
        # Check if we've been to a node or if it is an obstacle
        if visited.include? neighbor or f_score.has_key? neighbor or neighbor.obstacle
          next
        end
        if direction % 2 == 1
          tentative_g_score = current.g_score + 14 # traveled so far + distance to next node diagonal
        else
          tentative_g_score = current.g_score + 10 # traveled so far + distance to next node vertical or horizontal
        end
        # If there is a new shortest path update our priority queue (relax)
        if tentative_g_score < neighbor.g_score
          neighbor.set_g_score(tentative_g_score)
          previous[neighbor] = current
          f_score[neighbor] = neighbor.g_score + heuristic(neighbor, finish)
        end
      end
    end
    return "Failed to find path"
  end

  def print_path(path)
    for y in 0..@size
      for x in 0..@size
        if @grid[y][x].obstacle
          print "X "
        elsif path.include? @grid[y][x]
          print "- "
        else
          print "0 "
        end
      end
      print "\n"
    end
  end

  def to_s
    return @grid.inspect
  end
end

g = Graph.new(8)
# Same diagonal wall of obstacles as in the second example above
for y in 1..6
  g.set_obstacle(y, y)
  g.set_obstacle(y + 1, y) if y < 6
end
puts g.shortest_path(0,2,6,3)
I also created a gist of this implementation so that you can view and play around with the code on Github. Definitely try implementing this in another language or if Ruby is your language of choice
try and modify/optimize my implementation. A simple modification would be adjusting the graph and A* function to allow for a rectangular shape. Let me know in the comments what you come up with.
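If you want a head start on that rectangular version, here's a rough, untested sketch of how the constructor could change. The class name RectGraph and everything else here is just illustration, and the out of bounds check in shortest_path would need the same split into width and height:
class RectGraph
  def initialize(width, height)
    @width = width - 1   # last valid x index
    @height = height - 1 # last valid y index
    @grid = []
    for y in 0..@height
      row = []
      for x in 0..@width
        row.push(Node.new(x, y)) # reuses the Node class from above
      end
      @grid.push(row)
    end
  end
end
Every comparison against @size then splits in two: new_x gets checked against @width and new_y against @height.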
As always if you have any feedback or questions feel free to drop them in the comments below or contact me privately on my contact page. Thanks for reading!
P.S. If your company is looking to hire an awesome soon-to-be college graduate (May 2013) let me know!
Tags: Ruby
• There are currently no comments. You can be first!
About Me
My name is Max Burstein and I am a recent graduate of the University of Central Florida. I enjoy developing large, scalable web applications and I seek to change the world.
Recent Comments | {"url":"http://maxburstein.com/blog/intro-to-graph-theory-finding-shortest-path-part2/","timestamp":"2014-04-20T03:09:47Z","content_type":null,"content_length":"29617","record_id":"<urn:uuid:bef8c1e2-04e1-496c-b9b6-253c33b06ff3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complexity of a weirdo two-dimensional sorting problem
Please forgive me if this is easy for some reason.
Suppose given $S$, a set of $n^2$ points in $\mathbb{R}^2$.
I want to choose a bijective map $f$ from $S$ to the set of lattice points in $\lbrace 0,\ldots,n-1\rbrace \times \lbrace 0,\ldots,n-1\rbrace$ so as to maximize the sum, over all $p$ in $S$, of the
dot product $p \cdot f(p)$.
If, instead of $\mathbb{R}^2$, I had $\mathbb{R}^1$, and I was putting $S$ in bijection with $\lbrace0,\ldots,n-1\rbrace$, then this would simply be sorting $S$ and one knows how to do that fast.
For this problem, it's not even obvious to me how to do it in a number of steps that's polynomial in $n$.
Is this easy? Is it an example of a known genre of optimization problem?
2 I am somewhat tempted to vandalize your title by inserting a colon after the word "weirdo"... Nice question, by the way – Yemon Choi Nov 4 '11 at 2:42
It would help to be more explicit about the goal. Please clarify p . f(p)? Gerhard "Ask Me About System Design" Paseman, 2011.11.03 – Gerhard Paseman Nov 4 '11 at 2:54
2 It's a dot product, with the lattice points viewed as elements of the inner product space $\mathbb R^n$. Note to asker: mathbb is the command to make your R's look cool. Your problem is certainly
a linear programming problem. I don't know anything more specific. – Will Sawin Nov 4 '11 at 3:10
7 Well, en.wikipedia.org/wiki/Hungarian_algorithm certainly works here but, perhaps, you can do even better. – fedja Nov 4 '11 at 3:14
1 Answer
To elaborate on the comments of Will Sawin and fedja: The question isn't a sorting problem, but it is a matching problem. If $S$ is your arbitrary set and $G = [n]^2$ is your grid, then
you are marrying elements of $S$ to elements in $G$, where the happiness of each marriage is your dot product $p \cdot f(p)$. Any happiness function on $S \times G$, not necessarily one
that is bilinear in the plane, can be maximized in polynomial time. That's because the convex hull of the set of permutation matrices has few facets: It's the Birkhoff polytope of doubly
stochastic matrices. You can apply the general theorem that linear programming with polynomially many facets can be done in polynomial time. For the specific case of the Birkhoff
polytope, there is an optimized linear programming algorithm, the Hungarian algorithm, that was discovered and known to be fast before the result that LP is polynomial time in general.
The Hungarian algorithm is already much faster than generic optimization strategies (which generally aren't polynomial time at all). The remaining question is whether you can devise a sorting algorithm that's even faster, maybe even quasilinear time like linear sorting. It's difficult to rule that out, but the setup is sufficiently complicated that I'd be surprised/impressed.
I guess one thing is worth trying: Whether you can prove that you climb to the maximum by separately sorting each row and column of the grid until the permutation is both row-stable and
column-stable. That looks a bit like linear programming, and it also looks like a shortcut. Even if this idea doesn't work, it might ably accelerate those linear programming algorithms
that travel along the 1-skeleton of the polytope, such as the simplex algorithm. You could also throw in sorting along other straight lines. The Hungarian algorithm does not travel along
the 1-skeleton; instead it's a dual/primal algorithm. Actually, traveling along the 1-skeleton of the Birkhoff polytope is a tricky matter because the vertices are highly degenerate. But
there are tricks for that such as desingularizing the polytope by making a generic perturbation of the facets.
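To make that last idea concrete, here is a rough Ruby sketch (untested, and only a local-improvement heuristic; nothing below proves it reaches the true maximum). Within a row the grid y coordinate is fixed, so that row's contribution to the objective is a sum of x*p_x terms plus a constant, and the rearrangement inequality says the best order within the row is by increasing p_x; the same argument with p_y applies to columns.

# points is a flat array of n*n [px, py] pairs; we place them on the grid
# {0..n-1} x {0..n-1} and locally improve the sum of p . f(p).
def row_column_sort(points, n)
  grid = points.each_slice(n).to_a            # grid[y][x] = point placed at (x, y)
  (n * n).times do                            # cap the passes so ties cannot cycle forever
    before = grid.flatten(1)
    grid = grid.map { |row| row.sort_by { |p| p[0] } }                     # sort each row by px
    grid = grid.transpose.map { |col| col.sort_by { |p| p[1] } }.transpose # sort each column by py
    break if grid.flatten(1) == before        # stop once row- and column-stable
  end
  grid
end

def objective(grid)
  total = 0
  grid.each_with_index do |row, y|
    row.each_with_index do |(px, py), x|
      total += px * x + py * y                # p . f(p) for the point at grid cell (x, y)
    end
  end
  total
end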
Not the answer you're looking for? Browse other questions tagged oc.optimization-control computational-complexity algorithms computational-geometry linear-programming or ask your own question. | {"url":"http://mathoverflow.net/questions/79999/complexity-of-a-weirdo-two-dimensional-sorting-problem","timestamp":"2014-04-19T04:58:58Z","content_type":null,"content_length":"59878","record_id":"<urn:uuid:c2042339-44ff-438c-ba42-f16c800ba4e4>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jwir3's Weblog
My intention today was to get the triangulate3d code written, but unfortunately, it looks like that isn't going to happen. Things have just been going really slow today. Thus, to adapt the schedule,
I think I am going to combine the Triangulation and Finishing of Geometry conversion into a single stage. The combination of these two should be done by the time I laid out for Convert Geometry to be completed.
The concept I have for the triangulation code is fairly straightforward. I intend to utilize the MakeMonotone algorithm given in deBerg et. al.'s computational geometry book. (I originally intended
to use a Delaunay Triangulation, but I don't think that it's necessary to do all that work - we are given a polygon in 3D, so that makes the problem a little easier than a point cloud). Now, the
problem I see is that the MakeMonotone algorithm (that is, break the polygon into y-monotone pieces, then triangulate each of these and combine) works on a 2D polygon. Normally, I thought it would be
easiest simply to project the 3D polygon onto a 2D plane, but this doesn't work if the polygon wraps over on itself (consider a polygon like the following: http://www.cosy.sbg.ac.at/~held/projects/
). Thus, my idea is to perform the following:
1. Map from 3D to 2D in some manner
2. Break polygon with holes into separate polygons without holes
3. Triangulate each polygon according to algorithm in deBerg et. al.
4. Re-construct original 2D polygon with holes
5. Revert to original 3D polygon by un-doing mapping
The problem will be to determine the mapping (Note: The removal of holes isn't trivial, but I have an algorithm for it somewhere... I just can't remember where it is at the moment. I will find it.
So, I'm going to do everything else first, then add it in later.) I thought about an LSCM-unwrap type of algorithm, but that relies on user cuts. The thing is, that if the polygon is some kind of
wierd 3D shape that loops back on itself, then it's no longer a polygon, and I probably can just ignore that case.
The code I have been working on today has been mostly to add a test application, so I can visualize the two versions of a polygon. Ideally, I want to create a split window. The top has the
un-triangulated polygon, and the bottom has the triangulated one. To begin with, what I want to do is show the polygon in 3D in the top (the original polygon), then show the polygon in 2D after the
mapping in the bottom. I am working on this application, which will be called apptri3dtest, as this is being written.
No Trackbacks/Pingbacks for this post yet... | {"url":"http://www.crystalspace3d.org/blog/jwir3/2007/07/03/design_log_6?blog=13&title=design_log_6&page=1&more=1&c=1&tb=1&pb=1&disp=single","timestamp":"2014-04-19T17:41:57Z","content_type":null,"content_length":"21058","record_id":"<urn:uuid:9b9a358e-b50b-4336-8b18-be9b011eb64d>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sear, R. P. (Department of Physics, University of Surrey, Guildford, Surrey GU2 7XH, UK): Generalisation of Levine's prediction for the distribution of freezing temperatures of droplets: a general singular model for ice nucleation. Atmospheric Chemistry and Physics (Atmos. Chem. Phys., ISSN 1680-7324, Copernicus GmbH, Göttingen, Germany), volume 13, issue 14, pages 7215-7223, 30 July 2013, doi:10.5194/acp-13-7215-2013.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. The article is available from http://www.atmos-chem-phys.net/13/7215/2013/acp-13-7215-2013.html; the full text is also available as a PDF file.
Abstract:
Models without an explicit time dependence, called singular models, are widely used for fitting the distribution of temperatures at which water droplets freeze. In 1950 Levine developed the original
singular model. His key assumption was that each droplet contained many nucleation sites, and that freezing occurred due to the nucleation site with the highest freezing temperature. The fact that
freezing occurs due to the maximum value out of a large number of nucleation temperatures, means that we can apply the results of what is called extreme-value statistics. This is the statistics of
the extreme, i.e. maximum or minimum, value of a large number of random variables. Here we use the results of extreme-value statistics to show that we can generalise Levine's model to produce the
most general singular model possible. We show that when a singular model is a good approximation, the distribution of freezing temperatures should always be given by what is called the generalised
extreme-value distribution. In addition, we also show that the distribution of freezing temperatures for droplets of one size, can be used to make predictions for the scaling of the median nucleation
temperature with droplet size, and vice versa. | {"url":"http://www.atmos-chem-phys.net/13/7215/2013/acp-13-7215-2013.xml","timestamp":"2014-04-18T09:19:00Z","content_type":null,"content_length":"10915","record_id":"<urn:uuid:c7d7a1a7-5157-4a12-acf8-20c1e4feff56>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flipping arrows in coBurger King
by Edward Z. Yang
Category theory crash course for the working Haskell programmer.
A frequent question that comes up when discussing the dual data structures—most frequently comonad—is “What does the co- mean?” The snippy category theory answer is: “Because you flip the arrows
around.” This is confusing, because if you look at one variant of the monad and comonad typeclasses:
class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a
class Comonad w where
(=>>) :: w a -> (w a -> b) -> w b
extract :: w a -> a
there are a lot of “arrows”, and only a few of them flipped (specifically, the arrow inside the second argument of the >>= and =>> functions, and the arrow in return/extract). This article will make
precise what it means to “flip arrows” and use the “dual category”, even if you don’t know a lick of category theory.
Notation. There will be several diagrams in this article. You can read any node (aka object) as a Haskell type, and any solid arrow (aka morphism) as a Haskell function between those two types.
(There will be arrows of different colors to distinguish concepts.) So if I have f :: Int -> Bool, I will draw that as:
Functors. The Functor typeclass is familiar to the working Haskell programmer:
class Functor t where
fmap :: (a -> b) -> (t a -> t b)
While the typeclass seems to imply that there is only one part to an instance of Functor, the implementation of fmap, there is another, almost trivial part: t is now a type function of kind * -> *:
it takes a type (a) and outputs a new type (unimaginatively named t a). So we can represent it by this diagram:
The arrows are colored differently for a good reason: they are indicating completely different things (and just happen to be on the same diagram). While the red arrow represents a concrete function a
-> b (the first argument of fmap), the dashed blue arrow does not claim that a function a -> t a exists: it’s simply indicating how the functor maps from one type to another. It could be a type with
no legal values! We could also posit the existence of a function of that type; in that case, we would have a pointed functor:
class Functor f => Pointed f where
pure :: a -> f a -- aka return
But for our purposes, such a function (or is it?) won’t be interesting until we get to monads.
You may have heard of the Functor law, an equality that all Functors should satisfy. Here it is in textual form:
fmap (g . f) == fmap g . fmap f
and here it is in pictorial form:
One might imagine the diagram as a giant if..then statement: if f, g and g . f exist, then fmap f, fmap g and fmap (g . f) exist (just apply fmap to them!), and they happen to compose in the same way.
Now, it so happens that if we have f :: a -> b and g :: b -> c, g . f is also guaranteed to exist, so we didn’t really need to draw the arrow either. This is such an implicit notion of function
composition, so we will take a moment and ask: why is that?
It turns out that when I draw a diagram of red arrows, I’m drawing what mathematicians call a category with objects and arrows. The last few diagrams have been drawn in what is called the category
Hask, which has objects as Haskell types and arrows as Haskell functions. The definition of a category builds in arrow composition and identities:
class Category (~>) where
(.) :: (b ~> c) -> (a ~> b) -> (a ~> c)
id :: a ~> a
(you can mentally substitute ~> with -> for Hask) and there are also laws that make arrow composition associative. Most relevantly, the categorical arrows are precisely the arrows you flip when you
talk about a dual category.
“Great!” you say, “Does that mean we’re done?” Unfortunately, not quite yet. It is true that the comonad is a monad for an opposite (or dual) category, but it is not the category Hask. (This is not the
category you are looking for!) Still, we’ve spent all this time getting comfortable drawing diagrams in Hask, and it would be a shame to not put this to good use. Thus, we are going to see an example
of the dual category of Hask.
Contravariant functors. You may have heard fmap described as a function that “lifts” functions in to a functorial context: this “functorial context” is actually just another category. (To actually
mathematically show this, we'd need to show that the functor laws are sufficient to preserve the category laws.) For normal functors, this category is just Hask (actually a subcategory of it, since
only types t _ qualify as objects). For contravariant functors, this category is Hask^op.
Any function f :: a -> b in Hask becomes a function contramap f :: f b -> f a in a contravariant functor:
class ContraFunctor t where
contramap :: (a -> b) -> t b -> t a
Here is the corresponding diagram:
Notice that we’ve partitioned the diagram into two sections: one in Hask, and one in Hask^op, and notice how the function arrows (red) flip going from one category to the other, while the functor
arrows (blue) have not flipped. t a is still a contravariant functor value.
You might be scratching your head and wondering: is there any instance of contramap that we could actually use? In fact, there is a very simple one that follows directly from our diagram:
newtype ContraF a b = ContraF (b -> a)
instance ContraFunctor (ContraF a) where
contramap g (ContraF f) = ContraF (f . g)
Understanding this instance is not too important for the rest of this article, but interested readers should compare it to the functor on normal functions. Beyond the newtype wrapping and unwrapping,
there is only one change.
Natural transformations. I’m going to give away the punchline: in the case of comonads, the arrows you are looking for are natural transformations. What are natural transformations? What kind of
category has natural transformations as arrows? In Haskell, natural transformations are roughly polymorphic functions: they’re mappings defined on functors. We’ll notate them in gray, and also
introduce some new notation, since we will be handling multiple Functors: subscripts indicate types: fmap_t is fmap :: (a -> b) -> (t a -> t b) and η_a is η :: t a -> s a.
Let’s review the three types of arrows flying around. The red arrows are functions; they are morphisms in the category Hask. The blue arrows indicate a functor mapping between types; they also
operate on functions to produce more functions (also in the category Hask: this makes them endofunctors). The gray arrows are also functions, so they can be viewed as morphisms in the category Hask,
but sets of gray arrows across all types (objects) in Hask from one functor to another collectively form a natural transformation (two components of a natural transformation are depicted in the
diagram). A single blue arrow is not a functor; a single gray arrow is not a natural transformation. Rather, appropriately typed collections of them are functors and natural transformations.
Because f seems to be cluttering up the diagram, we could easily omit it:
Monad. Here is the typeclass, to refresh your memory:
class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a
You may have heard of an alternate way to define the Monad typeclass:
class Functor m => Monad m where
join :: m (m a) -> m a
return :: a -> m a
m >>= f = join (fmap f m)
join m = m >>= id
join is far more rooted in category theory (indeed, it defines the natural transformation that is the infamous binary operation that makes monads monoids), and you should convince yourself that
either join or >>= will get the job done.
Suppose that we know nothing about what monad we’re dealing with, only that it is a monad. What sort of types might we see?
Curiously enough, I’ve colored the arrows here as natural transformations, not red, as we have been doing for undistinguished functions in Hask. But where are the functors? m a is trivial: any Monad
is also a valid instance of functor. a seems like a plain value, but it can also be treated as Identity a, that is, a inside the identity functor:
newtype Identity a = Identity a
instance Functor Identity where
fmap f (Identity x) = Identity (f x)
and Monad m => m (m a) is just a functor two skins deep:
fmap2 f m = fmap (fmap f) m
or, in point-free style:
fmap2 = fmap . fmap
(Each fmap embeds the function one functor deeper.) We can precisely notate the fact that these functors are composed with something like (cribbed from sigfpe):
type (f :<*> g) x = f (g x)
in which case m :<*> m is a functor.
While those diagrams stem directly from the definition of a monad, there are also important monad laws, which we can also draw diagrams for. I’ll draw just the monad identity laws with f:
return_a indicates return :: a -> m a, and join_a indicates join :: m (m a) -> m a. Here are the rest with f removed:
You can interpret light blue text as “fresh”—it is the new “layer” created (or compressed) by the natural transformation. The first diagram indicates the identity law (traditionally return x >>= f ==
f x and f >>= return == f); the second indicates associativity law (traditionally (m >>= f) >>= g == m >>= (\x -> f x >>= g)). The diagrams are equivalent to this code:
join . return == id == join . fmap return
join . join == join . fmap join
Comonads. Monads inhabit the category of endofunctors Hask -> Hask. The category of endofunctors has endofunctors as objects and (no surprise) natural transformations as arrows. So when we make a
comonad, we flip the natural transformations. There are two of them: join and return.
Here is the type class:
class Functor w => Comonad w where
cojoin :: w a -> w (w a)
coreturn :: w a -> a
These have been renamed duplicate and extract, respectively.
We can also flip the natural transformation arrows to get our Comonad laws:
extract . duplicate == id == duplicate . extract
duplicate . duplicate == fmap duplicate . duplicate
Next time. While it is perfectly reasonable to derive <<= from cojoin and coreturn, some readers may feel cheated, for I have never actually discussed the functions from monad that Haskell
programmers deal with on a regular basis: I just changed around the definitions until it was obvious what arrows to flip. So some time in the future, I hope to draw some diagrams for Kleisli arrows
and show what that is about: in particular, why >=> and <=< are called Kleisli composition.
Apology. It being three in the morning, I’ve managed to omit all of the formal definitions and proofs! I am a very bad mathematician for doing so. Hopefully, after reading this, you will go to the
Wikipedia articles on each of these topics and find their descriptions penetrable!
Postscript. You might be interested in this follow-up post about duality in simpler settings than monads/comonads.
20 Responses to “Flipping arrows in coBurger King”
1. The definition of a Monad in Haskell is a bit muddled by the properties of Hask as a category having arbitrary exponentials.
If you instead consider (=<<) :: (a -> m b) -> (m a -> m b)
Then you can view that as mapping a Kleisli arrow (a -> m b) onto an arrow from (m a -> m b).
Then the dual notion extend :: (w b -> a) -> w b -> w a is reversing the arrows referenced by that mapping. The ‘middle ->’ isn’t logically in the category in question.
But what about (>>=)? Well, by flipping the arguments, now we’ve broken one of those arrows in half, and muddled the existence of the mapping in exchange for obtaining a slightly more pipelined
style…. as long as the monad in question is a monad over Hask, rather than some arbitrary Category. Alas, this notion (and parameter order) does not generalize.
I wrote a slightly longer explanation a year or two back for the first meeting of the Boston Haskell user group. Slides are available here: http://comonad.com/haskell/Comonads_1.pdf
2. [you may need to clean up the formatting on that last post your blog doesn't like flipped (>>=).]
3. That “instance Comonad w where” should probably be “class”.
4. Ben, thanks, fixed.
Edward, beaten me to the punch as usual. :-) That’s quite a convincing way to think about it, although there is perhaps a slight discomfort at the fact that the connection between your a -> m b
to m a -> m b doesn’t smell much like the Kleisli category as described by Wikipedia.
5. Well, the Kleisli category defined by Wikipedia defines composition in terms of what we call (>=>).
I was just identifying an arrow of the form ‘a -> m b’ as an arrow in the Kleisli category for m, not tying to the notion of the Kleisli category beyond that.
The only real relation to the Kleisli category is that:
(f <=< g) a = f =<< g a
[feel free to reformat, i'm not sure how to get <'s consistently into your blog and can't edit my posts to experiment]
6. Thanks, your post is a good companion to the Haskell wikibook about categories:
7. Oh wow, the Wikibooks article has much more material than I remember it having. They do look like good pairs.
8. Why do you bring up contravariant functors? They seem out of place in this article.
9. Patai, I originally was hoping there was such a thing as a “cofunctor”, to parallel a “comonad”, but apparently using this name is slightly ambiguous (some people will tell you cofunctors are
contravariant functors, some people will tell you those people are confused.) Contravariant functors do have dual categories in them, so they are relevant to the general discussion of duality.
10. If flipping the arrow in an object makes a co-object, does flipping them twice get back the original object? That would mean coconuts are the same thing as nuts, cocoa puffs are just puffs, etc.
11. Yes! More specifically, the dual of a dual category is the original category.
12. When I ask “What is …?” I don’t expect nor particularly logically understand a “Because …” answer!
Also, you wrote :
“The blue arrows are indicate a functor mapping between types… The gray arrows (…) form a natural transformation” immediately followed by
“Blue arrows are not functors; gray arrows are not natural transformations” !!
I believe that last sentence is wrong.
And still don’t get the difference between a natural transformation and an endofunctor…
13. Yes, Axio, you’re right. I should have said a single particular arrow does not indicate a functor. I’ve updated the article accordingly.
The difference between a natural transformation f a -> g a and an endofunctor f a -> f a is two-fold: one is that a natural transformation spans different functors (it transforms from one functor
to another) while an endofunctor must map back to itself; the second is that a natural transformation implies the existence of a function f a -> g a; an endofunctor makes no such implication
about f a -> f a; the function it defines is slightly different: (f a -> f b) -> f a -> f b (these definitions may seem like mere specializations of id, but if f is known, they don’t necessarily
have to be). A little more concisely, natural transformations are functions on values, functors are functions on types. Since most (endo)functors you deal with on a daily basis are also monads,
it may seem that the natural transformation a -> f a is fairly certain to exist, but it is by no means a given. What is a function with type “b -> Either a b“? There is no way we can generate a
value of type a with just that function.
14. [...] but it will take a while before these discussions appear in the F# community. See this article in case you want to know more about [...]
15. Probably would be better to add subscript m to the fmap f at the diagram, which illustrates monads identity rules. Thank you very much for this article.
16. Regarding the same diagram, I guess the m in the resulting m a is supposed to be colored light blue. It is probably also worth mentioning the naturality conditions:
fmap f . return = return . f
fmap f . join = join . fmap (fmap f)
17. Sorry, which diagram are you referring to?
18. This diagram: http://blog.ezyang.com/img/coburger/full-monad-id-law.png
Here is the modified one: http://i.imgur.com/ULEA5.png
19. My category theory is weak, but an elementary result in linear algebra is that the dual of a (finite-dimensional) dual vector space is naturally isomorphic to the original vector space. But this
is duality of objects, not of categories. Is there any connection with the result on categories that you mention?
20. Tobin: I’m not very familiar with advanced linear algebra, but if the objects are changing, then likely what you have is something like a contravariant functor as opposed to a dualization. | {"url":"http://blog.ezyang.com/2010/07/flipping-arrows-in-coburger-king/comment-page-1/","timestamp":"2014-04-17T12:28:56Z","content_type":null,"content_length":"47019","record_id":"<urn:uuid:fffcd96e-1bb9-479c-a66c-41600297fc95>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is exp(rA) = (exp(A))^r for real r and A in a Banach algebra?
Is $e^{(rA)} = (e^{A})^r$ when $r \in \mathbb{R}$ and $A$ is an element of a Banach algebra?
Clearly if $n$ is an integer, then
$e^{nA} = e^{A+A+\cdots +A} = e^{A}e^{A}\cdots e^{A} = (e^{A})^n$,
where the second equality follows from the Baker–Hausdorff lemma and the fact that $[A,A]=0$. On the other hand, I think the equality is not generally true when $r \in \mathbb{C}$. But what about the real case?
Many thanks for your thoughts!
banach-spaces banach-algebras oa.operator-algebras linear-algebra
3 How do you define the non-integer powers for arbitrary Banach algebra elements? – fedja Dec 24 '10 at 15:35
1 Using the binomial series, I guess? – Mariano Suárez-Alvarez♦ Dec 24 '10 at 15:36
1 The conventional definition is to use the limit of (1 + x/n)^n as n goes to infinity. (There's a slightly different variant that's more likely to converge, but I can't remember what it is.) –
arsmath Dec 24 '10 at 16:09
2 The Taylor expansion of the logarithm diverges more often than not. There is no reason to expect it to converge for $e^A$. Of course, if everything is defined as a series and you can meaningfully
plug series into series justifying all passages to the limits, the identity holds. On the other hand, if the right hand side just fails to exist or has multiple values like non-integer powers of
complex numbers (the first example to look at), the question hardly makes sense. – fedja Dec 24 '10 at 16:59
2 You are asking why $(f\circ g)(A)= f(g(A))$. The answer is, it is just a matter of functional calculus; check any textbook on the topic. – Pietro Majer Dec 24 '10 at 17:38
1 Answer
A particular case of a Banach algebra is the one-dimensional case $\mathbb C$, right?
And according to Pietro, if you can prove this in that case, you get the general case by saying "functional calculus". (At least for $A$ where it applies, such as normal elements.)
But is it true for $\mathbb C$?
Take $A = 2\pi i$ and $r = 1/2$: then $\exp(A) = \exp(2\pi i) = 1$, so $(\exp(A))^{1/2} = 1^{1/2} = 1$, while $\exp(rA) = \exp(\pi i) = -1$.
Oh dear -- you're of course correct, the claim is not generally true. Thanks! – soulphysics Dec 26 '10 at 17:21
Villa Park, IL Math Tutor
Find a Villa Park, IL Math Tutor
...I am happy to work with anyone who is willing to work and am very patient with students as they try to understand new concepts. I have been in the Glenview area the past four years and have
tutored high schoolers from Notre Dame, New Trier, GBS, GBN, Deerfield High, Loyola Academy and Woodlands ...
20 Subjects: including logic, geometry, trigonometry, precalculus
...I also have experience programming in SAS for larger data analysis problems. Besides stats, I have a lot of knowledge about subjects from algebra through calculus. No matter what subject I'm
tutoring, I share my love of learning with the student.
5 Subjects: including algebra 1, prealgebra, statistics, probability
My name is Anthony L. and I recently graduated from the University of Miami (FL). I have a degree in Biology with a minor in Psychology. My specialties include biology, psychology, chemistry,
history, and math. I have experience writing scientific papers and was recently published in the Journal of Comparative Biochemistry and Physiology.
27 Subjects: including algebra 2, chemistry, prealgebra, reading
...Through high school and while obtaining my undergraduate degree in Biology, I worked full time. This required a great deal of self control, effective time management, and diversified learning
strategies. My background is in Biology, Chemistry and Math.
36 Subjects: including algebra 1, algebra 2, biology, chemistry
...I love statistics and science and am familiar with college admissions processes and earned a 5.5/6 on the writing section of the GRE. Thus, I am good at deconstructing subjects and topics into
their constituent parts and delivering information in an effective and digestible way. Furthermore, I am familiar with working with students of all age groups and academic and intellectual
35 Subjects: including SPSS, statistics, geometry, probability
general power rule
May 28th 2011, 05:06 PM
general power rule
Using the general power rule to find the derivative of the function
$f(x) = (x^{2}-9)^{\frac{2}{3}}$
In this case, do they mean the chain rule? If not, I'm not sure how to expand the function.
May 28th 2011, 05:13 PM
In order to use the chain rule, you will have to invoke the power rule as well. You have: (something)^(a power).
May 28th 2011, 05:23 PM
The general power rule states that for any function of the form $[g(x)]^n$, the derivative will be: $\frac{d}{dx}[g(x)]^n = n[g(x)]^{n-1}\,g'(x)$.
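Applied to the function in the question (a worked example added for completeness), take $g(x) = x^{2}-9$ and $n = \frac{2}{3}$, so that
$f'(x) = \frac{2}{3}(x^{2}-9)^{-\frac{1}{3}}\cdot 2x = \frac{4x}{3(x^{2}-9)^{\frac{1}{3}}}.$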
Proc. Natl. Acad. Sci. USA Vol. 94, pp. 8370–8377, August 1997
Colloquium Paper

This paper was presented at a colloquium entitled "Carbon Dioxide and Climate Change," organized by Charles D. Keeling, held Nov. 13–15, 1995, at the National Academy of Sciences, Irvine, CA.

Dependence of global temperatures on atmospheric CO2 and solar irradiance

DAVID J. THOMSON
Mathematics of Communications Research Department, Bell Laboratories, Murray Hill, NJ 07974

ABSTRACT Changes in global average temperatures and of the seasonal cycle are strongly coupled to the concentration of atmospheric CO2. I
estimate transfer functions from changes in atmospheric CO2 and from changes in solar irradiance to hemispheric temperatures that have been corrected for the effects of precession. They show that
changes from CO2 over the last century are about three times larger than those from changes in solar irradiance. The increase in global average temperature during the last century is at least 20
times the SD of the residual temperature series left when the effects of CO2 and changes in solar irradiance are subtracted. Although it is generally conceded that the average surface temperature of
the Earth has increased by about 0.6°C during the last century, there is little agreement on the cause of this warming. The primary cause of this disagreement is uncertainty about the relative
contribution to this warming of atmospheric CO2 and changes in solar irradiance. The purpose of this paper is to describe some data analysis that may help to discriminate between solar and CO2
effects, and to give estimates of the relative magnitudes of these two effects. The difference between analysis such as those described in ref. 1 and those here is that this data analysis is based on
deseasonalized temperature time series where the effects of precession were included. * The detection of precession in instrumental temperature series and the necessity of including it when removing
the annual cycle from temperature data was demonstrated in ref. 2. I also describe some of the statistical peculiarities and limitations of these data series and suggest where better data are needed.
The paper begins with a discussion of the data being analyzed and, to delineate the issues, presents some ordinary least-squares fits of the temperature data with atmospheric CO2 concentration and
changes in solar irradiance. I next discuss the mathematical methods used and describe some statistical properties of the various data series. This is followed by some simple estimates of the
transfer functions between fossil fuel consumption and atmospheric CO2 levels and from CO2 levels and changes in solar irradiance to temperature. The penultimate section summarizes recent findings on
destabilization of the annual cycle, followed by conclusions. In these analyses I do not directly take into account the effects of stratospheric aerosols nor various internal feedback mechanisms such
as cloud cover. Stratospheric aerosols are generally believed ( 3 , 4 ) to result in cooling, so their omission makes the estimates for sensitivity conservative. Similarly, while a detailed
understanding of internal feedback mechanisms, such as water vapor, is necessary to predict temperature changes from first principles, one may use measurements to assess the general climate response
to forcing without having to consider the internal feedbacks explicitly, much as one can design a filter using operational amplifiers without detailed consideration of the quantum mechanics, or even
current flow, in the individual transistors in the amplifiers. The estimates given here depend neither on general circulation models nor on the assumptions that underlie such models. The transfer
functions are estimated directly from observations of temperature and CO2 and, for solar irradiance, a physically based proxy data series.

Data Sources and Preparation

For measurements of surface air
temperature I use the low-pass filtered Jones–Wigley Land plus Marine data ( 5 ) shown in figures 9 and 10 of ref. 2. The bandwidth of the low-pass filter was 0.5 cycle/year so the Nyquist rate is
one sample per year. These series differ from the ones usually seen in two important aspects: First, I replaced the standard “deseasonalizing” procedure used to produce temperature anomaly series
with a projection filter separation into low-pass, annual, and high-frequency components so, implicitly, the usual “box-car” running-mean smoother has been replaced with a low-pass filter. Second,
instead of assuming a constant amplitude climatology with a period of 1 calendar year, I allowed the phase of the annual components to track the observed phase. Thus, the significant changes in the
annual cycle caused by the changing balance between direct insolation, periodic at one cycle per tropical year, and transported heat, periodic at one cycle per anomalistic year, has been removed from
the data, eliminating the spurious monthly trends associated with temperature anomaly series ( 2 , 6 ). Note that, although the time-resolution of these series is one year, the series is as smooth as
that given by the usual “boxcar” procedure at decade-scale resolution. The “Global” temperature series used here is the arithmetic average of the Northern and Southern Hemisphere series. The Northern
Hemisphere, Southern Hemisphere, and Global series are denoted by Tn(t), Ts(t), and Tg(t) respectively, with t the Gregorian calendar date. In this paper I use the average temperature over the 65
years from 1854 to 1918 as a base reference. There are several reasons to prefer this period to the usual 1951–1980 reference period. First, based on the sunspot record, solar activity in the
1854–1918 period appears to be representative of the 245-year available record and 65 years covers most of the 88-year Gleisberg cycle ( 7 ). Second, median fossil fuel consumption in this period was
only about 6% of the current rate and the © 1997 by The National Academy of Sciences 0027-8424/97/948370-8$2.00/0 PNAS is available online at http://www.pnas.org. * Briefly, the annual temperature
cycle at a given latitude consists of direct insolation plus transport effects. The direct insolation components vary with the tropical year, the time from equinox to equinox, 365.2422 days, while
the net radiation received by earth and hence the mean transport vary as the anomalistic year, the time from perihelion to perihelion, or 365.2596 days. Precession, loosely defined, is the change in
the longitude of perihelion measured from the vernal equinox.
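(As a rough order-of-magnitude check, added for illustration: the two year lengths in the footnote differ by about 0.0174 days, so the longitude of perihelion drifts through a full seasonal cycle in roughly 1 / (1/365.2422 − 1/365.2596) ≈ 7.7 × 10^6 days, i.e. about 21,000 years, the familiar climatic precession period.)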
concentration of atmospheric CO2 increased by 4.7% between 1854 and 1918 ( 8 ), only one-quarter of the 1951–1980 rate. Third, chloroflourocarbons and similar ozone-depleting chemicals were not yet
in use, and as there is considerable evidence ( 9 , 10 ) for stratospheric control of climate, it is desirable to use a reference period before changes in stratospheric chemistry by
chloroflourocarbons began. Fourth, the major erratic changes in the timing of the seasons described in ref. 2 appear to have begun about 1920, so the early reference period is largely free of their
effects. Opposed to these considerations are poorer spatial coverage of the temperature series and the presence of several major volcanic events (11) during the earlier period. There are no solar
irradiance measurements without the confounding influence of the atmosphere in the early period and only a few near the end of the 1951–1980 period. Keeling’s CO2 measurements began in 1958, so the
1951–1980 period has better CO2 data than the earlier period. The average and median values of the 1854–1918 data are 171.9 mK and 180.0 mK (Northern Hemisphere) and 150.7 mK and 159.0 mK (Southern
Hemisphere) below the 1951–1980 reference. Because the SD (s) of the raw Tg(t) series during the 65-year reference period is 57.7 mK, the 1990 temperature of 635 mK above the base temperature is, at
a minimum, an 11 s increase. The CO2 data are as listed in table A.6 of ref. 8 up to 1955, and the Mauna Loa averages through 1994 from the Oak Ridge National Laboratory data set ndp001r5 ( 12 )
since 1955. The early data has been interpolated from irregular and inhomogeneous observations and, statistically, is too smooth. I denote this series by C(t), and log2CO2(t) by CL(t). [Radiation
theory predicts that temperature is a logarithmic function of atmospheric CO2 concentration ( 13 ). Use of the base 2 logarithm of the CO2 data gives coefficients that directly describe the effect of
doubling CO2.] I have also used Marland’s fossil-fuel production series ( 14 ), denoted F(t), as an adjunct to the CO2 measurements. This series starts in 1860, later than the temperature data, but
the early data appear to be better than the corresponding CO2 data. The combination of these data with the CO2 data is described later. For solar irradiance I used the Foucal–Lean ( 15 )
reconstruction. This series, L(t), is independent of the temperature data, matches direct solar irradiance measurements since they have been available, and is a reconstruction from other solar
measurements before then. I emphasize, however, that this series is a proxy, not direct measurements, so that inferences drawn from it may not have the reliability of inferences from direct
observations. Changes in the period of the sunspot cycle were suggested ( 16 ) as a solar irradiance proxy. Although there is a high apparent correlation between this proxy and a heavily smoothed
version of the Hansen temperature series ( 17 ), this correlation is not reliable. The jackknife variance (see below) of the low-frequency coherence between the temperature and the sunspot period is
large, more than four times that expected under Gaussian theory. Because of this, lack of a physical basis for the proxy, and the failure of other statistical tests described in ref. 2, this proxy is
not used here.

Least-Squares Fits

Both CO2 and changes in solar irradiance have been invoked to explain the observed increase in global temperature. Fig. 1 shows the filtered Jones–Wigley global
temperature series, Tg(t) together with a least-squares fit to it using C(t) plus a constant, and a second least-squares fit to Tg(t) using L(t) plus a constant. Each of these fits explains more than
75% of the variance over the full 1854–1990 interval (the residual SDs are 74.2 mK and 83.8 mK, respectively). Including both C(t) and L(t) simultaneously as explanatory variables further reduces the
residual SD to 62.5 mK. Nearly 87% of the variance is explained, and both partial F statistics ( 18 ) are highly significant, so neither variable can be dropped. Examining these residuals further,
one finds that their autocorrelation at a 1-year lag is 0.914 so that the conditions required for the Gauss–Markov theorem (the basis of least-squares) to be valid ( 19 ) are not satisfied. This is
not simply a technical mathematical quibble, but indicates serious fitting problems whose existence may be verified by repeating the fitting process over different time intervals and observing the
change in the estimated coefficients. For example, a least-squares fit to Tg(t) with just the proxy solar irradiance L(t) and a constant the interval 1854–1918 gives a negative temperature response
to increasing proxy solar irradiance. Consequently, one must use statistical time-series methods that are reliable when serially correlated residuals are present.

FIG. 1. The Jones–Wigley global temperature series, low-pass filtered to a time resolution of 1 year, with independent least-squares fits to solar irradiance changes and to atmospheric CO2.

Mathematical Preliminaries

Many of the problems in the analysis of climate data require new methods for time-series analysis. Most of the commonly used time-series methods were derived under the assumption of
stationarity, that is, their derivations assume that the statistics of the observed process are independent of the choice of time origin. The climate seems to be nonstationary; over a few years the
annual cycle of the seasons makes cyclostationary (or periodically correlated) processes a better model than stationary processes, implying that Fourier transforms of frequencies offset by multiples
of one cycle per year will be correlated. On longer time scales, evolution of the Earth’s orbit obviously results in vast shifts in the climate, and I recently have shown ( 2 ) that these changes in
the orbit must be considered in the analysis of instrumental data series as well. Here these effects are accounted for in the way the data were filtered. Because both solar and CO2 effects alter the
seasonal cycle as well as low frequencies, I have not removed common modulation terms at 0 and 1 cycle/year. Finally, anthropogenic changes are altering the composition of the atmosphere and the
climate system. Thus, analysis methods that contain an implicit assumption of stationarity should be used with caution.
Similarly, confidence intervals for parameters derived from time series are often based on an assumed Gaussian distribution. It appears that this assumption is invalid for climate data: there are
outliers, predominantly from volcanic effects, and, surprisingly, some quantities appear to be more stable than one would expect from a Gaussian distribution. Even discounting the intrinsic
complications of the climate and solar data, the available data is far from homogeneous. There are, for example, no direct measurements of solar irradiance, and only a few direct CO2 measurements
before 1958. In addition, the spatial coverage of the temperature averages has changed markedly. Fluctuations in the fossil-fuel production series ( 14 ) appear to be roughly proportional to the
level of production, so the absolute errors in the 19th century were lower than they are at present. The methods used in this paper are a mixture of time-domain and frequency-domain techniques. The
multiple-window ( 20 , 21 , 22 ) method of spectrum estimation is used as an indirect basis for most estimates. This basic method is formally extended in several ways: Repeating the analysis with
each window omitted in turn, or jackknifing over windows ( 23 ), gives estimates of the variances of spectra and coherences that do not depend on a Gaussian assumption. Quadratic-inverse theory ( 24
) tests for correlations between closely spaced frequencies, and a variant ( 25 ) allows checking for nonstationary effects. The singular value decomposition of a multiple-window spectrogram ( 21 )
provides insight into some of the stationarity issues. In addition, a variety of prewhitening and robust estimation methods typified by those in ( 26 , 27 ) have been used throughout. The
Jones–Wigley data, however, has been so carefully prepared that robust procedures detect little more than volcanic activity and do not significantly change the results. Space limitations preclude
including many of these checks and comparisons, and I have omitted those that give expected results. The stochastic structure of both climate and solar data is amazingly complicated when compared to
that commonly encountered in textbook examples or even most engineering problems. Because analysis methods for nonstationary time series do not appear to be well enough developed to cope with these
series, nor the stochastic structure of the data well enough understood, I use methods that allow simple models for the trends and forcings while leaving nearly stationary residuals.

Stochastic Properties of the Data Series

Because one cannot reliably estimate the relationships between random series without understanding their internal structure, I began this study with a brief examination
of the properties of the individual data series. Power spectra of the various series have the usual “red” characteristics of much geophysical data and are not shown. Most of the high-energy,
low-frequency part of the spectrum is a result of the trends under consideration, and the stochastic structure is more obvious if the data are prewhitened by filtering. Because the spectra appear
similar, I computed the geometric mean of the spectra of Tg(t), L(t), and CL(t), Fourier transformed this average spectrum to give an autocorrelation sequence, then computed a fourth-order
autoregressive prewhitening filter from the autocorrelations. I then use this AR-4 filter on all the data series and recompute the spectra. The spectra have been computed with a multiple-window ( 20
) estimate typically using a time-bandwidth product of 6.0 and the 10 lowest-order windows so the spectrum estimates have a nominal x-square distribution with 20 degrees of freedom. Their 5% and 95%
confidence intervals have been determined by jackknifing over windows ( 23 ). The jackknife method is known analytically to give good performance over a range of distributions, to be reasonably
distribution free, and, empirically, to be sensitive both to unresolved structure in the spectrum and to nonstationarity. The jackknife variances of the spectrum of the prewhitened temperature series
are, on average, about 87% of that expected under Gaussian theory ( 28 ). In contrast, the jackknife variance of the spectrum of L(t), Fig. 2, is nearly 10 times that expected near the 22-year Hale
cycle and considerably lower than expected around the 11-year sunspot cycle. In a stationary Gaussian series such excursions would be extremely rare. Examining the cause for the low jackknife
variance, the quadratic-inverse test for unresolved spectral details ( 24 ), Fig. 3 , for the Northern Hemisphere is lower than expected except at low frequencies. (The geometric mean, across
frequencies, is 3.1, compared to an expected value of 4, and the estimate is below the expected value over much of the frequency range.) This implies that the sampling variations within the analysis
bandwidth in the estimated spectrum are smaller than one would expect in a Gaussian random process with a constant spectrum. From this, I infer that much of the apparently random structure in the
temperature data is not random but, more probably, a consequence of either deterministic forcing or internal oscillations. Because of the similarity between the statistics of the solar and
temperature data, and the stationary temperature residuals (described below), solar forcing is a more likely explanation than internal oscillations. The unresolved structure test in both hemispheres
have local maxima near the 22-year Hale cycle and significant maxima at low frequencies. Fig. 4 is a plot of an estimate of the frequency derivative d/df In{S(f)} [strictly, the relative
quadratic-inverse coefficients, b1(f)/S(f) in the notation of ref. 24] for the two hemispheric temperature series and L(t). The higher-order coefficients have similar characteristics, but lower
amplitudes. The similarity of the low-frequency features in the temperature and irradiance data lend credence to many of the suggested solar-climate relations and also imply that extracting the
statistical details of these relationships will be complicated and difficult. Turning to the low-frequency maximum in Fig. 3, narrower bandwidth spectrum estimates hint at unusual activity at low
frequencies in both Tn and Ts. The harmonic F-test shows moderate evidence for a periodic component with an estimated period of 180 years. The estimated period is longer than the length of the data
series, and while the nominal significance level is 99.7%, this is not much higher than 1 – 1/N = 0.993, the level where one expects at least one false detection of a line in noise. However, because
the ±1 – s confidence interval on the period of 153 to 220 years includes the 208-year Suess period, the signal is detected in both hemispheres, and the magnitude-squared coherence between them is
about 0.6, the possibility that there is a complicated natural signal in the data near a period of 200 years should not be dismissed.

FIG. 2. The estimated jackknife variance of the natural log of the spectrum of solar irradiance changes. The dashed curve is the expected value and varies slowly as the adaptive weighting process changes the degrees-of-freedom of the spectrum estimate. Note the high peak near the 22-year Hale cycle and the low variance near the 11-year cycle, 0.09 cycle/year.

The similarity of the filtered amplitude estimates, 4.4 mK and 4.0 mK, and phases, −135.0° and −138.5° (referenced to 1900.0), in the Northern and Southern Hemispheres, respectively, further support this
conclusion. Analysis of long series of 14C data (21) shows significant components at periods of 230.6, 208.4, and 199.4 years, so because the frequency difference between the latter two periods
corresponds to 4,600 years, one cannot expect these components to be resolved in the 137-year series, nor the single line F-test to work reliably.

FIG. 3. Quadratic-inverse tests for unresolved spectral details for the prewhitened Northern and Southern Hemisphere temperature series. The distribution of the test statistics is approximately χ² with 4 degrees of freedom, and the dotted lines show the expected value and 95% levels. Note that the Northern Hemisphere estimate is lower than expected over much of the frequency range.

A possible alternative explanation to deterministic forcing as the cause of the low
variances of the temperature spectra is nonstationarity, and this was tested for using the methods in ref. 25. This test shows less variability in time than expected at periods longer than 8 years.
Thus, near the 11-year solar cycle, both the tests for unresolved structure and stationarity show more stability than one would expect in a random series. Examination of the nonstationary
quadratic-inverse coefficients, am(f), shows that the a2(f)s, approximately, the second time-derivatives of the spectra † are reasonably significant and similar in both hemispheres. Looking at
details of the spectrum of the solar irradiance series, L(t), the predominant feature is the character of the jackknife variance ( Fig. 2 ). There is significant excess variability near 22-year
periods and a paucity around 11-year periods. The unresolved structure test is uniformly higher than expected out to a frequency of about 0.13 cycle/year, that is, until the high-frequency edge of
the 11-year band, followed by a low region. The nonstationary quadratic-inverse coefficient a1(f), roughly the time-derivative of the spectrum, or S(f) in both hemispheres shows decreasing power near
22-year periods and increasing power in the 7- to 11-year period band. The singular value decomposition of the log-spectrogram described in ref. 21 suggests mild nonstationarity in Tn(t), Ts(t), and
L(t) concentrated about the 104-year Suess period, and possibly related frequencies. The phases of the annual cycle of the temperature data shows similar characteristics. Note that this 104-year
period is not a simple additive term, but a periodic modulation of the stochastic structure of the process. FIG. 4. Stationary quadratic-inverse coefficients, b1(f)/S(f), for the Northern and
Southern Hemispheres and solar irradiance. The dashed lines near ±0.4 are one SD from the mean for a Gaussian process with a white spectrum. Note the close tracking between all three series at low
frequencies and of the temperature series generally. I emphasize that the statistical peculiarities just described are those of the individual data series; they contrast markedly with the statistics
of the temperature residuals to be described. To the extent that the climate has a linear response to forcing, nonstationarity in the forcing should appear as a similar nonstationarity in the
temperature series so, internal oscillations aside, removing the effects of such forcing should leave stationary residuals. The estimate of the spectrum of the log2 CO2(t) data, shown in Fig. 5 .
shows two important features: first, the range of the spectrum, seven decades, is larger than that of either the temperature or irradiance data, and second, the jackknife variance estimate of ln Ŝ(f)
has a maximum of 12.06 at periods near 62 years, compared to an expected value of about 0.13, or 90 times expected. Thus the 5% to 95% confidence region for the CO2 spectrum estimated by the
jackknife method covers a range of about 104, instead of the factor of 2 that would be expected from Gaussian theory. This uncertainty in the spectrum of CO2 variations carries through to the
coherences and transfer function estimates and, consequently, standard fre FIG. 5. An estimate of the spectrum of the CO2 data from 1854–1990 with 5% and 95% confidence intervals determined by
jackknifing. The large uncertainty in this estimate precludes making reliable frequency-domain estimates of the transfer functions. † In a nonstationary process the spectrum S(f1, f2) is a function
of two frequencies and several representations are useful. (Stationary processes are much simpler because the spectrum is concentrated on the line f1 = f2 and, consequently, a function of a single
frequency.) A one-dimensional Fourier transform of S(f1, f2) taking the variable f1 – f2 into to gives a dynamic spectrum D(f, to) where the frequency f = (f1 + f2)/2. The “time derivatives” of the
spectrum referred to are expansion coefficients of lnD(f, t) and, approximately, of the form ∂/∂t lnD(f, t).
quency-domain estimates of transfer functions are not presented here.

Temperature Response Models and Transfer Function Estimates

I attempted to assess the simultaneous effects of solar variability
and greenhouse gases by estimating transfer functions. Because both effects have been advanced as plausible explanations for the increasing temperature, one should expect that changes in solar
irradiance and greenhouse gas concentrations should appear coherent as well. It has been shown ( 29 ) that, considered as a pair, changes in temperature and CO2 were coherent, with changes in
temperature leading those in CO2, instead of vice-versa, as popularly supposed. Marland’s fossilfuel production series, in contrast, leads both the temperature and CO2 data. Changes in solar
irradiance also lead those in both CO2 and temperature. In addition to these complicated relationships, there are other difficulties to consider; the physically distributed and multicomponent nature
of the problem, combined with spatially and temporally discrete data, makes the appropriate form of the transfer functions obscure. The roughly periodic character of the solar cycle makes lead-lag
relations ambiguous. In an attempt to untangle these relationships, I tried several different forms of estimates; the most successful of these is described below. Here I define “successful” as low
prediction error, few free parameters, and physically realistic implications. For example, I consider a negative low-frequency response to either increases in solar irradiance or CO2 unacceptable.
Transfer functions implying noncausal response to changes in solar irradiance imply either that the transfer function is unacceptable, or that there is a problem with the assumed irradiance. I also
require that the estimates be reasonably insensitive to linear filtering operations applied to the raw data, which eliminates many of the possible models. The rationale for this requirement is
two-fold: first, if there is a linear relationship between the input and output of a system, estimates of the transfer function should not change significantly when the same linear filter is applied
to both input and output, and second, residuals in the series are autocorrelated, so if the filtering reduces the residual autocorrelation, the requirements of the Gauss–Markov theorem are more
nearly satisfied. The estimated transfer functions are clearly not unique, but other acceptable estimates with similar prediction errors have similar implications. A simple model for the atmospheric
CO2 concentration at time t is [1] where the coefficient ζ on the previous concentration allows for exchange and sequestration by the oceans and biosphere, β is the part of the carbon present in
fossil fuels that contributes to atmospheric CO2, and γ allows for the temperature forcing described in ref. 29. Doing a robust fit with the more accurate Keeling data weighted a factor of 6 higher
than the pre-1958 data gives implying that the atmosphere acts, as expected, as an integrator. The estimated coefficients are , somewhat lower than the estimate in ref. 8 or direct estimates between
CO2 and the integrated fossil fuel record. This is offset by the temperature feedback, GT/K. Replacing the Tg(t) term with L(t) gives a slightly poorer fit with a coefficient ≈ 0.38 GT/(W/m2). In the
following paragraphs, I describe some simple time-domain estimates of the transfer functions. If one takes the simplest conceptual model [2] and substitutes Eq. 1 for C(t) one obtains, approximately,
the equation [3] with suitable definitions of the constants, and F(t) incorporated into CL(t). This can be generalized to the class of models with suitable choices of A, P, and Q. When A = 0 the
autoregressive feedback terms are absent, and the model is that of a direct finite impulse response filter from L(t) and C(t) to temperature. In particular the models A, P, Q = 0, 1, 0 are the direct
least-squares fits, mentioned earlier, to L(t) only; 0, 0, 1 to C(t) only; and 0, 1, 1 to both simultaneously. I choose the interval 1854–1965 to estimate the coefficients for the particular model
and use the 25-year interval, 1966– 1990, for validation. Reconsider the direct least-squares estimate model 0, 1, 1. The coefficients obtained by fitting from 1854–1965 are ŝ0 = 0.1115, ĉ0 = 1.984.
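(The displayed equations did not survive the scan. Read from the definitions in the surrounding text, Eqs. 1–4 are presumably of the general form — a reconstruction, with the exact lags and constant terms uncertain:

  [1]  C(t) ≈ ζ C(t−1) + β F(t) + γ T(t)
  [2]  T(t) ≈ s0 L(t) + c0 CL(t) + const
  [3]  T(t) ≈ a T(t−1) + s L(t) + c CL(t) + const
  [4]  ∆rT(t) ≈ s0 ∆rL(t) + c0 ∆rCL(t) + const,  with ∆rX(t) = X(t) − r X(t−1).

If Eq. 3 has this form, its zero-frequency, long-term response is s/(1 − a) to irradiance and c/(1 − a) to CL, which is broadly consistent with the low-frequency responses quoted below.)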
The mean-square error in the fitting period is 6,695 (mK)2, or an rms error of 81.8 mK. This fit is not particularly impressive but, nonetheless, it suggests that changes in solar irradiance lead
those in temperature by a few years. Using the data for the 25 years 1966–1990, with the coefficients found for the earlier period gives a prediction error of 39,820 (mK)2, for an rms error of 199.5
mK. Because the autocorrelations of the residuals in all these cases are large, the assumptions of the Gauss–Markov theorem are violated, and ordinary least-squares does not give reliable results. To
see how badly, replace Eq. 2 with [4] where ∆rT(t) = T(t) − rT(t − 1) and similarly with L and C. Using the 1-year autocorrelation of the temperature data, r = 0.881 for all three series gives with
corresponding fitting and prediction errors of 1,369 (mK)2 and 2,436 (mK)2 respectively. Because the innovations variance of the temperature data [5] where B(v) ≈ 1.07 is a bias correction depending
on the degrees-of-freedom (30) of the estimate and Ŝ(f) the estimated spectrum at frequency f, is about 1,303 (mK)2 and the variance of ∆rT is 1,835 (mK)2 the errors in Eq. 4 are far less correlated
than are those of Eq. 2, so the conditions of the Gauss–Markov theorem are closer to being satisfied. Note that using this simplest of filters on the data increases the estimated sensitivity to CL by
57% and changes the sign of ŝ0. Because a decrease in average temperature as a response to an increase in solar irradiance is implausible, I reject such simple models. Direct frequency-domain
estimates of the transfer functions appear to vary too rapidly to be reliably estimated from the available data as their Fourier transforms are significantly noncausal. For example, the impulse
response estimated between Tn(t) and L(t) is noncausal with a width about the length of a solar cycle. This is an artifact of the near periodicity of the solar cycle as, without other knowledge or
assumptions, one cannot assign the response to a periodic process to a given time without ambiguities of N periods. Similar problems are common in transfer function estimation problems ranging from
magnetotellurics ( 31 ) to the design of telephone equalizers. The estimated impulse response between CL(t) and T(t) is causal and oscillatory with about a 20-year duration. If this
characteristic persists with improved early CO2 data, it implies that large transient responses may be possible. In addition to thermal inertia, the observed feedback from temperature to CO2 also
generates persistence. To model this together with the effects of CO2 and solar irradiance changes on the temperature record, I fit the filtered Jones–Wigley series as a 1, 1, 1 model, explicitly Eq.
3. Here the feedback term aT(t - 1) results in a rational (as opposed to polynomial) approximation of the transfer functions allowing more rapid low-frequency changes in them than possible with
simple models, it includes some persistence, and generally permits simple feedback effects. Direct least-squares estimates of the coefficients and some performance measures are given in Table 1 . To
assess the model’s predictive power I start at 1966 and iterate forward to t = 1990 using the observed solar irradiance and CO2 data. The predictions were run in two ways: in the first, the
temperature data from the previous year was used; in the second, the model was started with the 1965 temperatures and the prediction of the previous year’s temperature used from there on. The 1966–
errors in Table 1 refer to the first case. The second is not as accurate, but is not ridiculously inaccurate either; for example, the 25-year prediction of the 1990 Northern Hemisphere temperature is
622 mK, not too dissimilar to the 648 mK observed. The data, fit, and extrapolations obtained with these models are shown in Fig. 6 (Northern Hemisphere), and Fig. 7 (Southern Hemisphere). The
general 25-year prediction agrees reasonably well with the observed temperature in level, if not in exact details. It might be noted that the autoregressive nature of the feedback term in these
models causes them to be biased slightly toward zero. Examination of the residuals from this model shows them to be mundane and the exotic characteristics of the individual series missing. First, the
residuals show no significant departures from normality; their range is ±2.35 s for the global series, there is no detectable skewness, and the standardized fourth moment is 2.65 for the 1854–1965
data, and similar in the validation interval. The Southern Hemisphere residuals are slightly long-tailed. The extreme residual, -107 mK, is a result of the Krakatau eruption in 1883. Thus the errors
of 30.4 mK and 26.9 mK obtained for the global temperature with a model having only four parameters imply that the 1990 temperature of 636 mK is about 21 SD above the 1854–1918 base level measured on
the scale of the residual SD. The feedback term, aT(t - 1) in Eq. 3, reduces this to about a 9.3s increase above what persistence would normally generate. The probability of a Gaussian random
variable being 9s from the mean is about 10^-19. Second, the range of the spectrum of the residuals from this model is only 18, just slightly larger than expected for a white-noise process. This spectrum hints at a weak 4.6-year echo-like term.

Table 1. The estimated parameters for Eq. 3 for the three temperature series considered

  Temperature series     a       s       c      -1965, mK   1966-, mK
  Northern Hemisphere    0.8965  0.0106  0.200  37.1        34.7
  Southern Hemisphere    0.8122  0.0033  0.426  32.4        25.1
  Global                 0.8817  0.0069  0.255  30.4        26.9

The column headed -1965 gives the rms errors in the modeling period, 1854-1965. The column headed 1966- gives the rms error for a 1-year prediction using the parameters estimated from 1854-1965 and averaged over the 25-year validation interval, 1966-1990.

FIG. 6. The Northern Hemisphere temperature data, solid line, and fits. The model coefficients were determined using data before 1965 with the fit shown by the dashed line. After 1965 the fit includes CL(t), L(t) and the previous year's temperature. The smoother line, short dashes, shows the prediction starting with the 1965 temperature using only CL(t) and L(t).

Third, the residuals were tested for stationarity using the quadratic-inverse methods of ref. 25 and, generally, see Fig. 8, the test statistics are slightly less than their expected value, suggesting that the residuals do not contain
any serious nonstationary terms. There are, nonetheless, some features of interest.

FIG. 7. The Southern Hemisphere temperature and fits. The curves are as in Fig. 6.

The "time derivative" of the spectrum a1(f)/a0(f) is mostly negative, implying that the residual spectrum is decreasing as a function of time. This probably reflects nothing more than the improvements in spatial coverage and data quality that have
occurred over the course of the record. The tests for both hemispheres show significant nonstationarity at periods of 4 years, perhaps an artifact of leap years in the original monthly averaging
process. In addition, the Southern Hemisphere test has a significant peak in the El-Nino band, and the Northern Hemisphere data are quite nonstationary at the quasibiennial oscillation frequency. At
low frequencies, the test statistics from both hemispheres are well below their expected value. Thus the tests for stationarity reveal no evidence for long-term unexplained variability. The transfer
functions for this model are for solar irradiance, and similarly for CL. The long-term response is given by the low-frequency response; evaluated at f = 0 with the coefficients estimated above one
obtains 0.109K/(W/m2) for solar irradiance changes and 2.075 K/(2 × CO2). To simplify the comparison of the different units one may examine the low-frequency variance explained for the two components
and = 6,067 (mK)2 in the Northern Hemisphere, with corresponding figures of 28,320 (mK)2 and 317 (mK)2 in the Southern Hemisphere. Thus CO2 explains over 3 times as much variance as changes in solar irradiance in the Northern Hemisphere, over 100 times as much in the Southern. Using simply the change in CO2 concentration from 287.7 in 1854 to 352.7 parts per million in 1990, the low-frequency transfer function predicts an increase of 0.61 K for CO2, while an irradiance change of 2.4 W/m2 (from 1365.7 to 1368.1 W/m2) results in an estimated temperature change of 0.26 K. Given
the great improvement that the addition of the feedback term makes on the direct least-squares estimates, it seems reasonable to investigate related models. A suite of such models, incorporating a
range of time delays and prewhitening filters, was tested. None of these simple models significantly improve on Eq. 3.

Changes in the Timing of the Annual Cycle

I presented evidence ( 2 ) that the
frequency of the annual temperature cycle at many locations is closer to the anomalistic year than it is to the tropical year, and, in addition, the character of the phase of the annual cycle began
to change rapidly near the middle of this century with the average change in phase coherent with the changes in atmospheric concentration of CO2. I also suggested that the cause of the rapid phase
shift was the changing balance between direct insolation at a given latitude and transported energy. Recall that, at temperate latitudes, the periodicity of insolation is the tropical year and peaks
at the summer solstice. The total radiation received by the earth, on the other hand, has the periodicity of the anomalistic year and is maximum at perihelion, currently in early January. In the
Northern Hemisphere the phases of the two components are thus nearly opposed and, consequently, the phase of the resultant is easily perturbed. The Sable Island example shown in ref. 6 demonstrates
that the proposed mechanism can explain the rapidly changing phase seen at individual stations. The geographic distribution of these phase changes is available in ( 32 , 33 ), and confirmatory
changes are directly observable in the CO2 record ( 34 ). The observed phase changes must be taken seriously as an indicator of climate change caused by increasing concentrations of atmospheric CO2.
First, the increasing variability of the phases and the change in distribution (Fig. 4 of ref. 2) is so large that the probability of such an occurrence in a stationary process is nearly zero.
Second, because the effects of observational errors and similar noise-like effects on the data will be uncorrelated between the low frequencies where the increase in temperature is observed and the
annual cycle, one must consider them as independent evidence for a changing climate.

Discussion and Conclusions

More effort must be made to obtain improved CO2 data for the 19th and early 20th
centuries if possible. Similarly, extensive effort to develop better, or independent, solar irradiance proxies is desirable. Analysis of the low-frequency and annual parts of the temperature records
yield at least three largely independent indicators of climate change; the change in distribution of individual station phase trends about the mean, the change in average phase, and the increase in
average temperature. All are unprecedented in the instrumental record. The probability of the observed changes occurring through natural, as opposed to anthropogenic, causes appears to be exceedingly
small. First, although a major ice age causes a larger temperature change than has happened so far in response to CO2, the temperature increase that occurred between 1920 and 1990 would have taken
more than 2,000 years even at the historically rapid rate of the last deglaciation. During deglaciation, transient warming in the North Atlantic after a Heinrich event was faster ( 35 ), but there is
no evidence for a Heinrich event during the last few centuries. Second, internal climate oscillations, shifts in storm tracks, and the like obviously can change local and regional climate by much
larger amounts than observed in the hemispheric averages. These regional fluctuations appear to be superimposed on the general global trend. Additionally, such internal oscillations produce warm and cool regions that interchange over decade to century time scales ( 32 , 36 ), but whose effects largely cancel in hemispheric averages.

FIG. 8. Quadratic-inverse tests for stationarity of the Northern Hemisphere (solid) and Southern Hemisphere (dashed) temperature residuals from model Eq. 3. The test statistics have an approximate χ² distribution with 5 degrees of freedom and have been slightly smoothed for plotting. In both hemispheres the test statistic at a 4-year period exceeds the 98% level.

Third, while there is reasonable evidence for
greater climate variability during the Holocene than has been observed during the period where instrumental data are available ( 37 , 38 ), there is no evidence in the statistics that a major
unidentified source of natural variation is present during the instrumental record. Such a source would have to mimic, perversely, either solar irradiance changes or the changes in atmospheric CO2 to
cause the observed temperature changes and to be mistaken for them. Similarly, while mindful of the many caveats on data quality, spatial coverage, etc. given in ref. 1, the appearance of possible
leap-year artifacts at a level below 10 mK in the residuals suggests that the data cannot be as untrustworthy as is occasionally implied. The residual temperature variation remaining once the known
effects of precession, solar irradiance changes, and atmospheric CO2 concentration are removed bound unknown effects to about 200 mK peak-to peak in the hemispheric average series during the last
century. Consider the null hypothesis that the observed temperature fluctuations and atmospheric CO2 levels are independent: The probability that the hemispheric temperatures would fluctuate purely
by chance in such a way to produce the observed coherences with CO2 is exceedingly low. Given that the records encompass more than a century, the probability is so low that one would not expect to
see such an event by chance during the age of the earth. The probability of the observed coherence between atmospheric CO2 and changes in the timing of the seasons shown in figure 13 of ref. 2
without a causal connection is similarly low. Consequently one must strongly reject the hypothesis of independence between atmospheric CO2 and temperature. The alternative hypothesis, that increasing
levels of atmospheric CO2 plus a slight change in solar irradiance are causally responsible for the observed changes in temperature, in contrast, results in test statistics that are ordinary in every
way. Because major changes in climate as a response to human use of fossil fuels have been predicted for more than a century ( 39 , 40 ), their detection can hardly be considered surprising. From
examining the data records I conclude: Changes in solar irradiance explain perhaps one-quarter of the increase in temperature during the last century. The changes in atmospheric CO2 concentration
resulting from human consumption of fossil fuels cause most of both the temperature increase and the changes in the seasonal cycle.

1. Houghton, J. T., Meira Filho, L. G., Callender, B. A., Harris, N., Kattenberg, A. & Maskell, K. (1996) Climate Change 1995 (Cambridge Univ. Press, Cambridge, U.K.).
2. Thomson, D. J. (1995) Science 268, 59–68.
3. Santer, B. D., Taylor, K. E., Wigley, T. M. L., Penner, J. E., Jones, P. D. & Cubasch, U. (1995) PCMDI Rep. 21 (Lawrence Livermore Nat. Lab., Livermore, CA).
4. Schwartz, S. E. & Andreae, M. O. (1996) Science 272, 1121–1122.
5. Jones, P. D. (1994) J. Climate 7, 1794–1802.
6. Karl, T. R., Jones, P. D., Knight, R. W., White, O. R., Mende, W., Beer, J. & Thomson, D. J. (1996) Science 271, 1879–1883.
7. Sonett, C. P., Giampapa, M. S. & Matthews, M. S., eds (1991) The Sun in Time (Univ. of Arizona Press, Tucson).
8. Keeling, C. D., Bacastow, R. B., Carter, A. F., Piper, S. C., Whorf, T. P., Heimann, M., Mook, W. G. & Roeloffzen, H. (1989) in Aspects of Climate Variability in the Pacific and the Western Americas, ed. Peterson, D. H. (American Geophysical Union, Washington, DC), pp. 165–363.
9. Robock, A. (1996) Science 272, 972–973.
10. Haigh, J. D. (1996) Science 272, 981–986.
11. Bradley, R. S. & Jones, P. D. (1992) Climate Since AD 1500 (Routledge, London).
12. Boden, T. A., Kaiser, D. P., Sepanski, R. J. & Stoss, F. W. (1994) Trends '93: A Compendium of Data on Global Change, Pub. ORNL/CDIAC-65 (Oak Ridge National Laboratory, Oak Ridge, TN).
13. Manabe, S. & Wetherald, R. T. (1967) J. Atmos. Sci. 24, 241–259.
14. Marland, G., Andres, R. J. & Boden, T. A. (1994) Trends '93: A Compendium of Data on Global Change, Pub. ORNL/CDIAC-65 (Oak Ridge National Laboratory, Oak Ridge, TN), pp. 505–584.
15. Foukal, P. & Lean, J. (1990) Science 247, 347–357.
16. Friis-Christensen, E. & Lassen, K. (1991) Science 254, 698–700.
17. Hansen, J. & Lebedeff, S. (1987) J. Geophys. Res. 92, 13345–13372.
18. Draper, N. R. & Smith, H. (1981) Applied Regression Analysis (Wiley, New York).
19. Seber, G. A. F. (1977) Linear Regression Analysis (Wiley, New York).
20. Thomson, D. J. (1982) Proc. IEEE 70, 1055–1096.
21. Thomson, D. J. (1990) Philos. Trans. R. Soc. London A 330, 601–616.
22. Percival, D. B. & Walden, A. T. (1993) Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques (Cambridge Univ. Press, Cambridge, U.K.).
23. Thomson, D. J. & Chave, A. D. (1991) in Advances in Spectrum Analysis, ed. Haykin, S. (Prentice–Hall, Englewood Cliffs, NJ), Chapter 2.
24. Thomson, D. J. (1990) Philos. Trans. R. Soc. London A 332, 539–597.
25. Thomson, D. J. (1993) Proc. SPIE 2027, 236–244.
26. Thomson, D. J. (1977) Bell System Tech. J. 56, 1769–1815 and 1983–2005.
27. Kleiner, B., Martin, R. D. & Thomson, D. J. (1979) J. R. Stat. Soc. B-41, 313–351.
28. Thomson, D. J. (1994) Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Adelaide, Australia 6, 73–76.
29. Kuo, C., Lindberg, C. R. & Thomson, D. J. (1990) Nature (London) 343, 709–714.
30. Jones, R. H. (1976) J. Am. Stat. Assoc. 71, 386–388.
31. Larsen, J. C. (1992) Philos. Trans. R. Soc. London A 338, 169–236.
32. Mann, M. E. & Park, J. (1994) J. Geophys. Res. 99, 25819–25833.
33. Mann, M. E., Park, J. & Bradley, R. S. (1996) Geophys. Res. Lett. 23, 1111–1114.
34. Keeling, C. D., Chin, J. F. S. & Whorf, T. P. (1996) Nature (London) 382, 146–149.
35. Bond, G., Broecker, W., Johnsen, S., McManus, J., Labeyrie, L., Jouzel, J. & Bonani, G. (1993) Nature (London) 365, 143–147.
36. Mann, M. E. & Park, J. (1995) Nature (London) 378, 266–270.
37. Overpeck, J. T. (1996) Science 271, 1820–1821.
38. O'Brien, S. R., Mayewski, P. A., Meeker, L. D., Meese, D. A., Twickler, M. S. & Whitlow, S. I. (1995) Science 270, 1962–1964.
39. Arrhenius, S. (1896) Philos. Mag. 41, 237–276.
40. Uppenbrink, J. (1996) Science 272, 1122.
Colloquium Paper: Thomson
…duce warm and cool regions that interchange over decade to century time scales (32, 36), but whose effects largely cancel in hemispheric averages. Third, while there is reasonable evidence for greater climate variability during the Holocene than has been observed during the period where instrumental data are available (37, 38), there is no evidence in the statistics that a major unidentified source of natural variation is present during the instrumental record. Such a source would have to mimic, perversely, either solar irradiance changes or the changes in atmospheric CO2 to cause the observed temperature changes and to be mistaken for them. Similarly, while mindful of the many caveats on data quality, spatial coverage, etc. given in ref. 1, the appearance of possible leap-year artifacts at a level below 10 mK in the residuals suggests that the data cannot be as untrustworthy as is occasionally implied. The residual temperature variation remaining once the known effects of precession, solar irradiance changes, and atmospheric CO2 concentration are removed bounds unknown effects to about 200 mK peak-to-peak in the hemispheric average series during the last century. Consider the null hypothesis that the observed temperature fluctuations and atmospheric CO2 levels are independent: the probability that the hemispheric temperatures would fluctuate purely by chance in such a way as to produce the observed coherences with CO2 is exceedingly low. Given that the records encompass more than a century, the probability is so low that one would not expect to see such an event by chance during the age of the earth. The probability of the observed coherence between atmospheric CO2 and changes in the timing of the seasons shown in figure 13 of ref. 2 without a causal connection is similarly low. Consequently one must strongly reject the hypothesis of independence between atmospheric CO2 and temperature. The alternative hypothesis, that increasing levels of atmospheric CO2 plus a slight change in solar irradiance are causally responsible for the observed changes in temperature, in contrast, results in test statistics that are ordinary in every way. Because major changes in climate as a response to human use of fossil fuels have been predicted for more than a century (39, 40), their detection can hardly be considered surprising. From examining the data records I conclude: changes in solar irradiance explain perhaps one-quarter of the increase in temperature during the last century. The changes in atmospheric CO2 concentration resulting from human consumption of fossil fuels cause most of both the temperature increase and the changes in the seasonal cycle.
1. Houghton, J. T., Meira Filho, L. G., Callander, B. A., Harris, N., Kattenberg, A. & Maskell, K. (1996) Climate Change 1995 (Cambridge Univ. Press, Cambridge, U.K.). 2. Thomson, D. J. (1995) Science 268, 59–68.
| {"url":"http://www.nap.edu/openbook.php?record_id=6238&page=98","timestamp":"2014-04-20T04:56:32Z","content_type":null,"content_length":"670587","record_id":"<urn:uuid:f03b43db-6286-423b-aca5-7aa2a6a1a7da>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Numbers Divisible by 6
Numbers are evenly divisible by 6 if they are evenly divisible by both 2 AND 3. Even numbers are always evenly divisible by 2. Numbers are evenly divisible by 3 if the sum of all the individual
digits is evenly divisible by 3. For example, the sum of the digits for the number 3627 is 18, which is evenly divisible by 3 but 3627 is an odd number so the number 3627 is not evenly divisible
by 6. | {"url":"http://www.321know.com/fra72_x7.htm","timestamp":"2014-04-19T07:00:18Z","content_type":null,"content_length":"5298","record_id":"<urn:uuid:bc6e8f10-242d-4054-b3f0-88e242e09009>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00296-ip-10-147-4-33.ec2.internal.warc.gz"} |
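As a quick illustration of the divisibility-by-6 rule above (not part of the original page), here is a small C function that applies the rule exactly as described: check that the number is even, then check that its digit sum is divisible by 3.

#include <stdbool.h>

/* Divisibility-by-6 test using the rule above:
   the number must be even, and the sum of its decimal digits
   must be evenly divisible by 3. */
bool divisible_by_6(unsigned long n)
{
    if (n % 2 != 0)              /* odd numbers are never divisible by 6 */
        return false;
    unsigned long digit_sum = 0;
    for (unsigned long m = n; m > 0; m /= 10)
        digit_sum += m % 10;     /* add the last decimal digit */
    return digit_sum % 3 == 0;
}

For 3627 the digit sum is 18, so the divisibility-by-3 test passes, but the number is odd, so the function returns false, matching the worked example above.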
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 185948, 19 pages
Research Article
On the Sets of Convergence for Sequences of the q-Bernstein Polynomials with q > 1
Department of Mathematics, Atilim University, Incek, 06836 Ankara, Turkey
Received 27 March 2012; Accepted 19 June 2012
Academic Editor: Ngai-Ching Wong
Copyright © 2012 Sofiya Ostrovska and Ahmet Yaşar Özban. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
The aim of this paper is to present new results related to the convergence of the sequence of the -Bernstein polynomials in the case , where is a continuous function on . It is shown that the
polynomials converge to uniformly on the time scale , and that this result is sharp in the sense that the sequence may be divergent for all . Further, the impossibility of the uniform approximation
for the Weierstrass-type functions is established. Throughout the paper, the results are illustrated by numerical examples.
1. Introduction
Let , , and . Then, the q-Bernstein polynomial of is defined by where with being the q-binomial coefficients given by and being the -Pochhammer symbol: Here, for any nonnegative integer , are the
q-factorials with being the q-integer given by We use the notation from [[1], Ch. 10].
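The displayed formulas in the definition above did not survive extraction. For reference, the standard definition used throughout this literature (see, e.g., [2, 15]) reads as follows; the exact equation numbering of the paper is not reproduced here.
\[
B_{n,q}(f;x)=\sum_{k=0}^{n} f\!\left(\frac{[k]_q}{[n]_q}\right)p_{nk}(q;x),
\qquad
p_{nk}(q;x)=\binom{n}{k}_q x^{k}\prod_{s=0}^{n-k-1}\left(1-q^{s}x\right),\quad 0\le x\le 1,
\]
where $[k]_q:=1+q+\cdots+q^{k-1}$ (with $[0]_q:=0$) are the q-integers, $[k]_q!:=[1]_q[2]_q\cdots[k]_q$ (with $[0]_q!:=1$) are the q-factorials, and $\binom{n}{k}_q:=\dfrac{[n]_q!}{[k]_q!\,[n-k]_q!}$ are the q-binomial coefficients.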
The polynomials , called the -Bernstein basic polynomials, form the -Bernstein basis in the linear space of polynomials of degree at most .
Although, for , the -Bernstein polynomial turns into the classical Bernstein polynomial : conventionally, the name “q-Bernstein polynomials” is reserved for the case .
Based on the -Bernstein polynomials, the -Bernstein operator on is given by A detailed review of the results on the -Bernstein polynomials along with an extensive bibliography has been provided in [2
]. In this field, new results concerning the properties of the -Bernstein polynomials and/or their various generalizations are still coming out (see, e.g., papers [3–8], all of which have appeared
after [2]).
The popularity of the -Bernstein polynomials is attributed to the fact that they are closely related to the -binomial and the -deformed Poisson probability distributions (cf. [9]). The -binomial
distribution plays an important role in the -boson theory, providing a -deformation for the quantum harmonic formalism. More specifically, it has been used to construct the binomial state for the
-boson. Meanwhile, the -deformed Poisson distribution, which is the limit form of -binomial one, defines the energy distribution in a -analogue of the coherent state [10]. Another motivation for this
study is that various estimates related to the natural sequences of functions and operators in functional spaces, convergence theorems, and estimates for the rates of convergence are of decisive
nature in the modern functional analysis and its applications (see, e.g., [4, 11, 12]).
The -Bernstein polynomials retain some of the properties of the classical Bernstein polynomials. For example, they possess the end-point interpolation property: and leave the linear functions
invariant: In addition, the -Bernstein basic polynomials (1.2) satisfy the identity Furthermore, the -Bernstein polynomials admit a representation via the divided differences given by (3.3), as well
as demonstrate the saturation phenomenon (see [2, 7, 13]).
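The three displayed identities referred to in the preceding paragraph were likewise lost in extraction; in the notation introduced above they are the standard facts
\[
B_{n,q}(f;0)=f(0),\qquad B_{n,q}(f;1)=f(1)\qquad\text{(end-point interpolation)},
\]
\[
B_{n,q}(at+b;x)=ax+b\qquad\text{(invariance of linear functions)},
\]
\[
\sum_{k=0}^{n}p_{nk}(q;x)=1\qquad\text{(the basic polynomials form a partition of unity)}.
\]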
Despite the similarities such as those indicated above, the convergence properties of the -Bernstein polynomials for are essentially different from those of the classical ones. What is more, the
cases and in terms of convergence are not similar to each other, as shown in [14, 15]. This absence of similarity is brought about by the fact that, for ,β β are positive linear operators on ,
whereas for , no positivity occurs. In addition, the case is aggravated by the rather irregular behavior of basic polynomials (1.2), which, in this case, combine the fast increase in magnitude with
the sign oscillations. For a detailed examination of this situation, see [16], where, in particular, it has been shown that the norm increases rather rapidly in both and . Namely, This puts serious
obstacles in the analysis of the convergence for . The challenge has inspired some papers by a number of authors dealing with the convergence of -Bernstein polynomials in the case (see, e.g., [7, 17
]). However, there are still many open problems related to the behavior of the -Bernstein polynomials with (see the list of open problems in [2]).
In this paper, it is shown that the time scale is the “minimal” set of convergence for the -Bernstein polynomials of continuous functions with , in the sense that every sequence converges
uniformly on . Moreover, it is proved that is the only set of convergence for some continuous functions.
The paper is organized as follows. In Section 2, we present results concerning the convergence of the -Bernstein polynomials on the time scale . Section 3 is devoted to the -Bernstein polynomials of
the Weierstrass-type functions. Some of the results throughout the paper are also illustrated using numerical examples.
2. The Convergence of the -Bernstein Polynomials on
In this paper, is considered fixed. It has been shown in [15], that, if a function is analytic in , then it is uniformly approximated by its -Bernstein polynomials on any compact set in , and, in
particular, on .
In this study, attention is focused on the -Bernstein polynomials of “bad” functions, that is, functions which do not have an analytic continuation from to the unit disc. In general, such
functions are not approximated by their -Bernstein polynomials on . Moreover, their -Bernstein polynomials may tend to infinity at some points of (a simple example has been provided in [15]). Here,
it is proved that the divergence of may occur everywhere outside of , which is a “minimal” set of convergence.
However, in spite of this negative information, it will be shown that, for any , the sequence of its -Bernstein polynomials converges uniformly on the time scale .
The next statement generalizing Lemma 1 of [15] can be regarded as a discrete analogue of the Popoviciu Theorem.
Theorem 2.1. Let . Then where is the modulus of continuity of on .
Corollary 2.2. If , then that is, converges uniformly to on the time scale .
Proof. The proof is rather straightforward. First, notice that for all , while by virtue of (1.11). Then for any . Plain calculations (see, e.g., [13], formula (2.7)) show that which implies that
Then, one can immediately derive the result by choosing .
Remark 2.3. In [7], Wu has shown that if , then for any , one has: The condition cannot be left out completely, as the following example shows.
Example 2.4. Consider a function satisfying where . Then, for large enough, we have where is a positive constant independent from .
As it has been already mentioned, the behavior of the -Bernstein polynomials in the case outside of the time scale may be rather unpredictable. The next theorem shows that the sequence may be
divergent for all .
Theorem 2.5. Let ,β β . If , then
Proof. The -Bernstein polynomial of is Since for one has it follows that where Obviously, As such, the theorem will be proved if it is shown that As , it suffices to prove that where The fact that
and the inequality lead to Now, since and the series is convergent, the Lebesgues dominated convergence theorem implies where . Moreover, How about the sum of the series in (2.21)? Consider the
following two cases.
Case 1. .
Let us show that ,β β for . Since for it follows that Notice that (2.24) holds for any . In addition, if , then The function in the r.h.s. is monotone decreasing in , so Thus, is a strictly
decreasing sequence. Since all are strictly positive, it follows that
Case 2. .
Estimate (2.24) implies that . To prove the theorem, it suffices to show that when . Denoting , , we write the following: We are left to show that is strictly positive for the specified values of and
. First of all, notice that , while , and are strictly decreasing in on . Hence, for , The function is strictly decreasing on . Indeed, and, for , whence for .
Similarly, for , Applying the same reasoning as done for , it can be shown that is strictly decreasing on . Since , it follows that for all .
Finally, for , we obtain Obviously, is a strictly decreasing function for all , whence, for , which completes the proof.
Remark 2.6. It can be seen from the proof that, the statement of the theorem is true for any and .
An illustrative example is supplied below.
Example 2.7. Let . The graphs of and for and are exhibited in Figure 1. Similarly, Figure 2 represents the graphs of and for and over the subintervals and , respectively. In addition, Table 1
presents the values of the error function with at some points . The points are taken both in and in . It can be observed from Table 1 that, while at the points , the values of the error function are
close to 0, at the points , the values of the error function may be very large in magnitude.
Remark 2.8. Table 1 also shows that while the error function changes its sign for different values of , for , its values are negative, that is, for . This is a particular case of the following
Theorem 2.9. Let . If is convex (concave) on , then for all .
Proof. It can be readily seen from (1.10) and (1.11) that while . By virtue of Jensen's inequality, if is convex on , then whenever and , there holds the following: for all satisfying . Setting and
observing that the required result is derived.
Example 2.10. Let The function is concave on and, hence, according to the previous results, as from below for all . To examine the behavior of polynomials for , consider the auxiliary function: Since
for , and whenever , it follows that, for sufficiently large , Plain computations reveal yielding Consequently, for , one obtains Since, by (1.10), , it follows that: For , the limit does not exist.
Additionally, it is not difficult to see that as uniformly on any compact set inside , while on any interval outside of , the function is not approximated by its -Bernstein polynomials. This agrees
with the result from [17], Theorem 2.3. The graphs of and for ,β β and on are given in Figure 3. The values of the error function at some points and at some exemplary points are given in Table 2.
Remark 2.11. Following Charalambides [9], consider a sequence of random variables possessing the distributions given by Let denote a random variable with the -distribution concentrated at . Theorem
2.1 implies that in distribution.
Generally speaking, Theorem 2.1 shows that the -Bernstein polynomials with possess an “interpolation-type” property on . Information on interpolation of functions with nodes on a geometric
progression can be found in, for example, [18] by Schoenberg.
3. On the -Bernstein Polynomials of the Weierstrass-Type Functions
In this section, the -Bernstein polynomials of the functions with “bad” smoothness are considered. Let satisfy the condition: The letter will also denote a 2-periodic continuation of on .
Definition 3.1. Let satisfy . A function is said to be Weierstrass-type if Notice that is continuous if and only if . For and a special choice of and (see, e.g., [19, Section 4]), the classical
Weierstrass continuous nowhere differentiable function is obtained. In [19], one can also find an exhaustive bibliography on this function and similar ones. For , a function analogous to the Van der
Waerden continuous nowhere differentiable function appears.
The aim of this section is to prove the following statement.
Theorem 3.2. If is a Weierstrass-type function, then the sequence of its -Bernstein polynomials is not uniformly bounded on any interval .
Proof. To prove the theorem, the following representation of -Bernstein polynomials (see [15], formulae (6) and (7)) is used: where and denote the divided differences of , that is, When , the
well-known representation for the classical Bernstein polynomials is recovered and the numbers are the eigenvalues of the Bernstein operator, see [20], Chapter 4, Section 4.1 and [21]. The latter
result has been extended to the case in [15].
Clearly, it suffices to consider the case . From (3.3), it follows that and, hence, What remains is to find a lower bound for . Due to (3.1), all terms of the series are nonnegative and, therefore,
Let be chosen in such a way that For , such a choice is possible because, in this case, inequality (3.9) implies that Since the length of the interval is 1, there is a positive integer, say, , such
that . The obvious inequality implies the following: with being a positive constant. Then, for , it follows that where due to (3.1). Consequently, which leads to where is a positive constant and .
Now, assume that is uniformly bounded on , that is, for all . By Markov's Inequality (cf., e.g., [22], Chapter 4, Section 1, pp. 97-98) it follows that This proves the theorem because the latter
estimate contradicts (3.14).
To present an illustrative example, let us denote the th partial sum of the series in (3.2) by , that is: Clearly, the function is an approximation of (3.2) satisfying the error estimate
Example 3.3. Let , and . For , one has . The graphs of and the associated -Bernstein polynomials for ,β β , and on the subintervals and are presented in Figures 4 and 5, respectively.
The authors would like to express their sincere gratitude to Mr. P. Danesh from Atilim University Academic Writing and Advisory Centre for his help in the preparation of the paper.
References
1. G. E. Andrews, R. Askey, and R. Roy, Special Functions, vol. 71, Cambridge University Press, Cambridge, UK, 1999.
2. S. Ostrovska, “The first decade of the $q$-Bernstein polynomials: results and perspectives,” Journal of Mathematical Analysis and Approximation Theory, vol. 2, no. 1, pp. 35–51, 2007.
3. N. Mahmudov, “The moments of $q$-Bernstein operators in the case $0<q<1$,” Numerical Algorithms, vol. 53, no. 4, pp. 439–450, 2010.
4. M. Popov, “Narrow operators (a survey),” in Function Spaces IX, vol. 92, pp. 299–326, Banach Center Publications, 2011.
5. P. Sabancıgil, “Higher order generalization of $q$-Bernstein operators,” Journal of Computational Analysis and Applications, vol. 12, no. 4, pp. 821–827, 2010.
6. H. Wang, “Properties of convergence for $\omega,q$-Bernstein polynomials,” Journal of Mathematical Analysis and Applications, vol. 340, no. 2, pp. 1096–1108, 2008.
7. Z. Wu, “The saturation of convergence on the interval $\left[0,1\right]$ for the $q$-Bernstein polynomials in the case $q>1$,” Journal of Mathematical Analysis and Applications, vol. 357, no. 1, pp. 137–141, 2009.
8. X.-M. Zeng, D. Lin, and L. Li, “A note on approximation properties of $q$-Durrmeyer operators,” Applied Mathematics and Computation, vol. 216, no. 3, pp. 819–821, 2010.
9. C. A. Charalambides, “The $q$-Bernstein basis as a $q$-binomial distribution,” Journal of Statistical Planning and Inference, vol. 140, no. 8, pp. 2184–2190, 2010.
10. S. C. Jing, “The $q$-deformed binomial distribution and its asymptotic behaviour,” Journal of Physics A, vol. 27, no. 2, pp. 493–499, 1994.
11. I. Ya. Novikov and S. B. Stechkin, “Fundamentals of wavelet theory,” Russian Mathematical Surveys, vol. 53, no. 6, pp. 53–128, 1998.
12. M. I. Ostrovskiĭ, “Regularizability of inverse linear operators in Banach spaces with a basis,” Siberian Mathematical Journal, vol. 33, no. 3, pp. 470–476, 1992.
13. V. S. Videnskii, “On some classes of $q$-parametric positive linear operators,” in Selected Topics in Complex Analysis, vol. 158, pp. 213–222, Birkhäuser, Basel, Switzerland, 2005.
14. A. Il'inskii and S. Ostrovska, “Convergence of generalized Bernstein polynomials,” Journal of Approximation Theory, vol. 116, no. 1, pp. 100–112, 2002.
15. S. Ostrovska, “$q$-Bernstein polynomials and their iterates,” Journal of Approximation Theory, vol. 123, no. 2, pp. 232–255, 2003.
16. S. Ostrovska and A. Y. Özban, “The norm estimates of the $q$-Bernstein operators for varying $q>1$,” Computers & Mathematics with Applications, vol. 62, no. 12, pp. 4758–4771, 2011.
17. S. Ostrovska, “On the approximation of analytic functions by the $q$-Bernstein polynomials in the case $q>1$,” Electronic Transactions on Numerical Analysis, vol. 37, pp. 105–112, 2010.
18. I. J. Schoenberg, “On polynomial interpolation at the points of a geometric progression,” Proceedings of the Royal Society of Edinburgh, vol. 90, no. 3-4, pp. 195–207, 1981.
19. A. Pinkus, “Weierstrass and approximation theory,” Journal of Approximation Theory, vol. 107, no. 1, pp. 1–66, 2000.
20. G. G. Lorentz, Bernstein Polynomials, Chelsea, New York, NY, USA, 2nd edition, 1986.
21. S. Cooper and S. Waldron, “The eigenstructure of the Bernstein operator,” Journal of Approximation Theory, vol. 105, no. 1, pp. 133–165, 2000.
22. R. A. DeVore and G. G. Lorentz, Constructive Aapproximation, vol. 303, Springer, Berlin, Germany, 1993. | {"url":"http://www.hindawi.com/journals/aaa/2012/185948/","timestamp":"2014-04-20T04:56:32Z","content_type":null,"content_length":"670587","record_id":"<urn:uuid:f03b43db-6286-423b-aca5-7aa2a6a1a7da>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from Kaustuv Chaudhuri
Date: -- (:)
From: Kaustuv Chaudhuri <kaustuv.chaudhuri@i...>
Subject: Re: [Caml-list] mutable and polymorphism
On Wed, Sep 15, 2010 at 9:38 PM, Radu Grigore <radugrigore@gmail.com> wrote:
> In any case, I'm more interested in an explanation of what happens,
As you are probably aware, ML-style languages have a so-called "value
restriction" on polymorphism, which in its most vanilla form can be
stated simply as: only values can have a polymorphic type. The
let x = e in fun y -> () (1)
is not a value, meaning that it can reduce further, and therefore
the vanilla value restriction says that it cannot have a polymorphic
Now, some very clever people have reasoned that in certain cases the
expression (1) is indistinguishable from a value and by Leibniz's
principle of equality of indistinguishables should therefore be
allowed to have the polymorphic type its equivalent value does.
As an example, one such situation is if e is built from only pure
constructors (i.e., constructors with no mutable fields) and values,
such as:
let x = () in fun y -> ()
One might even say that (1) should be treated as a value when it
is equivalent to the value expression:
fun y -> let x = e in ()
There are a number of scenarios in which OCaml can deduce that a
certain non-value expression can have a polymorphic type. For a
comprehensive list of such situations, you can read Jacques Garrigue's
paper on this topic, which also has a great section on the history of
the value restriction [1]. However, OCaml's implementation is
conservative -- it doesn't cover all cases where this fun/let
permutation doesn't change the denotation.
You can easily discover that one kind of expression that OCaml doesn't
allow for this "value-interpretation" of (1) is if e is a function
application. After all, this expression
let x = infinite_loop () in fun y -> ()
and this:
fun y -> let x = infinite_loop () in ()
are easily distinguished.
That your function is called "ref" instead of "infinite_loop" is
irrelevant. Mutation is a red herring. You would get the same result
for a completely pure expressions for e; for example:
# let z = let x = fst (1, 2) in fun y -> () ;;
val z : '_a -> unit = <fun>
-- Kaustuv
[1] http://caml.inria.fr/pub/papers/garrigue-value_restriction-fiwflp04.ps.gz | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2010/09/86d7dc96fbb79d10de6c6973c23c6e90.en.html","timestamp":"2014-04-17T13:04:22Z","content_type":null,"content_length":"8924","record_id":"<urn:uuid:0c9614c3-bbce-41e8-a747-a34539cd1da2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: What makes a topic unimportant?
Replies: 1 Last Post: Apr 29, 2001 7:49 PM
What makes a topic unimportant?
Posted: Apr 29, 2001 5:27 PM
The issue of what makes a topic important is well worth debate. It is
not a trivial issue and it is hard to make progress.
On the other side of the coin, it is possible to make progress on the
issue of what makes a topic unimportant.
As a first screening, I offer this rule:
If most the students who have been taught a certain topic are likely
to have forgotten that topic shortly after the final, then that topic
is likely to be unimportant. It either needs to be reworked so that
students recognize its importance or it should be dropped.
Partial fractions decomposition, Inverse Secants, convergence at the
endpoints, polar equations of the conics (unless Kepler is in the
course), convergence tests for series of numbers are examples of the
-Jerry Uhl
Jerry Uhl juhl@cm.math.uiuc.edu
Professor of Mathematics, University of Illinois at Urbana-Champaign
Member, Mathematical Sciences Education Board of National Research Council
Calculus&Mathematica, Vector Calculus&Mathematica,
DiffEq&Mathematica, Matrices,Geometry&Mathematica, NetMath
http://www-cm.math.uiuc.edu , http://netmath.math.uiuc.edu, and
"Is it life, I ask, is it even prudence,
To bore thyself and bore the students?"
. . . Johann Wolfgang von Goethe
Date Subject Author
4/29/01 What makes a topic unimportant? Jerry Uhl
4/29/01 Re: What makes a topic unimportant? Osher Doctorow | {"url":"http://mathforum.org/kb/thread.jspa?threadID=223029","timestamp":"2014-04-21T00:17:43Z","content_type":null,"content_length":"18914","record_id":"<urn:uuid:6b5db804-b945-45ab-9df1-9573a1a8056a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00572-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Natural examples
Stephen Fenner fenner at cs.sc.edu
Tue Oct 12 12:28:30 EDT 1999
On Mon, 11 Oct 1999, Joe Shipman wrote:
> Precise version 1.4: What is the most "natural" intermediate degree
> known, where "natural" is defined in terms of Kolmogorov complexity?
> Answer: I believe this would be one of the original two incomparable
> degrees constructed in the Friedberg-Muchnik proof, which is tricky but
> fairly short. But I don't know the following:
> However, if the definition of the set
> depends on a particular basis for computation (universal TM) in such a
> way that different such bases give rise to different degrees, then the
> word "natural" applied to the degree need not connote rigidity in the
> model-theoretic sense.
I don't have a reference, but I strongly suspect that the
degrees of the Friedberg-Muchnik sets are very sensitive to the choice of
the G"odel numbering (equivalently, universal TM) of computable partial
functions. There are so many different, "natural" models of computation
(besides TMs) that one really cannot favor one over all others. The
choice of computational model is a source of much additional Kolmogorov
complexity in the F-M construction.
BTW, the solution of Post's problem presented first in Soare's book is the
construction of a noncomputable low c.e. set, and is simpler than the F-M construction.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-October/003413.html","timestamp":"2014-04-19T07:04:22Z","content_type":null,"content_length":"3646","record_id":"<urn:uuid:0f1f730d-ec9b-4211-92d2-f8973387ac1e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00044-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry Homework Help..... - Fandom Forums
Originally Posted by beautiful-green-beast
I SERIOUSLY need some help, I'll post the questions and if anyone can help me, it would greatly be appreciated....
1.) Use the formula A = 1/2 bh for the area of a triangle to develop formulas for the area of (a) a parallelogram, and (b) a trapezoid.
2.) Find the area and the perimeter of each polygon.
a.) square with radius 1
b.) regular hexagon with apothem 1
3.) Imagine a regular polygon with many sides: 20, 50, or even 100.
a.) What does such a figure approximate?
b.) If the polygon were inscribed in a circle, explain the relationship between the apothem (a) and the radius (r), and between the perimeter (p) and the circumference (C)
c.) Discuss the connection between the area of the polygon, A = 1/2 ap, and the area of the circle, A = pi r squared
1) a. a parallelogram is basically two triangles put together, so it's just b x h (a short derivation is written out at the end of this post)
b. a trapezoid is kind of like a triangle, but the top has a side too, so it's 1/2(b1 + b2)h (b1 and b2 are the bottom and top bases)
2) a. if the square has a radius of one (i hope u know what it looks like drawn, cuz then this wont make sense) u can draw a line from the center of the square to one of the corners on the same side
where the radius hits, forming a right triangle, then using the 45-45-90 triangle rule, solve for the length of the side of the triangle that lies along the square's side, then double it cuz the length u
found is only half of the square's side, then use that length to find the area and perimeter
b. same principle as in 'a', but the triangle will be different, so a diff rule will need to be used to find the 1/2 length of one of the sides of the hexagon (which would be the side of one of the triangles)
3) meh, im to tired from typing to explain that one, maybe later | {"url":"http://www.fandom.com/forums/showthread.php?t=12519&goto=nextoldest","timestamp":"2014-04-17T02:01:54Z","content_type":null,"content_length":"113393","record_id":"<urn:uuid:2d323b75-2b43-490b-a23c-55277b99fd5b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
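For part 1 of the thread above, the derivations hinted at in the first reply can be written out as short equations (my own sketch, not part of the original replies). A diagonal cuts a parallelogram into two congruent triangles of base b and height h, and cuts a trapezoid into two triangles that share the height h but have bases b1 and b2:
\[
A_{\text{parallelogram}} = 2\cdot\tfrac{1}{2}bh = bh,
\qquad
A_{\text{trapezoid}} = \tfrac{1}{2}b_1 h + \tfrac{1}{2}b_2 h = \tfrac{1}{2}(b_1+b_2)h.
\]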
What's Happening in the Mathematical Sciences, Volume 5
The American Mathematical Society suspected it had found a winner back in the early 1990s. It was willing to title a collection of ten mathematics articles by Barry Cipra not just What's Happening in
the Mathematical Sciences, but rather What's Happening in the Mathematical Sciences, Volume 1.
A winner indeed it was, and we are now at Volume 5. Cipra must have begun on some sort of probation, since his name did not appear on the cover of Volumes 1 and 2. But since Volume 3, he's been
acknowledged on the cover as the force behind this very successful series. Only once did the editors insert an article by another author—a certain Henri Poincaré. One can say without overstatement
that the standards in these volumes are very high indeed.
What makes these volumes so well-received? Here are five reasons. Budding expositors of mathematics, take note!
First, the articles are all written in a very lively style. Cipra's personal style involves plenty of word play. In Volume 5, two of the titles are "Nothing to Sphere but Sphere Itself" and "Ising on
the Cake." Another title is "A Celestial Pas de trois," and the metaphor of periodic solutions to the three body problem as dances is effectively pursued well into the article. Throughout all the
articles, the witty and sophisticated writing adds an extra level of interest. All articles come with photos, figures, and sidebars. Readers can enter at least somewhat into each article
effortlessly, as if they were reading a non-technical magazine.
Second, the articles give a balanced treatment of what is indeed happening in the mathematical sciences. Cipra functions as an investigative reporter and he faithfully covers his beat. As in the
other volumes, Cipra treats topics across the pure-applied spectrum. In Volume 5, the applied end is represented by articles on protein folding, traffic jams, and the shape of the universe.
Cross-disciplinary material also attracts Cipra's attention. In Volume 5, one of the articles is about a novel interpretation of a 4000-year-old Babylonian clay tablet and its relation to the
"Pythagorean" theorem.
Third, the primary goal of reaching readers besides professional mathematicians is kept in sight throughout. Topics which fit into the story but are too technical for the readership are appropriately
finessed. The article "Think and Grow Rich" on the Clay Mathematics Institute's seven prize problems devotes some five paragraphs to each problem. But what to do about the Hodge conjecture, so
removed as it is from the general reader's experience? Having just introduced the concepts of manifold and higher dimensions in the discussion of the Poincaré conjecture, Cipra writes "The Hodge
conjecture concerns the analysis of high-dimensional manifolds defined by systems of algebraic equations. It says, very roughly, that everything you always wanted to know about algebraically defined
manifolds (but were afraid to ask) is to be found in the theory of calculus." These two sentences are a good start towards capturing the general nature of the Hodge conjecture. Cipra then goes into
more detail, making these sentences clearer in a way appropriate to his readership.
Fourth, the articles are short and sharply focused. The whole book is less than a hundred pages long! The first article traces the grand epic which starts with Taniyama's conjecture in 1955 that
every elliptic curve is modular, goes through the proof of Fermat's last theorem, and continues to this day in the framework of the Langlands program. How to present all this mathematics and history
to a wide audience in ten large-margin photo-packed pages? Cipra keeps to a narrow path: nothing about Taniyama's tragic death, nothing about the controversial assignment of credit for the now-proved
conjecture. As a reward, Cipra gets to conclude by communicating in an understandable way some stunning recent work on the Langlands program, some of the deepest current happenings in the
mathematical sciences.
Fifth, there is mathematical meat in every article. The exposition is informal throughout, but professional mathematician readers will sometimes suddenly even get the feeling, "you know, I think I
might be able to piece together the exact statement of that theorem." For example, readers are given a very good feel for a new theorem in which the inverse-square law for interaction in "small world
networks" is strikingly distinguished from all other power laws.
As an indication that the above praise is genuine, rather than derived from some sort of Minnesota solidarity, let me offer a negative comment as well. I myself would be happier if each of Cipra's
articles came with a short bibliography. It might detract somewhat from the "light" feel of the series, but it would allow readers who have been attracted to a certain topic to more easily pursue
their newly kindled interest. A volume of What's Happening is similar in some ways to an issue of Scientific American, and articles in Scientific American have bibliographies.
Each of the five volumes of What's Happening has exactly ten articles. Together the fifty articles are remarkable not only for their excellence but also for the consistency of style maintained over a
ten-year period. The mathematical community should thank Barry Cipra and congratulate him for a job very well done. I hope to see another five volumes so that our present mathematical generation can
be viewed by future historians as enjoyably accessible via the Cipra Decameron!
David Roberts is an assistant professor of mathematics at the University of Minnesota, Morris. | {"url":"http://www.maa.org/publications/maa-reviews/whats-happening-in-the-mathematical-sciences-volume-5","timestamp":"2014-04-17T19:53:26Z","content_type":null,"content_length":"100538","record_id":"<urn:uuid:cd01211b-6874-400f-b999-a1969f03e0ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
A quadratic regression based on old sales data reveals the following demand equation for the T-shirts: q = −2p^2 + 24p (9 ≤ p ≤ 15). Here, p is the price the club charges per T-shirt, and q is the
number it can sell each day at the flea market. (a) Obtain a formula for the price elasticity of demand for E = mc2 T-shirts. (b) Compute the elasticity of demand if the price is set at $15 per
shirt. (c) How much should the Physics Club charge for the T-shirts in order to obtain the maximum daily revenue? (d)What will the revenue be?
Very confused and frustrated with this elasticity stuff . . . uuughhh
I think this is college stuff because i'm in the tenth grade and i don't have a clue what that means lol sorry I wish I could help!
yeah it is college stuff =/ and it's kicking my butt . . .
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4e9e2c960b8b81e6880567de","timestamp":"2014-04-18T23:44:07Z","content_type":null,"content_length":"32870","record_id":"<urn:uuid:fa583585-507b-4088-96c0-088e2eedb456>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newton-Raphson's method for root finding (C)
From LiteratePrograms
Other implementations: ALGOL 68 | C
The Newton-Raphson method finds approximations of zeroes of differentiable functions. We present the use of this method for efficient calculation of square roots (in fact, it can be easily
generalized to any roots) of positive numbers.
We can use Newton's method to find the zero of the function f(x) = x^2 − N. The x where f(x) = 0 is the square root of N. Newton's method starts with a guess x[0] and iteratively refines this guess.
We can control the quality of the approximation by the number of iterations made; the fewer we do the faster we get the result but the less accurate it is. The outline of the algorithm thus looks
<<Newton's method>>=
<<define local variables>>
<<make an initial guess for root of n>>
while (!(<<guess sufficient>>))
{
    <<refine guess>>
}
First we want to look at how the refinement works. Given x[i], the next x[i + 1] is chosen as the zero in the tangent at f(x[i]). The following figure illustrates this for the first iteration step in
calculating the square root of 2 with an initial guess of x[0] = 1.5. The red line is the tangent at f(1.5). Where it crosses the x axis is our refined approximation x[1]. With each iteration, we
make a step towards the true zero of f(x) = x^2 − 2, marked as r.
Given x[i], we calculate x[i + 1] by $x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$. For f(x) = x^2 − N we have f'(x) = 2x and thus assuming xn was the result of our last iteration we calculate our new
guess xn as
<<refine guess>>=
x = xn;
xn = x - (x * x - n) / (2 * x);
This uses variables x and xn which we need to define.
<<define local variables>>=
double x = 0.0;
double xn = 0.0;
How long do we iterate? Let's say 100 steps are sufficient for our purposes. We could iterate shorter, getting a less accurate result. We could iterate longer, getting a more accurate one should we
really need it.
<<guess sufficient>>=
iters++ >= 100
iters is another variable we must define.
<<define local variables>>=
int iters = 0;
However, we don't have to do all these iterations. If at any time x[n] = x[n + 1] then it is obvious that we need not iterate any longer. Our guess is also sufficient then. In practice, you'll notice
that the algorithm will often terminate on this condition long before the 100 iterations are reached.
<<guess sufficient>>=
|| x == xn
Now all that is left to do is finding a suitable initial guess. We choose a very simple method and just calculate the values of f(x) for all integer numbers $0 \leq x \leq n$. For n > 0 these will
initially be negative. So we know when we get to the first f(x) > 0 then the square root must be between x and x − 1. If f(x) = 0, we have found the square root and can simply return it.
<<make an initial guess for root of n>>=
int i;
for (i = 0; i <= (int)n; ++i)
{
    double val = i*i-n;
    if (val == 0.0)
        return i;
    if (val > 0.0)
    {
        xn = (i+(i-1))/2.0;
        break;
    }
}
For taking square roots of very large or very small fractional values of n, we could instead use an approach where we successively double or halve the guess until f changes sign.
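As a rough sketch of that alternative (mine, not part of the original article), one could start from 1.0 and repeatedly double or halve the trial value until x*x - n changes sign, which brackets the root even for very large or very small positive n:

/* Hypothetical alternative to the linear scan above (assumes n > 0):
   bracket sqrt(n) by doubling or halving a trial value until
   x*x - n changes sign, then use that value as the initial guess. */
double x0 = 1.0;
if (n > 1.0)
{
    while (x0 * x0 < n)   /* grow until we overshoot the root */
        x0 *= 2.0;
}
else
{
    while (x0 * x0 > n)   /* shrink until we undershoot the root */
        x0 *= 0.5;
}
xn = x0;

The resulting guess is within a factor of 2 of the true square root, so the Newton iteration that follows converges in only a few steps.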
We're done and can place our algorithm into a function definition and wrap it up with some small test code.
#include <stdio.h>
double newton_sqrt(double n)
{
    <<Newton's method>>
    return xn;
}

int main(void)
{
    printf("Square root of 5: %f\n", newton_sqrt(5));
    printf("Square root of 25: %f\n", newton_sqrt(25));
    printf("Square root of 20: %f\n", newton_sqrt(20));
    return 0;
}
As mentioned, the algorithm can be generalized to the nth root of N by using f(x) = x^n − N and f'(x) = nx^n − 1.
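A minimal sketch of that generalization (not part of the original article) might look as follows; it applies the same update rule x <- x - f(x)/f'(x) with f(x) = x^k - N, and assumes n > 0 and k >= 1:

#include <math.h>

/* Newton iteration for the k-th root of n (sketch; assumes n > 0, k >= 1). */
double newton_kth_root(double n, int k)
{
    double x = 0.0;
    double xn = (n > 1.0) ? n : 1.0;   /* crude starting guess */
    int iters = 0;
    /* like the square-root version, cap the iterations and also stop
       once the guess no longer changes */
    while (iters++ < 100 && x != xn)
    {
        x = xn;
        xn = x - (pow(x, k) - n) / (k * pow(x, k - 1));
    }
    return xn;
}

With k = 2 this reduces to the square-root iteration developed above.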
hijacker hijacker | {"url":"http://en.literateprograms.org/Newton-Raphson's_method_for_root_finding_(C)","timestamp":"2014-04-21T00:10:46Z","content_type":null,"content_length":"30269","record_id":"<urn:uuid:5193c132-ad90-45cf-96dd-556b1f17d223>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Express the concentration of a .0880M aqueous solution of fluoride in mass percentage. assume the denisty is 1.00g/ml.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50c3f074e4b066f22e10aca1","timestamp":"2014-04-17T19:16:20Z","content_type":null,"content_length":"25287","record_id":"<urn:uuid:0457c5e4-b9cb-4385-a26c-465d854d9c27>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jean-Pierre Serre
Jean-Pierre Serre
Jean-Pierre Serre is a French mathematician, Fields medalist, and one of the most influential mathematicians of the 20th century, with main contributions to algebraic topology, algebraic and analytic geometry, algebra, and number theory. He introduced a number of innovative computations and concepts into the apparatus of algebraic topology, including several useful spectral and exact sequences, the notion of Serre fibration, and localization with respect to a Serre class. In algebraic and analytic geometry he introduced the technique of coherent sheaves, along with basic theorems on categories of (quasi)coherent sheaves on affine and projective varieties (Serre's theorem on Proj, the affine Serre theorem, Serre duality, …), a deep correspondence between complex algebraic geometry and analytic geometry (the GAGA principle), and a number of deep results in arithmetic geometry, Galois theory, the cohomological study of algebraic varieties, and class field theory.
Created on April 5, 2010 22:30:59 by
Zoran Škoda | {"url":"http://www.ncatlab.org/nlab/show/Jean-Pierre+Serre","timestamp":"2014-04-21T07:32:26Z","content_type":null,"content_length":"13095","record_id":"<urn:uuid:9d7f5d5c-6422-46ee-8dc8-9bb3619ace8b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Countability proof.
November 29th 2010, 08:01 AM
Countability proof.
I need help on proving
(1) prove that if B ⊆ A and A is countable, then B is countable
(2) prove that if B ⊆ A, A is infinite, and B is finite, then A\B is infinite
I don't even know where to start... THx
November 29th 2010, 08:07 AM
I have a theorem which says
Suppose A is a set. The following statements are equivalent:
1. A is countable
2. Either A = empty or there is a function f:z+ -> A that is onto.
3. There is a function f:A -> z+ that is one-to-one
I am supposed to use those however I am stuck....
November 29th 2010, 08:22 AM
If $A$ is denumerable then $A=\{a_1,a_2,\ldots\}$. If $B=\emptyset$ then $B$ is finite. If $B\neq \emptyset$, let $n_1$ be the least positive integer such that $a_{n_1}\in B$, let $n_2$ be the
least positive integer such that $n_2>n_1$ and $a_{n_2}\in B$ ...
Could you continue?
Fernando Revilla
November 29th 2010, 08:33 AM
You can use the equivalent condition #3 to show (1). Namely, suppose $B\subseteq A$ and there exists a one-to-one function $f:A\to\mathbb{Z}^+$. Consider the restriction of f on B, i.e., the
function whose domain is B and that acts just like f on any $x\in B$. Is this restriction still one-to-one?
For (2), note that $A = B\cup(A\setminus B)$. What happens when $A\setminus B$ is finite?
November 29th 2010, 08:40 AM
I get the (2) part , however could you explain more on (1) part please???
November 29th 2010, 09:14 AM
could you explain more on (1) part please???
Whom are you asking? Fernando suggests using condition #2 to prove (1), i.e., that B is countable. Going with #2, let's assume that there is an onto function $f:\mathbb{Z}^+\to A$ and that B is
nonempty, i.e., there exists some fixed $x_0\in B$. I think it is easier to construct another function $g:\mathbb{Z}^+\to B$ as follows:
$g(n)=\begin{cases} x_0 & f(n)\notin B\\ f(n) & \text{otherwise} \end{cases}$
You have to check that indeed $g:\mathbb{Z}^+\to B$ and that g is onto.
If you are going to use #3 to show (1), then assume that there exists a one-to-one $f:A\to\mathbb{Z}^+$. Consider $g(x)$ such that $g(x)=f(x)$ for $x\in B$ and $g(x)$ is undefined otherwise. You need to
show that $g:B\to\mathbb{Z}^+$ and that g is one-to-one. | {"url":"http://mathhelpforum.com/discrete-math/164731-countability-proof-print.html","timestamp":"2014-04-18T06:50:02Z","content_type":null,"content_length":"11557","record_id":"<urn:uuid:0995917c-afc5-4839-9e3f-5995767ae058>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the limit. Use l'Hospital's Rule if appropriate. If there is a more elementary method, consider using it. lim sin x ln 4x x→0+
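No solution was posted in the thread; the following is my own sketch of one standard way to evaluate the limit, rewriting the 0 · (−∞) product so that an elementary limit (or l'Hospital's rule) applies:
\[
\lim_{x\to 0^+}\sin x\,\ln 4x
= \lim_{x\to 0^+}\frac{\sin x}{x}\cdot\bigl(x\ln 4x\bigr)
= 1\cdot\lim_{x\to 0^+}\bigl(x\ln 4 + x\ln x\bigr)
= 0,
\]
since $x\ln x\to 0$ as $x\to 0^+$ (which itself follows from l'Hospital's rule applied to $\ln x/(1/x)$).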
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50a08467e4b05517d5367007","timestamp":"2014-04-16T16:58:23Z","content_type":null,"content_length":"137932","record_id":"<urn:uuid:afa51efa-d271-423f-9909-19820dc4f8e2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coupling finite element and impedance element to model the
ASA 128th Meeting - Austin, Texas - 1994 Nov 28 .. Dec 02
4pEA13. Coupling finite element and impedance element to model the radiation of piezoelectric transducers in boreholes.
Didace Ekeom
Bertrand Dubus
Inst. d'Electron. et de Microelectron. du Nord, UMR CNRS 9929, Dept. ISEN, 41 boulevard Vauban, 59046 Lille Cedex, France
In the context of petroleum acoustics, it is of great interest to model the radiation of the piezoelectric transducer in a borehole surrounded by a homogeneous isotropic elastic formation of
infinite extent without restrictive assumptions on the geometry, radiation pattern, or types of waves. The finite element method is well suited to solve such problems if the truncation of the
infinite formation is correctly treated. This truncation generates ingoing waves which normally do not exist in an infinite domain. In this paper, classical finite elements (ATILA code) are used to
model in steady state the transducer, the fluid-filled borehole, and part of the formation inside a spherical boundary. On this exterior boundary, impedance elements are used to take into account the
infinite character of the formation. These elements are obtained by discretizing the mechanical impedance of outgoing spherical P and S waves. The method is validated by studying two configurations
having analytical solutions: the pulsating sphere and the oscillating point. Results include displacement fields and radiation patterns for P or/and S waves. Finally, the radiation of a cylindrical
piezoelectric transducer in an oil-filled borehole [S. Kostek et al., J. Acoust. Soc. Am. 95, 109 (1994)] is analyzed. P and S components of the displacement field, electrical admittance of the
transducer, directivity patterns, and distribution of radiated energy are displayed. | {"url":"http://www.auditory.org/asamtgs/asa94aus/4pEA/4pEA13.html","timestamp":"2014-04-18T21:11:47Z","content_type":null,"content_length":"2232","record_id":"<urn:uuid:5363718a-5bba-4fcf-bdc1-d71c906ab06a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm having trouble with Fractions & Decimals on a Number Line
So my daughter is working with Fractions & Decimals on a Number Line and I'm totally confused and need help in explaining it to her. I will list the following problems below. I would greatly
appreciate the help.
Name the mixed number that is shown with a dot on the number line. Express your answer in simplest form.
I----I----I----I----I----I----I----I----I----I----I---*I----I----I----I >
0 +1 +2 +3 +4 +5 +6 +7
I'm using the star to represent the dot in the book.
Comment by Dacia Myhre on May 29, 2012 at 4:38pm
Hi there,
Looking at your example, there are 10 small lines in between each number. Those lines each stand for 1/10 of a whole. Think about money...10 dimes make
a dollar (or a whole). So looking at the number line, think of the whole numbers as dollars and the little ones as dimes (use real money with your child if you have to). It looks like your dot
would translate to $6.90, and that translates to 6 9/10 (9 dimes out of a dollar). Hope that helps!
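In symbols, the dime-and-dollar conversion described above is just $6.90 = 6 + \tfrac{9}{10} = 6\tfrac{9}{10}$, which is the mixed number to mark on the number line.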
Crazy Math Mom | {"url":"http://www.mathconcentration.com/profiles/blog/show?id=6296855%3ABlogPost%3A20812&commentId=6296855%3AComment%3A29692&xg_source=activity","timestamp":"2014-04-19T22:06:34Z","content_type":null,"content_length":"75219","record_id":"<urn:uuid:abe53f62-a061-47ec-858b-dc2fbad1b1b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Roch on phylogenetic trees, learning ultrametrics from noisy measurements, and the shrimp-dog
Sebastien Roch gave a beautiful and inspiring talk here yesterday about the problem of reconstructing an evolutionary tree given genetic data about present-day species. It was generally thought that
keeping track of pairwise comparisons between species was not going to be sufficient to determine the tree efficiently; Roch has proven that it’s just the opposite. His talk gave me a lot to think
about. I’m going to try to record a probably corrupted, certainly filtered through my own viewpoint account of Roch’s idea.
So let’s say we have n points P_1, … P_n, which we believe are secretly the leaves of a tree. In fact, let’s say that the edges of the tree are assigned lengths. In other words, there is a secret
ultrametric on the finite set P_1, … P_n, which we wish to learn. In the phylogenetic case, the points are species, and the ultrametric distance d(P_i, P_j) between P_i and P_j measures how far back
in the evolutionary tree we need to go to find a common ancestor between species i and species j.
One way to estimate d(P_i, P_j) is to study the correlation between various markers on the genomes of the two species. This correlation, in Roch's model, is going to be on the order of exp(-d(P_i, P_j)),
which is to say that it is very close to 0 when P_i and P_j are far apart, and close to 1 when the two species have a recent common ancestor. What that means is that short distances are way easier
to measure than long distances — you have no chance of telling the difference between a correlation of exp(-10) and exp(-11) unless you have a huge number of measurements at hand. Another way to put
it: the error bar around your measurement of d(P_i,P_j) is much greater when your estimate is small than when your estimate is high; in particular, at great enough distance you’ll have no real
confidence in any upper bound for the distance.
So the problem of estimating the metric accurately seems impossible except in small neighborhoods. But it isn’t. Because metrics are not just arbitrary symmetric n x n matrices. And ultrametrics
are not just arbitrary metrics. They satisfy the ultrametric inequality
d(x,y) <= max(d(x,z),d(y,z)).
And this helps a lot. For instance, suppose the number of measurements I have is sufficient to estimate with high confidence whether or not a distance is less than 1, but totally helpless with
distances on order 5. So if my measurements give me an estimate d(P_1, P_2) = 5, I have no real idea whether that distance is actually 5, or maybe 4, or maybe 100 — I can say, though, that it’s that
it’s probably not 1.
So am I stuck? I am not stuck! Because the distances are not independent of each other; they are yoked together under the unforgiving harness of the ultrametric inequality. Let’s say, for
instance, that I find 10 other points Q_1, …. Q_10 which I can confidently say are within 1 of P_1, and 10 other points R_1, .. , R_10 which are within 1 of P_2. Then the ultrametric inequality
tells us that
d(Q_i, R_j) = d(P_1, P_2)
for any one of the 100 ordered pairs (i,j)! So I have 100 times as many measurements as I thought I did — and this might be enough to confidently estimate d(P_1,P_2).
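In code, the boosting trick looks roughly like this (just a toy sketch; noisy_d stands for a hypothetical table of noisy pairwise distance estimates):

    import itertools
    import statistics

    def boosted_estimate(noisy_d, cluster_1, cluster_2):
        """Pool the noisy estimates between two tight clusters.

        If every point in cluster_1 is within 1 of P_1 and every point in
        cluster_2 is within 1 of P_2, the ultrametric inequality forces
        d(Q_i, R_j) = d(P_1, P_2) for every pair, so all 10 x 10 = 100 noisy
        readings are really measurements of the same underlying distance.
        """
        samples = [noisy_d[q][r] for q, r in itertools.product(cluster_1, cluster_2)]
        return statistics.mean(samples)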
In biological terms: if I look at a bunch of genetic markers in a shrimp and a dog, it may be hard to estimate how far back in time one has to go to find their common ancestor. But the common
ancestor of a shrimp and a dog is presumably also the common ancestor of a lobster and a wolf, or a clam and a jackal! So even if we’re only measuring a few markers per species, we can still end up
with a reasonable estimate for the age of the proto-shrimp-dog.
What do you need if you want this to work? You need a reasonably large population of points which are close together. In other words, you want small neighborhoods to have a lot of points in them.
And what Roch finds is that there’s a threshold effect; if the mutation rate is too fast relative to the amount of measurement per species you do, then you don’t hit “critical mass” and you can’t
bootstrap your way up to a full high-confidence reconstruction of the metric.
This leads one to a host of interesting questions — interesting to me, that is, albeit not necessarily interesting for biology. What if you want to estimate a metric from pairwise distances but you
don’t know it’s an ultrametric? Maybe instead you have some kind of hyperbolicity constraint; or maybe you have a prior on possible metrics which weights “closer to ultrametric” distances more
highly. For that matter, is there a principled way to test the hypothesis that a measured distance is in fact an ultrametric in the first place? All of this is somehow related to this previous post
about metric embeddings and the work of Eriksson, Darasathy, Singh, and Nowak.
4 thoughts on “Roch on phylogenetic trees, learning ultrametrics from noisy measurements, and the shrimp-dog”
1. Very cool Jordan!
In response to your last paragraph, many trees used in evolutionary biology are not ultrametric: their branch lengths are measured as numbers of substitutions and don’t necessarily line up nicely
at the present time. I would bet that many of Sebastien’s results hold for these trees, though.
2. Dear Jordan,
I am not sure this can be of any use to you (especially because I understand little of this), but I was surprised recently to see ultrametrics used in cosmology:
It seems that a useful analog of the p-adic integers, or perhaps (integral points of) all p-adic affine spaces, are de Sitter spaces (spheres in Minkowski space). The time coordinate in dS
corresponds to the distance to 1. I guess other submanifolds of Minkowski space could be used.
In any case it is really interesting to see ultrametric distances arise naturally.
3. I saw Susskind talk about this at Stanford. I don’t think it’s truly about p-adics: it’s about ultrametrics and trees, that’s for sure, but the p-adic integers are only one kind of ultrametric
space and it didn’t seem to me (in the talk) that anything he was doing singled those spaces out from the others.
4. [...] already put up a UW homepage!); and probabilist Sebastian Roch, whose job talk I enthusiastically blogged about a few weeks back. Share this:EmailFacebookTwitterLike this:LikeBe the first
to like this post. [...] | {"url":"http://quomodocumque.wordpress.com/2012/03/15/roch-on-phylogenetic-trees-learning-ultrametrics-from-noisy-measurements-and-the-shrimp-dog/","timestamp":"2014-04-20T23:45:54Z","content_type":null,"content_length":"68290","record_id":"<urn:uuid:6b18f26e-1beb-433a-b1c5-c9ad223c7a7e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00474-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milwaukee Makerspace
Giant, Ominous Wind Chimes
A while back I bought five 4.5 foot long aluminum tubes because the price was so low that I couldn’t resist. They are 3.25 inches in (outer) diameter, and have a wall thickness of 0.1 inches.
Recently, I decided to make them into the longest and loudest wind chimes I’ve ever heard. The longest tube rings for over a minute after being struck by the clapper. After thinking for a while
about which notes I should tune the tubes to, I found that fairly large chimes are commercially available, but they are tuned to happy, consonant intervals. I consulted a few musically savvy friends
(Thanks Brian and Andrew!) to gather some more ideas for interesting intervals on my chosen theme of “Evil & Ominous.” I ended up with quite a few ideas, and with Andrew’s help, I sampled the sound
of the longest tube being struck, and recorded mp3’s of each set of notes to simulate the sound of the chimes ringing in the wind. I ended up with something delightful: D4, G#4, A4, C#5 and D5
(which are 294 Hz, 415 Hz, 440 Hz, 554 Hz, and 587 Hz). That's right, there are two consonant intervals (an octave and a perfect fifth), but look at all those minor seconds and tritones: Delightfully ominous.
Then the science started: How to determine the tube lengths to achieve the desired notes? How to suspend the chimes so they sound the best, and are the loudest? Where should the clapper strike the
chimes in order to produce the loudest sound or the best timbre?
Wind chimes radiate sound because they vibrate transversely like a guitar string, not because they support an internal acoustic standing wave like an organ pipe. Pages 152 & 162 of Philip Morse’s
book “Vibration and Sound” show that the natural frequencies, v, of hanging tubes are given by the following expression:
Pretty simple, right? One only needs to know rho and Q, the density and Young’s modulus of aluminum, l, the length of the tube, a & b, the inner and outer radius of the tube, and the beta of each
tube mode of interest. Don’t worry though, there is a simpler way. If all of the tubes have identical diameter and are made of the same material (6061-T6 Aluminum!), the equation indicates that
the natural frequency of a hanging tube scales very simply as the inverse of the tube length squared.
Using the above relationship (frequency ~ 1/(length*length)) to compute the ratios of tube lengths based on the ratio of frequencies produces:
Length of D4 tube = 1.000 * Length of D4 tube
Length of G#4 tube = 0.841 * Length of D4 tube
Length of A4 tube = 0.817 * Length of D4 tube
Length of C#5 tube = 0.728 * Length of D4 tube
Length of D5 tube = 0.707 * Length of D4 tube
The longest tube is 133.1 cm (52.40 inches) long, so all the tubes were scaled relative to it. Note that the frequencies are slightly different than the notes I was aiming for, but absolute pitch is
only a requirement when playing with other instruments.
~D4 = 293.66 Hz = 133.1 cm = 280.3 Hz
~G#4 = 415.3 Hz = 111.9 cm = 396.4 Hz
~A4 = 440.0 Hz = 108.7 cm = 420.0 Hz
~C#5 = 554.37 Hz = 96.9 cm = 529.1 Hz
~D5 = 587.33 Hz = 94.1 cm = 560.6 Hz
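For anyone who wants to reproduce the numbers above, here is a minimal script that uses nothing but the length-squared scaling (the note frequencies are the equal-tempered targets listed earlier):

    import math

    f_target = {"D4": 293.66, "G#4": 415.30, "A4": 440.00, "C#5": 554.37, "D5": 587.33}
    L_ref, f_ref = 133.1, f_target["D4"]   # cm; the longest tube sets the scale

    # frequency ~ 1/length^2, so length ~ 1/sqrt(frequency)
    for note, f in f_target.items():
        L = L_ref * math.sqrt(f_ref / f)
        print(f"{note}: {L:5.1f} cm")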
How accurately do these tubes need to be cut? For example, how important is it to cut the tube length to within 1 mm? This can be calculated simply, using the above equation. A length of 108.7cm
gives 420.0 Hz, whereas a length of 108.8cm gives 419.23 Hz. This spread is 0.67 Hz, which is a fairly small number, but these small intervals are often expressed in cents, or hundredths of a
half-step. This 1 mm length error gives a frequency shift of 31cents. Does this matter? Well, the difference in pitch of a major third in just and standard tuning is 14 cents, which is definitely
noticeable. It is preferable to be somewhat closer than this 1mm, or 2/3 Hz to the target interval.
The tubes were rough-cut to 2 mm longer than the desired length on a bandsaw to allow the ends to be squared up in case the cut was slightly crooked. The resonance frequency was then measured by
playing the desired frequency from a speaker driven by a sine wave generator with a digital display. I then struck the tube and listened for (and counted) the beats. If two beats per second are
heard, the frequency of the tube is 2 Hz different than the frequency played through the speaker. With this method using minimal equipment, I quickly experimentally measured the resonance frequency
to less than 0.5 Hz (one beat every two seconds), which is ~10 cents. I then fine tuned the tube length using a belt sander, and measured the resonance frequency several times while achieving the
correct length. In reality though, if I missed my target lengths I’d only be adding a little more beating and dissonance, which might have only added to the overall ominous timbre.
How to suspend the tubes? Looking at the mode shapes of the tube for guidance, I suspended the tubes by drilling a hole through the tube at one of its vibrational nodes, and running a plated steel
cable through it. Check out the plot below from Blevins’ New York Times Bestselling book “Formulas for Natural Frequency and Mode Shape.”
This plot shows a snapshot of the tube’s deflection as a function of position along the tube. Imagine that the left side of the tube is at 0, and the right side of the tube is at L. This plot shows
the first three mode shapes of a “straight slender free-free beam,” which my 1.33 meter long, 83mm diameter tube qualifies as. Just like a guitar string, this tube has multiple overtones (higher
modes, or harmonics) that can be excited to varying degree depending where the clapper strikes the tube. The guitar analog of this is the timbre difference one hears when picking (striking) the
string closer to or further from the end of the string (the bridge). This plot also shows where the tube should be suspended – from the locations where the tube has no motion in its first,
fundamental mode. Those two places, a distance of 0.221L from the tube’s ends, are circled in red. When striking the tube suspended from either of these locations, the tube rings the loudest and
for the longest time duration (as compared with any other suspension location). Similarly, when striking the tube in the location noted by the red arrow (the midpoint of the tube), the tube rings
the loudest. I won’t get into more math and fancy terms like “modal participation factor,” but it is true that suspending the tube from the circled red locations also results in the lack of
excitation of the third mode (which has a motional maximum at this location). Similarly, striking the tube at its midpoint results in the lack of excitation of the second mode, due to its motional
minimum at this location.
Thanks to David for the Ominous Photo. An Ominous Chime video will soon follow. | {"url":"http://milwaukeemakerspace.org/2011/09/giant-ominous-wind-chimes/","timestamp":"2014-04-20T00:51:14Z","content_type":null,"content_length":"52663","record_id":"<urn:uuid:0f01952b-3045-496e-a742-dd174d98b818>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Multiverse According to Ben
This blog entry arises from an email I sent to the SL4 email list, in response to a suggestion by Marc Geddes that perhaps the universe can best be considered as a logically inconsistent formal
I find that Marc's suggestion ties in interestingly with a prior subject I've dealt with in this blog: Subjective Reality.
I think it is probably not the best approach to think about the universe as a formal system. I find it more useful to consider formal systems as approximate and partial models of the universe.
So, in my view, the universe is neither consistent nor inconsistent, any more than a brick is either consistent or inconsistent. There may be mutually consistent or mutually inconsistent models of
the universe, or of a brick.
The question Marc has raised, in this perspective, is whether the "best" (in some useful sense) way of understanding the universe involves constructing multiple mutually logically inconsistent models
of the universe.
An alternative philosophical perspective is that, though the universe is not in itself a formal system, the "best" way of understanding it involves constructing more and more comprehensive and
sophisticated consistent formal systems, each one capturing more aspects of the universe than the previous. This is fairly close to being a rephrasing of Charles S. Peirce's philosophy of science.
It seems nice to refer to these two perspectives as Inconsistent versus Consistentist views of the universe. (Being clear however that the inconsistency and consistency refer to models of the
universe rather than the universe itself.)
Potentially the Inconsistentist perspective ties in with a previous thread in this blog regarding the notion of Subjective Reality. It could be that, properly formalized, the two models
A) The universe is fundamentally subjective, and the apparently objective world is constructed out of a mind's experience
B) The universe is fundamentally objective and physical, and the apparently subjective world is constructed out of physical structures and dynamics
could be viewed as two
• individually logically consistent
• mutually logically inconsistent
• separately useful
models of the universe. If so, this would be a concrete argument in favor of the Inconsistentist philosophical perspective.
Inconsistentism also seems to tie in with G. Spencer Brown's notion of modeling the universe using "imaginary logic",
in which contradiction is treated as an extra truth value similar in status to true and false. Francisco Varela and Louis Kauffmann extended Brown's approach to include two different imaginary truth
values I and J, basically corresponding to the series
I = True, False, True, False, ...
J = False, True, False, True, ...
which are two "solutions" to the paradox
X = Not(X)
obtained by introducing the notion of time and rewriting the paradox as
X[t+1] = Not (X[t])
In Brownian philosophy, the universe may be viewed in two ways
• timeless and inconsistent
• time-ful and consistent
Tying this in with the subjective/objective distinction, we obtain the interesting idea that time emerges from the feedback between subjective and objective. That is, one may look at a paradox such as
creates(subjective reality, objective reality)
creates(objective reality, subjective reality)
creates(X,Y) --> ~creates(Y,X)
and then a resolution such as
I = subjective, objective, subjective, objective, ...
J = objective, subjective, objective, subjective, ...
embodying the iteration
creates(subjective reality[t], objective reality[t+1])
creates(objective reality[t+1], subjective reality[t+2])
If this describes the universe then it would follow that the subjective/objective distinction only introduces contradiction if one ignores the existence of time.
Arguing in favor of this kind of iteration, however, is a very deep matter that I don't have time to undertake at the moment!
I have said above that it's better to think of formal systems as modeling the universe rather than as being the universe. On the other hand, taking the "patternist philosophy" I've proposed in my
various cognitive science books, we may view the universe as a kind of formal system comprised of a set of propositions about patterns.
A formal system consists of a set of axioms.... OTOH, in my "pattern theory" a process F is a pattern in G if
• F produces G
• F is simpler than G
So I suppose you could interpret each evaluation "F is a pattern in G" as an axiom stating "F produces G and F is simpler than G".
In this sense, any set of patterns may be considered as a formal system.
I would argue that, for any consistent simplicity-evaluation-measure, the universal pattern set is a consistent formal system; but of course inconsistent simplicity-evaluation-measures will lead to
inconsistent formal systems.
Whether it is useful to think about the whole universe as a formal system in this sense, I have no idea... | {"url":"http://multiverseaccordingtoben.blogspot.com/2006_01_25_archive.html","timestamp":"2014-04-17T21:29:34Z","content_type":null,"content_length":"95749","record_id":"<urn:uuid:5bc0e694-6f50-43b2-b0a7-c5e40a59e7ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
- Internat. J. Bifur. Chaos Appl. Sci. Engrg , 1995
"... ..."
"... Abstract. Invasions in oscillatory systems generate in their wake spatiotemporal oscillations, consisting of either periodic wavetrains or irregular oscillations that appear to be spatiotemporal
chaos. We have shown previously that when a finite domain, with zero-flux boundary conditions, has been f ..."
Cited by 2 (1 self)
Abstract. Invasions in oscillatory systems generate in their wake spatiotemporal oscillations, consisting of either periodic wavetrains or irregular oscillations that appear to be spatiotemporal
chaos. We have shown previously that when a finite domain, with zero-flux boundary conditions, has been fully invaded, the spatiotemporal oscillations persist in the irregular case, but die out in a
systematic way for periodic traveling waves. In this paper, we consider the effect of environmental inhomogeneities on this persistence. We use numerical simulations of several predator-prey systems
to study the effect of random spatial variation of the kinetic parameters on the die-out of regular oscillations and the long-time persistence of irregular oscillations. We find no effect on the
latter, but remarkably, a moderate spatial variation in parameters leads to the persistence of regular oscillations, via the formation of target patterns. In order to study this target pattern
production analytically, we turn to λ–ω systems. Numerical simulations confirm analogous behavior in this generic oscillatory system. We then repeat this numerical study using piecewise linear
spatial variation of parameters, rather than random variation, which also gives formation of target patterns under certain circumstances, which we discuss. We study this in detail by deriving an
analytical approximation to the targets formed when the parameter λ0 varies in a simple, piecewise linear manner across the domain, using perturbation theory. We end by discussing the applications of
our results in ecology and chemistry.
"... . We have performed calculations on two dimensional cellular automata models with cluster formation for various catalytic networks, especially a simple auto-catalytic species and the hypercycle
with few species. We discuss the mechanisms of cluster formation. A mechanism for resistance to parasites ..."
Cited by 2 (0 self)
. We have performed calculations on two dimensional cellular automata models with cluster formation for various catalytic networks, especially a simple auto-catalytic species and the hypercycle with
few species. We discuss the mechanisms of cluster formation. A mechanism for resistance to parasites is discovered for clusters with a single auto-catalytic species and other networks giving
essentially homogeneous clusters: As the parasite attacks a cluster from one side, the cluster may grow fast enough in the opposite direction to prevent destruction. Thus, the parasite will be
chasing the main species in the cluster. As correlations and local effects are important for these results, they are expected to be obtained only in cellular automata and not in models based on
partial differential equations. 1 Introduction Reaction-diffusion systems often lead to spatial patterns which appear for a large number of different applications [1, 2, 3, 4]. They have been studied
frequently with many...
, 2000
"... neurocomputation involves discrete signals communicated along fixed transmission lines between discrete computational elements. This concept is shown to be inadequate to account for invariance
in recognition, as well as for the holistic global aspects of perception identified by Gestalt theory. A Ha ..."
Cited by 1 (0 self)
neurocomputation involves discrete signals communicated along fixed transmission lines between discrete computational elements. This concept is shown to be inadequate to account for invariance in
recognition, as well as for the holistic global aspects of perception identified by Gestalt theory. A Harmonic Resonance theory is presented as an alternative paradigm of neurocomputation, that
exhibits both the property of invariance, and the emergent Gestalt properties of perception, not as special mechanisms contrived to achieve those properties, but as natural properties of the
resonance itself.
, 1990
"... excitable ..."
, 2008
"... Subquantum kinetics, a physics methodology that applies general systems theoretic concepts to the field of microphysics has gained the status of being a viable unified field theory. Earlier
publications of this theory had proposed that a subatomic particle should consist of an electrostatic field th ..."
Subquantum kinetics, a physics methodology that applies general systems theoretic concepts to the field of microphysics has gained the status of being a viable unified field theory. Earlier
publications of this theory had proposed that a subatomic particle should consist of an electrostatic field that has the form of a radial Turing wave pattern whose form is maintained through the
ongoing activity of a nonlinear reaction-diffusion medium that fills all space. This subatomic Turing wave prediction now finds confirmation in recent nucleon scattering form factor data which show
that the nucleon core has a Gaussian charge density distribution with a peripheral periodicity whose wavelength approximates the particle's Compton wavelength and which declines in amplitude with
increasing radial distance. The subquantum kinetics explanation for the origin of charge correctly anticipates the observation that the proton's charge density wave pattern is positively biased while
the neutron's is not. The phenomenon of beta decay is interpreted as the onset of a secondary bifurcation leading from the uncharged neutron solution to the charged proton solution. The Turing wave
dissipative structure prediction is able to account in a unitary fashion for nuclear binding, particle diffraction, and electron orbital quantization. The wave packet model is shown to be
fundamentally flawed implying that quantum mechanics does not realistically represent the microphysical world. This new conception points to the possible existence of orbital energy states below the
Balmer ground state whose transitions may be tapped as a new source of energy. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2352022","timestamp":"2014-04-20T10:21:11Z","content_type":null,"content_length":"25600","record_id":"<urn:uuid:4ff094e6-971e-4e46-bf7e-42af17021580>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimum communication spanning trees
Results 1 - 10 of 57
- In Proceedings of the 35th Annual ACM Symposium on Theory of Computing , 2003
"... In this paper, we show that any n point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon
the result of Bartal who gave a bound of O(log n log log n). Moreover, our result is existentially tight; t ..."
Cited by 269 (7 self)
In this paper, we show that any n point metric space can be embedded into a distribution over dominating tree metrics such that the expected stretch of any edge is O(log n). This improves upon the
result of Bartal who gave a bound of O(log n log log n). Moreover, our result is existentially tight; there exist metric spaces where any tree embedding must have Ω(log n) distortion. This
problem lies at the heart of numerous approximation and online algorithms including ones for group Steiner tree, metric labeling, buy-at-bulk network design and metrical task system. Our result
improves the performance guarantees for all of these problems.
- In Proceedings of the 30th Annual ACM Symposium on Theory of Computing , 1998
"... This paper is concerned with probabilistic approximation of metric spaces. In previous work we introduced the method of ecient approximation of metrics by more simple families of metrics in a
probabilistic fashion. In particular we study probabilistic approximations of arbitrary metric spaces by \hi ..."
Cited by 260 (13 self)
This paper is concerned with probabilistic approximation of metric spaces. In previous work we introduced the method of efficient approximation of metrics by more simple families of metrics in a
probabilistic fashion. In particular we study probabilistic approximations of arbitrary metric spaces by "hierarchically well-separated tree" metric spaces. This has proved as a useful technique for
simplifying the solutions to various problems.
, 2002
"... Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks
with a deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed tha ..."
Cited by 165 (14 self)
Following the long-held belief that the Internet is hierarchical, the network topology generators most widely used by the Internet research community, Transit-Stub and Tiers, create networks with a
deliberately hierarchical structure. However, in 1999 a seminal paper by Faloutsos et al. revealed that the Internet's degree distribution is a power-law. Because the degree distributions produced by
the Transit-Stub and Tiers generators are not power-laws, the research community has largely dismissed them as inadequate and proposed new network generators that attempt to generate graphs with
power-law degree distributions.
, 2005
"... ... as a subgraph a spanning tree into which the edges of G can be embedded with average stretch exp (O ( √ log n log log n)), and that there exists an n-vertex graph G such that all its
spanning trees have average stretch Ω(log n). Closing the exponential gap between these upper and lower bounds i ..."
Cited by 66 (10 self)
... as a subgraph a spanning tree into which the edges of G can be embedded with average stretch exp (O ( √ log n log log n)), and that there exists an n-vertex graph G such that all its spanning
trees have average stretch Ω(log n). Closing the exponential gap between these upper and lower bounds is listed as one of the long-standing open questions in the area of low-distortion embeddings of
metrics (Matousek 2002). We significantly reduce this gap by constructing a spanning tree in G of average stretch O((log n log log n)^2). Moreover, we show that this tree can be constructed in time
O(m log^2 n) in general, and in time O(m log n) if the input graph is unweighted. The main ingredient in our construction is a novel graph decomposition technique. Our new algorithm can be immediately
used to improve the running time of the recent solver for diagonally dominant linear systems of Spielman and Teng from m 2^(O(sqrt(log n log log n))) log(1/ε) to m log^(O(1)) n log(1/ε), and to O(n(log n
log log n)^2 log(1/ε)) when the system is planar. Applying a recent reduction of Boman, Hendrickson and Vavasis, this provides an O(n(log n log log n)^2 log(1/ε)) time algorithm for solving the
linear systems that arise when applying the finite element method to solve two-dimensional elliptic partial differential equations. Our result can also be used to improve several earlier approximation
algorithms that use low-stretch spanning trees.
"... We study the problem of finding small trees. Classical network design problems are considered with the additional constraint that only a specified number k of nodes are required to be connected
in the solution. A prototypical example is the kMST problem in which we require a tree of minimum weight s ..."
Cited by 65 (2 self)
We study the problem of finding small trees. Classical network design problems are considered with the additional constraint that only a specified number k of nodes are required to be connected in
the solution. A prototypical example is the kMST problem in which we require a tree of minimum weight spanning at least k nodes in an edge-weighted graph. We show that the kMST problem is NP-hard
even for points in the Euclidean plane. We provide approximation algorithms with performance ratio 2 p k for the general edge-weighted case and O(k 1=4 ) for the case of points in the plane.
Polynomial-time exact solutions are also presented for the class of treewidth-bounded graphs which includes trees, series-parallel graphs, and bounded bandwidth graphs, and for points on the boundary
of a convex region in the Euclidean plane. We also investigate the problem of finding short trees, and more generally, that of finding networks with minimum diameter. A simple technique is used to
- Proceedings of the First IEEE Conference on Evolutionary Computation , 1994
"... We consider the problem of representing trees (undirected, cycle-free graphs) in Genetic Algorithms. This problem arises, among other places, in the solution of network design problems. After
comparing several commonly used representations based on their usefulness in genetic algorithms, we describe ..."
Cited by 46 (1 self)
We consider the problem of representing trees (undirected, cycle-free graphs) in Genetic Algorithms. This problem arises, among other places, in the solution of network design problems. After
comparing several commonly used representations based on their usefulness in genetic algorithms, we describe a new representation and show it to be superior in almost all respects to the others. In
particular, we show that our representation covers the entire space of solutions, produces only viable offspring, and possesses locality, all necessary features for the effective use of a genetic
algorithm. We also show that the representation will reliably produce very good, if not optimal, solutions even when the problem definition is changed. I. Introduction In this paper, we consider the
problem of representing trees in genetic algorithms. A tree is an undirected graph which contains no closed cycles. There are many optimization problems which can be phrased in terms of finding the
optimal tree wit...
, 1998
"... Given an undirected graph with nonnegative costs on the edges, the routing cost of any of its spanning trees is the sum over all pairs of vertices of the cost of the path between the pair in the
tree. Finding a spanning tree of minimum routing cost is NP-hard, even when the costs obey the triangle i ..."
Cited by 43 (6 self)
Given an undirected graph with nonnegative costs on the edges, the routing cost of any of its spanning trees is the sum over all pairs of vertices of the cost of the path between the pair in the
tree. Finding a spanning tree of minimum routing cost is NP-hard, even when the costs obey the triangle inequality. We show that the general case is in fact reducible to the metric case and present a
polynomial-time approximation scheme valid for both versions of the problem. In particular, we show how to build a spanning tree of an n-vertex weighted graph with routing cost within (1 + ffl) from
the minimum in time O(n O( 1 ffl ) ). Besides the obvious connection to network design, trees with small routing cost also find application in the construction of good multiple sequence alignments in
computational biology. The communication cost spanning tree problem is a generalization of the minimum routing cost tree problem where the routing costs of different pairs are weighted by different
, 2000
"... Recently there have been several papers examining aspects of the Internet topology. This paper follows in that tradition and addresses two issues related to Internet topology. First, we use
three properties -- expansion, resilience, and distortion -- to characterize real and generated networks. For ..."
Cited by 40 (3 self)
Recently there have been several papers examining aspects of the Internet topology. This paper follows in that tradition and addresses two issues related to Internet topology. First, we use three
properties -- expansion, resilience, and distortion -- to characterize real and generated networks. For these metrics, we find that existing network topology generators differ qualitatively from most
real networks. Second, we ask what impact topology has on four different multicast design questions. We find that, for many of these questions, a single topology metric appears to influence the
- Comput. Commun. Rev
"... It has long been thought that the Internet, and its constituent networks, are hierarchical in nature. Consequently, the network topology generators most widely used by the Internet research
community, GT-ITM [7] and Tiers [11], create networks with a deliberately hierarchical structure. However, rec ..."
Cited by 32 (5 self)
It has long been thought that the Internet, and its constituent networks, are hierarchical in nature. Consequently, the network topology generators most widely used by the Internet research
community, GT-ITM [7] and Tiers [11], create networks with a deliberately hierarchical structure. However, recent work by Faloutsos et al. [13] revealed that the Internet’s degree distribution — the
distribution of the number of connections routers or Autonomous Systems (ASs) have — is a power-law. The degree distributions produced by the GT-ITM and Tiers generators are not power-laws. To
rectify this problem, several new network generators have recently been proposed that produce more realistic degree distributions; these new generators do not attempt to create a hierarchical
structure but instead focus solely on the degree distribution. There are thus two families of network generators, structural generators that treat hierarchy as fundamental and degree-based generators
that treat the degree distribution as fundamental. In this paper we use several topology metrics to compare the networks produced by these two families of generators to current measurements of the
Internet graph. We find that the degree-based generators produce better models, at least according to our topology metrics, of both the AS-level and router-level Internet graphs. We then seek to
resolve the seeming paradox that while the Internet certainly has hierarchy, it appears that the Internet graphs are better modeled by generators that do not explicitly construct hierarchies. We
conclude our paper with a brief study of other network structures, such as the pointer structure in the web and the set of airline routes, some of which turn out to have metric properties similar to
that of the Internet. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=80611","timestamp":"2014-04-18T08:50:49Z","content_type":null,"content_length":"39004","record_id":"<urn:uuid:17cc88eb-7c10-433e-aca3-164b5bbeddb9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
CalcTool: Two-photon absorption calculator
Two-photon absorption
The number of two-photon excitations per molecule is found for a given laser pulse, at the center of a Gaussian beam. Saturation effects are not considered, and negligible linear absorption is assumed. The two-photon cross-section must be provided (usually in GM).
Two-photon absorption occurs only in nonlinear optical molecules, and even then only at high intensity, so focused lasers are used. Here, the photon flux at the center of a Gaussian beam with the given parameters is calculated (enter the pulse energy, not the average power), and from this value we find the average number of times each molecule at that point gets excited by the pulse. If the number approaches 1 and the excited-state lifetime is of the order of the pulse
length, then saturation may take place. | {"url":"http://www.calctool.org/CALC/chem/photochemistry/2pa","timestamp":"2014-04-19T20:01:30Z","content_type":null,"content_length":"18062","record_id":"<urn:uuid:44c863a0-7e0a-42ca-99f5-ff3b026881ab>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
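A rough back-of-the-envelope version of the same calculation, simplified to a flat-top pulse and a uniform spot (all the numbers below are made-up example inputs, not values taken from the calculator):

    import math

    sigma2_GM  = 50.0      # two-photon cross-section in GM (1 GM = 1e-50 cm^4 s/photon); example value
    pulse_J    = 1e-9      # pulse energy, J (example)
    tau_s      = 100e-15   # pulse duration, s (example)
    wavelength = 800e-9    # m (example)
    waist_cm   = 0.5e-4    # focused spot radius, cm (example)

    h, c = 6.626e-34, 3.0e8
    photon_energy = h * c / wavelength        # J per photon
    photons = pulse_J / photon_energy         # photons per pulse
    area = math.pi * waist_cm ** 2            # spot area, cm^2
    flux = photons / (area * tau_s)           # photons per cm^2 per s

    sigma2 = sigma2_GM * 1e-50                # cm^4 s per photon
    excitations = sigma2 * flux ** 2 * tau_s  # mean excitations per molecule per pulse
    print(excitations)                        # of order 1 for these example inputs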
This is an archive of the Spring 2012 version of 15-150, taught by Dan Licata. This spring semester was the most polished version of the three times I taught the course, Spring 2011, Fall 2011, and
Spring 2012. I designed 15-150 with Robert Harper, along with the Spring 2011 TAs, Ian Voysey, Michael Arntzenius, Arbob Ahmad, and Zach Sparks.
The purpose of this course is to introduce the theory and practice of functional programming (FP). The characteristic feature of FP is the emphasis on computing by calculation. The traditional
distinction between program and data characteristic of imperative programming (IP) is replaced by an emphasis on classifying expressions by types that specify their behavior. Types include familiar
(fixed and arbitrary precision) numeric types, tuples and records (structs), classified values (objects), inductive types such as trees, functions with specified inputs and outputs, and commands such
as input and output. Execution of imperative programs is generalized to evaluation of well-typed expressions by a smooth extension of mathematical experience to programming practice.
The advantages of FP are significant:
• Verification: There is a close correspondence between the reasoning that justifies the correctness of a program and the program itself. Principles of proof by induction go hand-in-hand with the
programming technique of recursion.
• Parallelism: Evaluation of compound expressions is naturally parallel in that the values of subexpressions may be determined simultaneously without fear of interference or conflict among them.
This gives rise to the central concepts of the work (sequential) and span (idealized parallel) complexity of a program, and allows programs to exploit available parallelism without fear of
disrupting their correctness.
• Abstraction: FP stresses data-centric computation, with operations that act on compound data structures as whole, rather than via item-by-item processing. More generally, FP emphasizes the
isolation of abstract types that clearly separate implementation from interface. Types are used to express and enforce abstraction boundaries, greatly enhancing maintainability of programs, and
facilitating team development.
Moreover, FP generalizes IP by treating commands as forms of data that may be executed for their effects on the environment in which they occur. Upon completion of this course, students will have
acquired a mastery of basic functional programming technique, including the design of programs using types, the development of programs using mathematical verification techniques, and the
exploitation of parallelism in applications.
Prerequisites: 21-127: Concepts of Mathematics, or permission of instructor. Students will require some basic mathematical background, such as the ability to do a proof by mathematical induction, in
order to reason about program correctness.
Successful completion of this course is necessary and sufficient for entry into 15-210 Data Structures and Algorithms, which will build on the functional model of computation to develop a modern
account of parallel algorithms for a wide variety of abstract types. | {"url":"http://www.cs.cmu.edu/~15150/previous-semesters/2012-spring/","timestamp":"2014-04-18T19:46:31Z","content_type":null,"content_length":"5660","record_id":"<urn:uuid:c97f8d5c-7dc7-4017-84e7-61b199a77877>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brushless DC Motor Drive
Implement brushless DC motor drive using Permanent Magnet Synchronous Motor (PMSM) with trapezoidal back electromotive force (BEMF)
The high-level schematic shown below is built from six main blocks. The PMSM, the three-phase inverter, and the three-phase diode rectifier models are provided with the SimPowerSystems™ library. The
speed controller, the braking chopper, and the current controller models are specific to the Electric Drives library. It is possible to use a simplified version of the drive containing an
average-value model of the inverter for faster simulation.
Speed Controller
The speed controller is based on a PI regulator, shown below. The output of this regulator is a torque set point applied to the current controller block.
Current Controller
The current controller contains four main blocks, shown below. These blocks are described below.
The T-I block performs the conversion from the reference torque to the peak reference current. The relation used to convert torque to current assumes pure rectangular current waveforms. In practice,
due to the motor inductance, it's impossible to obtain these currents. Therefore the electromagnetic torque may be lower than the reference torque, especially at high speed.
The Hall decoder block is used to extract the BEMF information from the Hall effect signals. The outputs, three-level signals (−1, 0, 1), represent the normalized ideal phase currents to be injected
in the motor phases. These type of currents will produce a constant torque. The following figure shows the BEMF of phase A and the output of the Hall decoder for the phase A.
The current regulator is a bang-bang current controller with adjustable hysteresis bandwidth.
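As a rough illustration of the bang-bang rule (a sketch only, not the block's internal implementation; band stands for the current hysteresis bandwidth parameter described in the Controller tab):

    def bang_bang(i_ref, i_meas, band, switch_on):
        """Return the new switch command for one phase."""
        if i_meas < i_ref - band / 2:
            return True        # current below the band: switch on to drive it up
        if i_meas > i_ref + band / 2:
            return False       # current above the band: switch off to let it fall
        return switch_on       # inside the band: keep the previous command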
The Switching control block is used to limit the inverter commutation frequency to a maximum value specified by the user.
When using the average-value inverter, the abc current references are sent to the simplified inverter.
Braking Chopper
The braking chopper block contains the DC bus capacitor and the dynamic braking chopper, which is used to absorb the energy produced by a motor deceleration.
Average-Value Inverter
The average-value inverter is shown in the following figure.
It is composed of one controlled current source on the DC side and of two controlled voltage sources on the AC side. The DC current source allows the representation of the DC bus current behavior
described by the following equation:
I[dc] = (P[out] + P[losses]) / V[in],
with P[out] being the output AC power, P[losses] the losses in the power electronic devices, and V[in] the DC bus voltage.
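For example, with P[out] = 2 kW, P[losses] = 100 W, and V[in] = 300 V, the current source draws I[dc] = 2100 / 300 = 7 A.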
On the AC side, the voltage sources are fed by the instantaneous voltages provided by the Trapezoidal PMSM dynamic model (see PMSM documentation for machine model). This dynamic model takes the
reference currents (the rate of these currents has been limited to represent the real life currents), the measured BEMF voltages and the machine speed to compute the terminal voltages to be applied
to the machine.
The dynamic rate limiter limits the rate of the reference currents when transitions occur. The rate depends on the inverter saturation degree.
During loss of current tracking due to insufficient inverter voltage, the dynamic rate limiter saturates the reference current in accordance with this operation mode.
The model is discrete. Good simulation results have been obtained with a 2 µs time step. To simulate a digital controller device, the control system has two different sampling times:
● Speed controller sampling time
● Current controller sampling time
The speed controller sampling time has to be a multiple of the current controller sampling time. The latter sampling time has to be a multiple of the simulation time step. The average-value inverter
allows the use of bigger simulation time steps since it does not generate small time constants (due to the RC snubbers) inherent to the detailed converter. For a current controller sampling time of
40 µs, good simulation results have been obtained for a simulation time step of 40 µs. The simulation time step can, of course, not be higher than the current controller time step.
Dialog Box
Permanent Magnet Synchronous Machine Tab
The Permanent Magnet Synchronous Machine tab displays the parameters of the Permanent Magnet Synchronous Machine block of the powerlib library.
Select how the output variables are organized. If you select Multiple output buses, the block has three separate output buses for motor, converter, and controller variables. If you select Single
output bus, all variables output on a single bus.
Select between the detailed and the average-value inverter.
Select between the load torque, the motor speed and the mechanical rotational port as mechanical input. If you select and apply a load torque, the output is the motor speed according to the
following differential equation that describes the mechanical system dynamics:
This mechanical system is included in the motor model.
If you select the motor speed as mechanical input, then you get the electromagnetic torque as output, allowing you to represent externally the mechanical system dynamics. The internal mechanical
system is not used with this mechanical input selection and the inertia and viscous friction parameters are not displayed.
For the mechanical rotational port, the connection port S counts for the mechanical input and output. It allows a direct connection to the Simscape™ environment. The mechanical system of the
motor is also included in the drive and is based on the same differential equation.
Converters and DC Bus Tab
The rectifier section of the Converters and DC bus tab displays the parameters of the Universal Bridge block of the powerlib library. Refer to the Universal Bridge for more information on the
universal bridge parameters.
The inverter section of the Converters and DC bus tab displays the parameters of the Universal Bridge block of the powerlib library. Refer to the Universal Bridge for more information on the
universal bridge parameters.
The average-value inverter uses the following parameter.
The on-state resistance of the inverter switches (ohms).
The DC bus capacitance (F).
Braking Chopper Section
The braking chopper resistance used to avoid bus over-voltage during motor deceleration or when the load torque tends to accelerate the motor (ohms).
The braking chopper frequency (Hz).
The dynamic braking is activated when the bus voltage reaches the upper limit of the hysteresis band. The following figure illustrates the braking chopper hysteresis logic.
The dynamic braking is shut down when the bus voltage reaches the lower limit of the hysteresis band. The chopper hysteresis logic is shown in the following figure.
Controller Tab
This pop-up menu allows you to choose between speed and torque regulation.
When you press this button, a diagram illustrating the speed and current controllers schematics appears.
Speed Controller section
The speed measurement first-order low-pass filter cutoff frequency (Hz). This parameter is used in speed regulation mode only.
The speed controller sampling time (s). The sampling time must be a multiple of the simulation time step.
The maximum change of speed allowed during motor acceleration (rpm/s). An excessively large positive value can cause DC bus under-voltage. This parameter is used in speed regulation mode only.
The maximum change of speed allowed during motor deceleration (rpm/s). An excessively large negative value can cause DC bus overvoltage. This parameter is used in speed regulation mode only.
The speed controller proportional gain. This parameter is used in speed regulation mode only.
The speed controller integral gain. This parameter is used in speed regulation mode only.
The maximum negative demanded torque applied to the motor by the current controller (N.m).
The maximum positive demanded torque applied to the motor by the current controller (N.m).
Current Controller Section
The current controller sampling time (s). The sampling time must be a multiple of the simulation time step.
The current hysteresis bandwidth. This value is the total bandwidth distributed symmetrically around the current set point (A). The following figure illustrates a case where the current set point
is Is^* and the current hysteresis bandwidth is set to dx.
This parameter is not used when using the average-value inverter.
│ Note This bandwidth can be exceeded because a fixed-step simulation is used. A rate transition block is needed to transfer data between different sampling rates. This block causes a delay in │
│ the gates signals, so the current may exceed the hysteresis band. │
Block Inputs and Outputs
The speed or torque set point. The speed set point can be a step function, but the speed change rate will follow the acceleration / deceleration ramps. If the load torque and the speed have
opposite signs, the accelerating torque will be the sum of the electromagnetic and load torques.
The mechanical input: load torque (Tm) or motor speed (Wm). For the mechanical rotational port (S), this input is deleted.
The three phase terminals of the motor drive.
The mechanical output: motor speed (Wm), electromagnetic torque (Te) or mechanical rotational port (S).
When the Output bus mode parameter is set to Multiple output buses, the block has the following three output buses:
The motor measurement vector. This vector allows you to observe the motor's variables using the Bus Selector block.
The three-phase converters measurement vector. This vector contains:
● The DC bus voltage
● The rectifier output current
● The inverter input current
All current and voltage values of the bridges can be visualized with the Multimeter block.
The controller measurement vector. This vector contains:
● The torque reference
● The speed error (difference between the speed reference ramp and actual speed)
● The speed reference ramp or torque reference
When the Output bus mode parameter is set to Single output bus, the block groups the Motor, Conv, and Ctrl outputs into a single bus output.
Model Specifications
The library contains a 3 hp drive parameter set. The specifications of the 3 hp drive are shown in the following table.
3 HP Drive Specifications
┃ Drive Input Voltage │ │ ┃
┃ │ Amplitude │ 220 V ┃
┃ │ Frequency │ 60 Hz ┃
┃ Motor Nominal Values │ │ ┃
┃ │ Power │ 3 hp ┃
┃ │ Speed │ 1650 rpm ┃
┃ │ Voltage │ 300 Vdc ┃
The ac7_example example illustrates an AC7 motor drive simulation with standard load condition. At time t = 0 s, the speed set point is 300 rpm.
There are two design tools in this example. The first block calculates the gains of the speed regulator in accordance with your specifications. The second block plots the operating regions of the
drive. Open these blocks for more information.
As shown in the following figure, the speed precisely follows the acceleration ramp. At t = 0.5 s, the nominal load torque is applied to the motor. At t = 1 s, the speed set point is changed to 0
rpm. The speed decreases to 0 rpm. At t = 1.5 s., the mechanical load passes from 11 N.m to −11 N.m. The next figure shows the results for the detailed converter and for the average-value converter.
Observe that the average voltage, current, torque, and speed values are identical for both models. Notice that the higher frequency signal components are not represented with the average-value
AC7 Example Waveforms (Blue: Detailed Converter, Red: Average-Value Converter)
[1] Bose, B. K., Modern Power Electronics and AC Drives, Prentice-Hall, N.J., 2002.
[2] Krause, P. C., Analysis of Electric Machinery, McGraw-Hill, 1986.
[3] Tremblay, O., Modélisation, simulation et commande de la machine synchrone à aimants à force contre-électromotrice trapézoïdale, École de Technologie Supérieure, 2006. | {"url":"http://www.mathworks.se/help/physmod/sps/powersys/ref/brushlessdcmotordrive.html?nocookie=true","timestamp":"2014-04-23T17:16:44Z","content_type":null,"content_length":"60667","record_id":"<urn:uuid:196b5eeb-a2bf-40f9-a8fd-259868e63fb9>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Easton, CT Math Tutor
Find an Easton, CT Math Tutor
...I have taught after school classes in prepping for PSAT and SAT in a local district. I have developed Powerpoint lectures that explain what to expect on the test and how it is scored along
with strategies to attain high scores. I have a bachelor's degree in mathematics, and I am currently certified by CT to teach all secondary math subjects.
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I taught an undergraduate course in Probability & Statistics 7 times from 2005-2008. I passed the first actuarial exam with a perfect 10. I received a grade of A in both "Introduction to
Logic" and the more difficult course "Symbolic Logic" as an undergraduate.
16 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have over 100 hours of experience tutoring the broad range of subjects included under the heading Precalculus. In some ways it is more challenging for students than Calculus, because so many
unrelated math concepts are encountered. In contrast, Calculus is conceptually unified.
7 Subjects: including algebra 1, algebra 2, calculus, SAT math
I am a Cornell graduate and soon to be PhD looking for an opportunity to share my passion for science through tutoring. My academic career has given me training in engineering and scientific
research which gives me a unique perspective on science in the real world. I have experience teaching high school students, college undergraduates and even PhD students.
20 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I am currently a Kindergarten teacher and have a variety of experience as a substitute in other grade levels, as well as with Special Education programs and classes. I have worked with
students of all levels and find ways to help everyone succeed. As a Kindergarten teacher I know the importance...
12 Subjects: including algebra 1, algebra 2, reading, grammar
Wilton, CT Math Tutors | {"url":"http://www.purplemath.com/Easton_CT_Math_tutors.php","timestamp":"2014-04-21T02:23:39Z","content_type":null,"content_length":"23874","record_id":"<urn:uuid:f535a336-ab64-4911-85c7-f40163c0b594>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
There's no Trig Identity for this is there?
For a wave described by:
A*sin(x)+B*sin(x-PI/3)
I'd like to describe it in terms of a single sine or cosine function.
It does reduce to A*sqrt(3)*sin(x-PI/6), but only if A=B.
I can't seem to find any identity that would work if A is not equal to B though. Am I right in assuming that it's not possible to describe this wave in terms of a single sine or cosine function when
A is not equal to B?
Strill wrote:For a wave described by: A*sin(x)+B*sin(x-PI/3)
I'd like to describe it in terms of a single sine or cosine function.
Is this the sort of thing you're looking for?
Wikipedia wrote:More generally, for an arbitrary phase shift, we have
$a\sin x+b\sin(x+\alpha)= c \sin(x+\beta)\,$
$c = \sqrt{a^2 + b^2 + 2ab\cos \alpha},\,$
$\beta = \arctan \left(\frac{b\sin \alpha}{a + b\cos \alpha}\right) + \begin{cases} 0 & \text{if } a + b\cos \alpha \ge 0, \\ \pi & \text{if } a + b\cos \alpha < 0. \end{cases}$
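[Worked application, added for clarity (not part of the original thread). Taking $a = A$, $b = B$, and $\alpha = -\pi/3$ in the poster's expression $A\sin x + B\sin(x - \pi/3)$, the identity above gives
$c = \sqrt{A^2 + B^2 + 2AB\cos(\pi/3)} = \sqrt{A^2 + AB + B^2}$
$\beta = \arctan\left(\frac{-(\sqrt{3}/2)B}{A + B/2}\right)$
(the $+\pi$ case does not apply, since $A + B/2 > 0$ for positive amplitudes). For $A = B$ this collapses to $c = A\sqrt{3}$ and $\beta = -\pi/6$, which recovers the $A\sqrt{3}\,\sin(x - \pi/6)$ form mentioned earlier in the thread.]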
Re: There's no Trig Identity for this is there?
Yes, thank you that's exactly what I was looking for. | {"url":"http://www.purplemath.com/learning/viewtopic.php?f=12&t=2319&p=6741","timestamp":"2014-04-21T13:12:59Z","content_type":null,"content_length":"20876","record_id":"<urn:uuid:480af87c-2986-4650-b666-eee56a418465>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
An approach to matched field processing in correlated multipath
ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06
2aUW4. An approach to matched field processing in correlated multipath environments.
Robert Zeskind
The MITRE Corp., 7525 Colshire Dr., McLean, VA 22102-3481
Mark Owen
PRESEARCH, Inc., Fairfax, VA 22031
Matched field processing (MFP) is based on some type of comparison of actual received signal across the elements of an array to predicted array element response based on complicated propagation
models for postulated source locations. It is well known that these techniques are very sensitive to model mismatch. Investigators have proposed modified MFP techniques of varying complexity to be
more robust to mismatch. A simplified approach to MFP in a correlated multipath environment is to estimate the parameters of the path arrivals at the array based on an assumed wavefront model for
each path. The parameters to be estimated are arrival angles, relative received level for each path, and the correlation coefficient between each pair of paths. For spherical wavefronts, the distance
traveled along each path is also estimated. It is assumed that there are only a few dominant paths and their number is known. The parameter estimation problem is posed as a well-known optimization
problem. The solution to the optimization problem not only acts as a detector, but provides the set of estimated parameters to be used in a raytrace propagation model to back propagate the multipath
to localize the detected target. An example is presented to illustrate the method. | {"url":"http://www.auditory.org/asamtgs/asa95wsh/2aUW/2aUW4.html","timestamp":"2014-04-19T18:47:54Z","content_type":null,"content_length":"1994","record_id":"<urn:uuid:ffbfbbef-90ff-4a9a-9ae1-0a9ca8421f0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of swaps in Bubble Sort
I have a version of bubble sort:
int i, j;
for i from n downto 1
for j from 1 to i-1
if (A[j] > A[j+1])
swap(A[j], A[j+1])
I want to calculate the expected number of swaps using the above version of bubble sort. The method used by me is shown below :
// 0 based index
float ans = 0.0;
for ( int i = 0; i < n-1; i++ )
    for ( int j = i+1; j < n; j++ ) {
        ans += getprob( a[i], a[j] ); // computes probability that a[i] > a[j].
    }
Am i going the correct way or am I missing something?
c++ algorithm math bubble-sort
7 Why don't you run this on a randomised dataset, and find out? – Oli Charlesworth Jul 4 '12 at 14:49
2 "The number of" somethings is rarely float. And I don't understand getprob() at all, it gets the numbers, so it can just ... answer exactly, what's with the probability? – unwind Jul 4 '12 at
1 This is probably easier to solve on paper than in a program. – larsmans Jul 4 '12 at 14:54
@unwind The number is float because i have to calculate the expected number of swaps, and I have to make it for a general case when an element a[i] > a[j] ( i < j ) with some probability p. –
TheRock Jul 4 '12 at 15:22
see this answer. – Nicholas Mancuso Aug 9 '12 at 13:54
3 Answers
The best way to get the answer is by running the bubble-sort algorithm itself and including a counter after the swap() call. Your calculation function would (a) take almost as long as the
sort itself (depending on the runtime of swap() vs. getprob()) and (b) miss the point that the order of the elements changes while sorting.
Btw, the exact number of swap() calls depends on the data you need to sort - you have n*(n-1)/2 comparisons and any of them could result in a swap (on average, half of the time you need to
swap the compared elements).
@C Stoll: I get your point that on average half of the time you need to swap the compared elements but this has a assumption that each element a[i] > a[j] ( i < j ) with a probability of
1/2. But I need for something when I previously know that a[i] > a[j] ( i < j ) with a probability p, thus it doesn't include the complexity of bubble sort. My getprob() function works in
O(1) time. As far as I understand number of swaps basically depends on the number of inversions of the array. – TheRock Jul 4 '12 at 15:26
@TheRock The number of swaps is the number of inversions in the array. If all array entries are different and the permutation is uniformly distributed, the expected number of swaps is
simply n*(n-1)/4. If getprob() is independent of the values/positions, p*n*(n-1)/2. But you seem to have some more complicated constraints. – Daniel Fischer Jul 4 '12 at 15:32
@DanielFischer : I don't have to do for the general case actually in my case each of array elements can change with some probabilty, and i can get the probability of a[i] > a[j] ( i < j
). Then I need the expected number of swaps to sort the array using bubble sort. – TheRock Jul 4 '12 at 15:36
@TheRock: What do you want to do when you know the number of swaps needed for sorting? (btw, swap() typically works in O(1) time too - at least for build-in types and STL classes) – C.
Stoll Jul 5 '12 at 5:52
@C.Stoll : As there are different possible configurations of the array, So I have calculate the expected number of swaps that would take place o sort the array using bubble sort. I have
listed all constraints here stackoverflow.com/questions/11340223/… . – TheRock Jul 5 '12 at 8:18
Maybe this helps. Basically this provides a framework to run bubble sorts on a set of simulation datasets and to calculate the swap probability.
Let this probability = p. Then to find the expected number of swap operations, you need to apply this on a real dataset. Let n be the size of this dataset. Then expected number =
swapProbability * n * (n-1) / 2
n*(n-1)/2 comes because the bubble sort above performs n*(n-1)/2 comparisons.
float computeSwapProbability()
{
    int aNumSwaps = 0;
    int aTotalNumberOfOperations = 0;
    For all simulation datasets
    {
        for i from n downto 1
            for j from 1 to i-1
            {
                aTotalNumberOfOperations++;
                if (A[j] > A[j+1])
                {
                    swap(A[j], A[j+1]);
                    aNumSwaps++;
                }
            }
    }
    return (float)aNumSwaps/aTotalNumberOfOperations;
}
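Not part of the original answer: a minimal, self-contained C++ sketch of the same idea (the helper name countBubbleSortSwaps and the test data are just illustrative). For uniformly random arrays with distinct values, every pair is out of order with probability 1/2, so the Monte Carlo average of swap counts should land near n*(n-1)/4, which is exactly what summing getprob over all pairs would predict in that case.

#include <cstdio>
#include <cstdlib>

// Runs the bubble sort from the question and returns how many swaps it made.
int countBubbleSortSwaps(int *a, int n) {
    int swaps = 0;
    for (int i = n - 1; i >= 1; --i) {
        for (int j = 0; j < i; ++j) {
            if (a[j] > a[j + 1]) {
                int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                ++swaps;
            }
        }
    }
    return swaps;
}

int main() {
    const int n = 10, trials = 100000;
    double totalSwaps = 0.0;
    int a[n];
    for (int t = 0; t < trials; ++t) {
        for (int k = 0; k < n; ++k) a[k] = rand();   // random data; ties are rare
        totalSwaps += countBubbleSortSwaps(a, n);
    }
    printf("average swaps over trials : %f\n", totalSwaps / trials);
    printf("predicted n*(n-1)/4       : %f\n", n * (n - 1) / 4.0);
    return 0;
}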
The best way to count swaps is to increment a counter variable inside the swap if-condition.
int swapCount = 0;
for (i = 0; i < (length-1); ++i) {
    for (j = 0; j < (length-i-1); ++j) {
        if (array[j] > array[j+1]) {
            temp = array[j+1];
            array[j+1] = array[j];
            array[j] = temp;
            swapCount++;
        }
    }
}
printf("Swap count : %d", swapCount);
Not the answer you're looking for? Browse other questions tagged c++ algorithm math bubble-sort or ask your own question. | {"url":"http://stackoverflow.com/questions/11331314/number-of-swaps-in-bubble-sort","timestamp":"2014-04-17T22:26:07Z","content_type":null,"content_length":"86878","record_id":"<urn:uuid:76e910af-3439-4ff0-b2a2-a6a4c0f16751>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plainfield, IL ACT Tutor
Find a Plainfield, IL ACT Tutor
...Having taught mathematics in both a traditional high school as well as an alternative education setting, I have learned effective ways to develop rapport with students to help them become
successful with regard to mathematics concepts. My recent work with students includes two years of tutoring ...
16 Subjects: including ACT Math, calculus, geometry, algebra 1
...It all boils down to some pretty straightforward concepts: - Numbers (Natural, Whole, Integers, Rational, Real, Imaginary, Complex) - Operations (Add, Multiply, Exponent) - Formatting (Money,
Percents, Fractions, Decimals, Measurements, etc.) I keep no mystery from my students. I find most co...
14 Subjects: including ACT Math, geometry, GRE, ASVAB
...I tutored my two wonderful kids from elementary to high school and college years. They are now both highly educated and successful in their respective fields. I have also tutored for the Dupage
Literacy Program.
23 Subjects: including ACT Math, chemistry, algebra 1, algebra 2
...I grew up in Bloomingdale, graduated from Lake Park High School in Roselle, and returned to Itasca in 2001 to be closer to family. I married my high school sweetheart 16 years ago and now have
two daughters (5th grade and 2nd grade) and our new Golden Retriever puppy, Peaches. Learning is truly my passion.
13 Subjects: including ACT Math, calculus, statistics, geometry
...And let me finish with my credo: In school and after school, only systematic efforts will bring good results. Believing otherwise is wishful thinking.I am from Bulgaria, my native language is
Bulgarian. However, being a child of a teacher in Russian language, I qualified to be a student in a Russian language elementary and middle school.
31 Subjects: including ACT Math, reading, physics, geometry | {"url":"http://www.purplemath.com/Plainfield_IL_ACT_tutors.php","timestamp":"2014-04-19T17:47:12Z","content_type":null,"content_length":"23831","record_id":"<urn:uuid:f7fcf6bd-c29e-4125-aa99-43dc91768bcb>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sanderson M. Smith
HYPOTHESIS TESTING EXAMPLES USING NORMAL DISTRIBUTION
Candidate Jones is one of two candidates running for mayor of Central City. A random polling of 672 registered voters finds that 323 (48% of those polled) will vote for him. Is it reasonable to
assume that the race for mayor is a tossup? (That is, if p is the proportion of the population who will vote for Jones, is it reasonable to assume that p = 0.5?)
Analysis: The population involved is the registered voters in Central City. As worded, this is a 2-tail situation. We are testing the null hypothesis
H[o]: p = 0.5
against the alternate hypothesis
H[a]: p is not equal to 0.5.
This is a binomial setting with N = 672 and p = 0.5.
Let p(hat) be the proportion of registered voters in a sample of size 672 who will vote for Jones. If H[o] is true, the distribution of p(hat) is approximately normal, the mean of the
distribution of p(hat) is 0.5, and the standard deviation of p(hat) is SQRT[(0.5)(0.5)/672] = 0.019287, or about 1.93%.
The mean of our specific sample is p(hat) = 323/672 = 0.48. If H[o] is true, the associated z score is (0.48 - 0.5)/0.0193 = -1.04. At the 2-tail, 5% level of significance, the critical z scores are z > 1.96 or z < -1.96. There is not enough evidence to reject H[o] at the 5% level.
[We can also note that normalcdf(-1E99,0.48,0.5,.0193) = 0.15, or about 15%. Basically, if H[o] is true, the probability of obtaining a sample proportion of 48% or less is about 15%. We don't have "strong" evidence to reject H[o]. And, we are simply saying that it is not unreasonable to assume that the population parameter is 50%.]
In a quality control situation, the mean weight of objects produced is supposed to be 16 ounces with a standard deviation of 0.4 ounces. A random sample of 70 objects yields a mean weight of 15.8
ounces. Is it reasonable to assume that the production standards are being maintained?
Analysis: The population involved is the means of random samples of size 70 chosen from a population with mean = 16 and standard deviation = 0.4. Call this population P. The Central Limit Theorem
says that the distribution of P is normal, that the mean of P is 16, and that the standard deviation of P is 0.4/SQRT(70) = 0.048. As worded, this is a 2-tail situation. The null and alternate
hypotheses are, respectively
H[o]: The sample came from a population with mean = 16.
H[a]. The sample did not come from a population with mean = 16.
If we test at the 5% level of significance, the critical z scores are z > 1.96 and z < -1.96. The sample z score is z = (15.8-16)/0.048 = -4.17. We therefore reject H[o] at the 5% level of significance.
[Note: The critical z scores at the 1% level of significance are z < -2.58 and z > 2.58. We would reject Ho at the 1% level of significance. In other words, there is very strong evidence to
reject H[o].]
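Again as an illustrative aside (not in the original), the analogous one-sample z test for a mean with known sigma; it reuses the same erfc-based tail probability, and the names are hypothetical.

#include <cmath>
#include <cstdio>

// Two-sided z test of H[o]: mu = mu0, with the population sigma assumed known.
double meanZTest(double sampleMean, double mu0, double sigma, double n, double *pValue) {
    double se = sigma / std::sqrt(n);                     // SD of the sample mean (CLT)
    double z = (sampleMean - mu0) / se;
    *pValue = std::erfc(std::fabs(z) / std::sqrt(2.0));   // two-sided tail probability
    return z;
}

int main() {
    double pValue;
    double z = meanZTest(15.8, 16.0, 0.4, 70, &pValue);   // the quality-control data
    std::printf("z = %.2f, two-sided p-value = %.6f\n", z, pValue);
    // |z| is far beyond 2.58, so H[o] is rejected even at the 1% level.
    return 0;
}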
I roll a single die 1,000 times and obtain a "6" on 204 rolls. Is there significant evidence to suggest that the die is not fair?
Analysis: The population involved consists of the proportion of 6's obtained when a die is rolled 1,000 times. This population consists of 1,001 proportions: {0/1000, 1/1000, 2/1000, ..., 999/1000,
1000/1000}. If p is the proportion of 6's obtained, then our null and alternate hypotheses are, respectively
H[o]: p = 1/6
H[a]: p is not equal to 1/6
This is a 2-tail situation. If H[o] is true, we have a binomial setting with p = 1/6 and N = 1,000. If p(hat) is a sample proportion of 6's, then the distribution of p(hat) is approximately
normal, the mean of the distribution of p(hat) is 1/6 and the standard deviation is SQRT[(1/6)(5/6)/1000] = 0.011785, or approximately 0.012.
Our sample proportion is p(hat) = 204/1000 = 0.204, or 20.4%. If H[o] is true, the probability of obtaining this result is 0.00107, or approximately (1/10)%. [=normalcdf(.2035,1E99,1/6,.012)]
There is strong evidence to reject H[o], since it is highly unlikely that this result would be obtained if H[o] is true.
[Note: Our z statistics would be z = (0.204-1/6)/.012 = 3.11, which is "way out there" on a normal distribution curve.] | {"url":"http://www.herkimershideaway.org/writings/hyptsn.htm","timestamp":"2014-04-17T06:49:01Z","content_type":null,"content_length":"7660","record_id":"<urn:uuid:ffdf575c-05ad-4f0f-9097-b158170a8f2d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laguna Beach Algebra 1 Tutor
Find a Laguna Beach Algebra 1 Tutor
Hi, my name is Jonathan and I'm a student at UC Irvine. I'm nearly done with my undergraduate degree in Chemistry, but I've been tutoring students and colleagues since high school. My favorite
part of tutoring, by far, is the reward of seeing someone succeed in an area they were struggling with.
9 Subjects: including algebra 1, chemistry, calculus, algebra 2
...I am proficient in math (up to Algebra 1), literature, essay writing, history, and English (all ages). I'm a very patient person and believe that knowledge is useless unless it is shared. I
believe that knowledge and education are gifts that must be used for the betterment of the individual and society. I consider myself a very well-rounded and easy-going yet responsible person.
21 Subjects: including algebra 1, English, reading, writing
...During law school, I had to speak in front of peers, judges, etc. Recently, I spoke at several events on behalf of a high-profile law firm in Los Angeles. I am a very experienced essay writer.
18 Subjects: including algebra 1, English, writing, geometry
...Let's go back to the basics. Unit conversion. Visualization.
12 Subjects: including algebra 1, chemistry, writing, English
...As for my Psychology experience, my BA in Psychology has given me the knowledge to tutor anything from introductory Psychology to special topics of Psychology such as Social Psychology,
Research Methods and more. Although, I have not tutored in a formal setting. I do have teaching experience.
9 Subjects: including algebra 1, Spanish, geometry, algebra 2 | {"url":"http://www.purplemath.com/Laguna_Beach_algebra_1_tutors.php","timestamp":"2014-04-17T13:33:31Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:e5bba386-458a-4e5e-8366-4c039a716b8d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Categorical Syllogisms
5.2 Venn Diagrams
Slide 1
• Categorical syllogisms have three terms.
• So, Venn diagrams for categorical syllogisms will require three circles.
• The Venn diagrams for categorical syllogisms will be based upon the same strategy as used for categorical propositions.
• So, review the Venn diagrams for the four categorical propositions.
Slide 2
• Here is a sample categorical syllogism:
• No P are M.
• All S are M.
No S are P.
• To start representing this argument on a diagram, we would draw three circles as represented on the following slide.
Slide 3
• Next, the premises should be represented on the diagram.
• If there is a particular premise and a universal premise, diagram the universal premise first.
• Since both premises are universal in this example, either can be diagrammed first.
• So, we will start with the first premise, "No P are M".
Slide 4
• In order to diagram "No P are M", concentrate only on circles P and M.
• The areas where P and M overlap should be shaded, just as they were with two circled diagrams.
• Try to do this on your own, before looking at the diagram on the next slide.
Slide 5
• Now, the next premise must be diagrammed on the Venn diagram.
• So, "All S are M" must be put on the diagram.
• Concentrate only on the S and M circles.
• Just as with two circle diagrams, the regions that are S and NOT M must be shaded.
• Try to do this on your own before viewing the next slide.
Slide 6
• NEVER draw the conclusion on the diagram.
• LOOK for the conclusion on the diagram once you have drawn the premises.
• If the conclusion is represented by drawing the premises, the argument is valid.
• If the conclusion is not represented by drawing the premises, the argument is invalid.
• Look for "No S are P" on the diagram.
Slide 7
• Since regions 3 and 6 are shaded on the Venn diagram, and regions 3 and 6 represent "No S are P", the argument is valid.
• Were regions 3 and 6 not both shaded, the argument would have been invalid. | {"url":"http://itdc.lbcc.edu/cps/philosophy/rh/ch5_2phil12/5-2-notes.html","timestamp":"2014-04-20T22:15:26Z","content_type":null,"content_length":"3736","record_id":"<urn:uuid:ad94b14d-b401-4aab-a621-5299dfb031fb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Eigenvalues and Eigenvectors
Click here to open a new window with a Java applet. Arrange these two windows so that they overlap and you can move easily back-and-forth between them by clicking on the exposed portion of the
inactive window to make it active.
This applet can be used to explore a linear transformation written in the form
y = Ax
where A is a two-by-two matrix. The elements of the matrix are displayed at the right-hand side of the applet. By clicking or dragging on the graph you can move the vector x, which is drawn in red. As
you move the vector x, the vector y = Ax also moves. It is drawn in blue. Before reading on, move the vector x to get an idea what this particular linear transformation does. For example, you might
note that
• The vector y is always the same length as the vector x except that it has been rotated counter-clockwise by pi/2 radians.
• If you stretch the vector x then the vector y stretches as well.
You can change the entries in the matrix A by clicking on the entry you want to change. A dialog box will ask you for the new entry.
This applet provdes a good way for students to literally see some of the ideas involved in linear equations. For example, change the matrix A to a singular matrix like
a[11] = 0.5       a[12] = 1.0
a[21] = 1.0       a[22] = 2.0
Notice that you can observe the following things experimentally and visually
• No matter what the value of x the value of y always lies along the same line. Thus, the linear transformation A is not onto and is not invertible.
• There is a line that is mapped into zero -- that is, you can determine the kernel visually.
We begin our study of eigenvalues and eigenvectors by looking at two applications where eigenvalues and eigenvectors come up naturally and add significantly to our understanding.
Application 1:
Collegetown has two pizza restaurants and a large number of hungry pizza-loving students and faculty. 5,000 people buy one pizza each week. Joe's Chicago Style Pizza has the better pizza and 80%
of the people who buy pizza each week at Joe's return the following week. Steve's New York Style Pizza uses lower quality cheese and doesn't have a very good sauce. As a result only 40% of the
people who buy pizza at Steve's each week return the following week. As usual, we can represent this situation by a discrete dynamical system
P[n + 1] = AP[n]
a[11] = 0.80       a[12] = 0.60
a[21] = 0.20       a[22] = 0.40
Click here to open a Mathematica notebook to investigate this situation. Evaluate the notebook. Notice that if we start with 2500 customers at each pizza restaurant in the first week then after a
very few weeks Joe's Chicago Style Pizza seems to have three times as many customers each week as Steve's New York Style Pizza.
Next enter the matrix above in the Java applet. Play with the applet a bit to see if you notice anything worthy of note.
The two screenshots below show some behavior that is worthy of note.
Notice in the screen shot above that when the first coordinate (Joe's Pizza) of the vector x is roughly three times its second coordinate (Steve's Pizza) then Ax = x. Notice in the screen shot
below that when the first and second coordinates have the same absolute value but opposite signs that Ax lines up with x but is roughly one-fifth as long.
Notice the following lines from the evaluated Mathematica notebook.
Do you notice any connection between this rather cryptic Mathematica output and our observations above? We will return to this point later but first we look at another application.
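(Added sketch, not part of the original page, which uses the Java applet and a Mathematica notebook: a few lines of C++ reproduce both observations directly. The iteration shows the customer counts settling toward the 3-to-1 split, and the quadratic formula applied to the characteristic polynomial gives the two eigenvalues, 1 and 0.2, matching the behavior seen in the screen shots.)

#include <cmath>
#include <cstdio>

int main() {
    // Weekly pizza-customer model: state = (Joe, Steve), P[n+1] = A * P[n].
    double a11 = 0.80, a12 = 0.60,
           a21 = 0.20, a22 = 0.40;

    // Iterate the dynamical system starting from 2500 customers at each restaurant.
    double joe = 2500.0, steve = 2500.0;
    for (int week = 1; week <= 10; ++week) {
        double joeNext   = a11 * joe + a12 * steve;
        double steveNext = a21 * joe + a22 * steve;
        joe   = joeNext;
        steve = steveNext;
        std::printf("week %2d: Joe %7.1f  Steve %7.1f\n", week, joe, steve);
    }

    // Eigenvalues of a 2x2 matrix from lambda^2 - trace*lambda + det = 0
    // (this matrix has real eigenvalues, so the discriminant is nonnegative).
    double trace = a11 + a22;
    double det   = a11 * a22 - a12 * a21;
    double disc  = std::sqrt(trace * trace - 4.0 * det);
    std::printf("eigenvalues: %.2f and %.2f\n", (trace + disc) / 2.0, (trace - disc) / 2.0);
    return 0;
}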
Application 2:
The three figures below show the current population by age and gender and the projected population by age and gender in 25 years for three countries. These figures were obtained from the United
States Census Bureau International Data Base. Demographic information like this is extraordinarily important for understanding the vitality, needs, and resources of a country. The three examples
given above have very different properties. The United States and Japan are highly developed countries with quite different demographics. Compare the fraction of the population that is young in
the two countries both currently and, as projected by the U.S. Census Bureau, in 25 years. Notice how different these two countries are from Pakistan.
We will look at a very simplified model of population growth for a hypothetical species and habitat. You can build much more realistic models using data available at the United States Census
Bureau. Our simplified model will ignore gender and immigration and emigration and will use only two age groups. These are serious simplifications but the tools we will develop this semester will
enable us to look at much more realistic models. This example is just a starting point. Ignoring immigration, for example, ignores one of the most important differences between Japan whose
immigration is close to zero and the United States.
Our model has two age groups -- the young (less than one year old) and the old (one or more years old). Thus, each year the population is represented by a two-dimensional vector
P = (A, B)
whose first coordinate is the young population and whose second coordinate is the old population. In this example the fertility rate for young people is 30% which means that on the average each
young individual gives birth to 0.30 new (and hence young) individuals. The fertility rate for old individuals is 80% which means that on the average each old individual gives birth to 0.80 new
(and hence young) individuals. In our model the survival rate for young individuals is 90% and for old individuals is 10%. This gives us the model
P[n + 1] = AP[n]
a[11] = 0.30       a[12] = 0.80
a[21] = 0.90       a[22] = 0.10
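(Quick check, not in the original page, which uses a Mathematica notebook for this: iterating the model above in a few lines of C++ shows the long-run behavior. The starting population is arbitrary; the printed growth rate and young-population share should settle near the 5.44% and 51.4% figures quoted at the end of this page.)

#include <cstdio>

int main() {
    // Two-age-class model from the text: P = (young, old),
    // young' = 0.30*young + 0.80*old,  old' = 0.90*young + 0.10*old.
    double young = 1000.0, old_ = 1000.0;   // any positive starting population works
    double growth = 1.0;
    for (int year = 1; year <= 40; ++year) {
        double youngNext = 0.30 * young + 0.80 * old_;
        double oldNext   = 0.90 * young + 0.10 * old_;
        growth = (youngNext + oldNext) / (young + old_);  // total-population growth factor
        young  = youngNext;
        old_   = oldNext;
    }
    std::printf("long-run growth rate : %.2f%% per year\n", (growth - 1.0) * 100.0);
    std::printf("young share of total : %.1f%%\n", 100.0 * young / (young + old_));
    return 0;
}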
Enter the matrix A into the Java applet and experiment. Do you notice anything noteworthy?
The two screen shots below show two things that are worthy of note. The first one shows a vector that when multiplied by the matrix A results in another vector that is in the same direction but
somewhat longer. The second one shows a vector that when multiplied by the matrix A results in another vector in exactly the opposite direction that is considerably shorter.
Click here for another Mathematica notebook. This Mathematica notebook explores this same model. Evaluate the notebook and compare the results with your observations above. Notice that over time
the total population is growing at a rate of 5.44% per year and seems to be settling into a pattern with the young population being 51.4% of the total population. | {"url":"https://www.math.duke.edu/education/webfeatsII/Lite_Applets/Eigenvalue/start.html","timestamp":"2014-04-21T14:42:05Z","content_type":null,"content_length":"8491","record_id":"<urn:uuid:6bb2e0bf-dfd1-43a5-9fe0-56c32970d617>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a division ladder in math?
The ladder paradox (or barn-pole paradox) is a thought experiment in special relativity. It involves a ladder travelling horizontally and undergoing a length contraction, the result of which being
that it can fit into a much smaller garage. On the other hand, from the point of view of an observer moving with the ladder, it is the garage that is moving and the garage will be contracted to an
even smaller size, therefore being unable to contain the ladder at all. This apparent paradox results from the assumption of absolute simultaneity. In relativity, simultaneity is relative to each
observer and thus the ladder can fit into the garage in both instances.
Related Websites: | {"url":"http://answerparty.com/question/answer/what-is-a-division-ladder-in-math","timestamp":"2014-04-18T14:35:12Z","content_type":null,"content_length":"21077","record_id":"<urn:uuid:002a6eb8-cd2a-44bb-ae7d-c9c0d8afc115>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predicting {0,1} functions on randomly drawn points
Results 1 - 10 of 46
, 1995
"... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated
by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas pr ..."
Cited by 419 (16 self)
We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by
training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an
improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best
general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the
representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to
cases in which the conc...
- JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY , 1997
"... We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the ..."
Cited by 317 (66 self)
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no
assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit
sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum
achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching
leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in
this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
- In Proceedings of the 31st Annual Symposium on Foundations of Computer Science , 1990
"... Abstract. An algorithm is presented for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses. (A Horn clause is a disjunction of literals, all but at most
one of which is a negated variable.) The algorithm uses equivalence queries and membership queries to prod ..."
Cited by 112 (16 self)
Abstract. An algorithm is presented for learning the class of Boolean formulas that are expressible as conjunctions of Horn clauses. (A Horn clause is a disjunction of literals, all but at most one
of which is a negated variable.) The algorithm uses equivalence queries and membership queries to produce a formula that is logically equivalent to the unknown formula to be learned. The amount of
time used by the algorithm is polynomial in the number of variables and the number of clauses in the unknown formula.
- Machine Learning , 1994
"... In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that
depend on properties of both the prior distribution over concepts and the sequence of instances seen by the l ..."
Cited by 108 (12 self)
In this paper we study a Bayesian or average-case model of concept learning with a twofold goal: to provide more precise characterizations of learning curve (sample complexity) behavior that depend
on properties of both the prior distribution over concepts and the sequence of instances seen by the learner, and to smoothly unite in a common framework the popular statistical physics and VC
dimension theories of learning curves. To achieve this, we undertake a systematic investigation and comparison of two fundamental quantities in learning and information theory: the probability of an
incorrect prediction for an optimal learning algorithm, and the Shannon information gain. This study leads to a new understanding of the sample complexity of learning in several existing models. 1
Introduction Consider a simple concept learning model in which the learner attempts to infer an unknown target concept f, chosen from a known concept class F of {0,1}-valued functions over an
instance space X....
, 1992
"... : Let V ` f0; 1g n have Vapnik-Chervonenkis dimension d. Let M(k=n;V ) denote the cardinality of the largest W ` V such that any two distinct vectors in W differ on at least k indices. We show
that M(k=n;V ) (cn=(k + d)) d for some constant c. This improves on the previous best result of ((cn ..."
Cited by 93 (4 self)
: Let V ⊆ {0,1}^n have Vapnik-Chervonenkis dimension d. Let M(k/n; V) denote the cardinality of the largest W ⊆ V such that any two distinct vectors in W differ on at least k indices. We show that M(k/n; V) ≤ (cn/(k + d))^d for some constant c. This improves on the previous best result of ((cn/k) log(n/k))^d. This new result has applications in the theory of empirical processes. 1 The author gratefully acknowledges the support of the Mathematical Sciences Research Institute at UC Berkeley and ONR grant N00014-91-J-1162. 1 1 Statement of Results Let n be a natural number greater than zero. Let V ⊆ {0,1}^n. For a sequence of indices I = (i_1, . . . , i_k), with 1 ≤ i_j ≤ n, let V|_I denote the projection of V onto I, i.e. V|_I = {(v_{i_1}, . . . , v_{i_k}) : (v_1, . . . , v_n) ∈ V}. If V|_I = {0,1}^k then we say that V shatters the index sequence I. The Vapnik-Chervonenkis dimension of V is the size of the longest index sequence I that is shattered by V [VC71] (t...
- Mathematical Finance , 1998
"... We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm
employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algo ..."
Cited by 80 (10 self)
We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm
employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algorithm is very simple to implement and requires only constant storage and computing time per
stock in each trading period. We tested the performance of our algorithm on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm clearly outperforms the best single stock as well as Cover's universal portfolio selection algorithm. We also present results for the situation in which the ... We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio investment strategy. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and
Warmuth [20]. Our algorithm is very simple to implement and its time and storage requirements grow linearly in the number of stocks.
- Machine Learning , 1994
"... Abstract. In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the
domain is used to generate random examples for the learning algorithm and measure the speed at which the target ..."
Cited by 68 (3 self)
Abstract. In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is
used to generate random examples for the learning algorithm and measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to
maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy ε. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability ε of making a mistake if the target movement rate is at most a constant times ε²/(k(d + k) ln(1/ε)), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times ε²/(d² ln(1/ε)). In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even with unlimited computational power) can tolerate is a constant times ε²/d.
- MACHINE LEARNING , 1995
"... Within the framework of pac-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept
class C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function r ..."
Cited by 61 (3 self)
Within the framework of pac-learning, we explore the learnability of concepts from samples using the paradigm of sample compression schemes. A sample compression scheme of size k for a concept class
C ⊆ 2^X consists of a compression function and a reconstruction function. The compression function receives a finite sample set consistent with some concept in C and chooses a subset of k examples as
the compression set. The reconstruction function forms a hypothesis on X from a compression set of k examples. For any sample set of a concept in C the compression set produced by the compression
function must lead to a hypothesis consistent with the whole original sample set when it is fed to the reconstruction function. We demonstrate that the existence of a sample compression scheme of
fixed-size for a class C is sufficient to ensure that the class C is pac-learnable. Previous work has shown that a class is pac-learnable if and only if the Vapnik-Chervonenkis (VC) dimension of the
class i...
, 1997
"... This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting
to model the ``typical'' labeling of a finite automaton, while retaining a worst-case model for the under ..."
Cited by 48 (10 self)
This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to
model the ``typical'' labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not
provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in
presenting the first efficient algorithms for learning nontrivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to
predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means
of resetting the target machine to a fixed start state, we first present an efficient algorithm that
- Journal of Computer and System Sciences , 1997
"... The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for
rectangles have been actively studied recently because (i) they are a subproblem common to the derandomization ..."
Cited by 44 (3 self)
The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles
have been actively studied recently because (i) they are a subproblem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the
distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of "approximating" high-dimensional rectangles that arise in PAC learning and
pseudorandomness. Key words and phrases. Rectangles, machine learning, PAC learning, derandomization, pseudorandomness, multiple-instance learning, explicit constructions, Ramsey graphs, random
graphs, sample complexity, approximations of distributions. 2 1 Introduction A basic common theme of a large part of PAC learning and derandomization/computational pseudorandomness is to
"approximate" a stru... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1950421","timestamp":"2014-04-25T02:36:58Z","content_type":null,"content_length":"41740","record_id":"<urn:uuid:91f435a6-2eb4-4d74-8e18-e12df4e7409b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
The patio must slope away from the house, hopefully you have that? You also need to have a step up from the patio to the slab of at the very min. a...
wooden ramp
I agree with Piffin, a good quality exterior paint and a med. coarse sand.
Sliding Glass Door onto Deck - water leakage
That's right Piffin, I was actually at this home last week and suggested the Anderson retrofit when it comes time for replacement which is around 1...
Sizing baseboard/crown
The Greeks discovered the secret,they called it "The Golden Mean or ratio". It says that you should think in percentages for color, mass, and even ...
Types of different plywood core
Birch or Maple Veneer would be my choice.Your veneer should be sanded to 180 prior to applying the stain. Use a washcoat, Mix one part sanding sea...
How to prevent screw heads from stripping?
4" whoa.. that seems a little long for decking screws even with 1 1/2 decking.You must be using ipe (iron wood) if you're finding that much of a pr...
need help useing t bevel gauge
To measure angles.
Stair Winding
Layout the rectangular landing, then the newel or balustrade return (where in your case the 4" will be)... take a trammel (a long straight 1x4 will ...
Stair Winding
Sorry that'd be a landing & 2 ply boxes wouldn't it... by the way I'm pretty sure it's 6 inches and 12 at the line of travel (section 1003.3.3.8.2)...
Stair Winding
You can either call a carpenter, or use 3 ply boxes each smaller than the first. Layout the landing on the floor & mark off your radii then make th...
Wall Frames
Oh you mean wainscotting, you will divide the total wall into equal dimension as if the window wasn't there.Spraying is best but you can roll & bac...
Sliding Glass Door onto Deck - water leakage
Probably around 70 per pan & 500 for the doors. We also use fortiflashing over the fins of our windows & doors here is an install link for you: htt...
Sliding Glass Door onto Deck - water leakage
You need to have a pro remove the door and install a "door pan" what that does is divert the water back outside where it should be. A tin smith wil...
window seats
For an inset you can use 2x4's on flat against the bay, then place 3/4 ply (oak-maple-birch your choice) across the top.If you are a beginner use a...
window seats
Is it a bay that goes out (outset) or just a window that you are placing a seat under so then it would be built in (inset).. then we can tackle a d...
window seats
Inset or outset... lift, or drawer... paint or stain... is it a large seat or small?
Wall Frames
By wall frames do you mean shelving of some sort?
coping an odd angle
You need to bisect the angle, miter the trim & cope that... a 90 is mitered @ 45 & coped, if you have a typical 45 bay then 22.5 miter & cope.
doors and carpeting
What type of carpet & what type of doors? Typically you will have around a 1/2 inch above the carpet & you'll score the surface along the line befo...
foundation for a hot tub?
You didn't give any details but all you do is turn down the edge of the slab to create the footings. We just use a 5-6" floating slab with 6x6 10ga... | {"url":"http://www.bobvila.com/user/altereagle?page=17","timestamp":"2014-04-19T22:53:57Z","content_type":null,"content_length":"48305","record_id":"<urn:uuid:2a9a5ec4-9da7-4a97-8179-58ad763d5d9c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 103 > Dasu > Notes > PDF Lecture 8 | StudyBlue
Simply amazing. The flashcards are smooth, there are many different types of studying tools, and there is a great search engine. I praise you on the awesomeness. - Dennis
I have been getting MUCH better grades on all my tests for school. Flash cards, notes, and quizzes are great on here. Thanks! - Kathy
I was destroying whole rain forests with my flashcard production, but YOU, StudyBlue, have saved the ozone layer. The earth thanks you. - Lindsey
This is the greatest app on my phone!! Thanks so much for making it easier to study. This has helped me a lot! - Tyson | {"url":"http://www.studyblue.com/notes/note/n/pdf-lecture-8/file/541461","timestamp":"2014-04-21T12:13:33Z","content_type":null,"content_length":"34029","record_id":"<urn:uuid:4b0acadf-54c0-4bed-9cb2-c1fbd5ec2598>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |