A 0.495 M solution of nitrous acid, HNO2, has a pH of 1.83.
a. Find the [H+] and the percent ionization of nitrous acid in this solution.
`HNO_2 ⇌ H^+ + NO_2^-`
Since `HNO_2` is not a strong acid, its ionization is not complete, so the final mixture contains `H^+`, `NO_2^-`, and un-ionized `HNO_2`.
We know that;
`pH = -log[H^+]`
`1.83 = -log[H^+]`
`[H^+] = 10^(-1.83)`
`[H^+] = 0.0148M`
So the concentration of `H^+` is 0.0148M.
We had 0.495M `HNO_2` solution. But only 0.0148M has ionized.
% of ionization `= (0.0148/0.495)xx100% = 2.99%`
So the percentage ionization of `HNO_2` is 2.99%.
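A quick numerical check of this arithmetic (a minimal Python sketch; the figures come straight from the problem statement):

```python
h_plus = 10 ** -1.83            # [H+] in mol/L, from pH = -log[H+]
percent = h_plus / 0.495 * 100  # percent ionization of the 0.495 M solution
print(round(h_plus, 4), "M,", round(percent, 2), "%")  # ~0.0148 M, ~2.99 %
```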
Proof help
For all integers a and b, if a^2 is congruent to b^2 (mod 20), then a is congruent to b (mod 5). Prove true or false.
Re: Proof help
If $a^{2} \equiv b^{2} \pmod{20}$, then there is an integer $k$ for which...
$(a+b)\ (a-b)= 20\ k$ (1)
From (1) we derive that $5\mid(a+b)$ or $5\mid(a-b)$. If $5\mid(a-b)$, then $a \equiv b \pmod{5}$. If $5\mid(a+b)$, then from (1) it must be...
$(a-b)\ \frac{a+b}{5}= 4k \implies a-b= \frac{20k}{a+b}$ (2)
... so that $5|(a-b)$...
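A brute-force check is a useful sanity test for a claim like this, since both the hypothesis and the conclusion depend only on $a$ and $b$ modulo 20 (a minimal Python sketch, not part of the original post):

```python
# Look for pairs where a^2 = b^2 (mod 20) holds but a = b (mod 5) fails.
violations = [(a, b) for a in range(20) for b in range(20)
              if (a * a - b * b) % 20 == 0 and (a - b) % 5 != 0]
print(violations[:4])
```

For instance, $a = 1$, $b = 9$ gives $81 \equiv 1 \pmod{20}$ while $9 \not\equiv 1 \pmod{5}$, so the final step deriving $5|(a-b)$ from $5|(a+b)$ deserves scrutiny.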
Kind regards
College Park
Find a College Park Precalculus Tutor
...Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cover my advisor’s graduate level classes from time to time. And, since August 2012 I have tutored
math (from prealgebra to calculus II), chemistry and physics for mid- and high-school students here in th...
14 Subjects: including precalculus, chemistry, calculus, physics
...So I work with students to help them get through their classwork and learn the material, while at the same time preparing them to ace exams, earn A.P. credit, and get ready for the rigors of a
college education - all with time left over to still enjoy being a kid. If you're interested in working...
5 Subjects: including precalculus, calculus, physics, SAT math
...If I find the student is not responding, or has difficulty staying on task, I stop and talk to them about what interests them most. This usually gets them focused and we can continue. Having
taught many years, I have many work sheets and other extra outside material to share.
21 Subjects: including precalculus, calculus, world history, statistics
...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus,
Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry, even though they a...
11 Subjects: including precalculus, chemistry, calculus, French
...I have some great experience with teaching math. I've interned in two middle schools in Montgomery County as math assistant for one year, and during my sophomore year I went back to Montrose
Christian School and worked as part-time math substitute teacher. I was hired as part-time math teacher the following year teaching algebra 2.
13 Subjects: including precalculus, calculus, geometry, Chinese
[plt-scheme] Project Euler
From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Sat May 5 19:03:15 EDT 2007
On May 5, 2007, at 5:45 PM, Richard Cleis wrote:
>> [... snip]
>>> After all, programs are mathematical constructs (which happen to
>>> execute) built by people. The building of programs requires some
>>> mathematical understanding even if the writer does not realize
>>> it. Perhaps, it is time to think of mathematics as programming.
>>> Now, that would be a cool paradigm shift!
>> We have worked on this shift for 12 years. See "rebels with a
>> cause" for example.
>> My first NSF proposal on TS!/PLT was "replacing mathematics with
>> programming." You can imagine where that got me then
> What is "Rebels with a Cause?" My Google searches are hammered
> with a Hacktivism book.
Try "matthias felleisen rebel with a cause". It's an article in
Thomson's quarterly magazine on what I tried to do.
> How literal is the phrase "replacing mathematics with programming?"
Nearly literal. The goal would be to bring across the idea of a
FUNCTION, the central concept of high school mathematics, without
violating the central tenet of mathematics, and yet increasing its
value to students.
> Regarding students who are afraid of mathematics... What is the
> reason that they study computer science?
I wasn't speaking of students who have chosen to study programming
but of students who are (forced to) taking mathematics courses. I
think FUNCTIONAL programming could help these kids understand so much
more about mathematics and especially functions and word problems.
I have many times considered rewriting a mathematical text book from
algebra or even geometry in FP/Scheme (with appropriate teachpacks).
I don't have the time.
-- Matthias
four color proof
Has the four color theorem been proved without the help of a computer? Where can I find the paper?
There's a huge amount to say about 4CC and its various proofs, which probably people will say below in the answers. But the short answer to your first question is "no".
Kevin Buzzard Oct 5 '10 at 9:41
(i) Not as far as I am aware. (ii) What paper?
Robin Chapman Oct 5 '10 at 9:42
No, but the proof has been formalized into computer-checkable form, using the proof-assistant Coq. As far as I know, the proof still relies on enumeration of cases and is therefore quite tedious.
For a paper, see Gonthier, Georges (2008), "Formal Proof--The Four-Color Theorem", Notices of the American Mathematical Society 55 (11): 1382–1393
I've heard from my colleague Wilbert van der Kallen that the formal proof of the four colour theorem was written over a time interval that was so long that various parts of the proof have been made to compile over various versions of Coq. The problem is that these versions of Coq are not backwards compatible! So, formally speaking, there is no single computer-verified proof of the four colour theorem. What there is is a collection of computer-checked pieces that, together, assemble into a proof.
André Henriques Oct 5 '10 at 12:46
Why are there various versions of Coq? Presumably because the early versions contain bugs, which then invalidate the certificates for the earlier parts of the proof ;-)
Kevin Buzzard Oct 5 '10 at 14:59
And by pessimistic meta-induction (i.e. the fact that there may well be future versions of Coq) doesn't this mean that we shouldn't believe the parts verified with the current version? ;-)
Kevin Buzzard Oct 5 '10 at 15:01
Different versions of Coq differ in the libraries of theories and tactics and other, often superficial, features. The core logic remains the same.
supercooldave Oct 5 '10 at 15:15
@Kevin: By design, one need worry only if changes were made to the (small, stable) type-checking kernel. The other machinery (which may be incompatible in various versions) can be seen as
an evolving set of tools for making the construction of kernel-checkable proofs easier.
Grant Olney Passmore Oct 5 '10 at 15:41
Just as background, definitely not an answer to your question: You are probably aware of the paper, "A new proof of the four colour theorem," by N. Robertson, D. P. Sanders, P. D. Seymour and R. Thomas, in Electron. Res. Announc. Amer. Math. Soc. 2 (1996), 17-25 (electronic). It is still a computer proof, but simpler than Appel and Haken's: "Our unavoidable set has size 633 as opposed to the 1476 member set of Appel and Haken, and our discharging method uses only 32 discharging rules, instead of the 300+ of Appel and Haken."
from review http://www.ams.org/mathscinet-getitem?mr=1403921 of a survey paper by Paul Seymour, we find...

In 1993, Seymour, Neil Robertson, Daniel Sanders, and Robin Thomas, after trying to read the Appel-Haken proof, decided to supply their own proof, in which the data are available in electronic form, which can be checked by hand or computer. They confirmed that the four-color theorem is true and provable by the approach used by Appel and Haken.
Replace the .proxy.lib.ohio-state.edu with the EZProxy syntax of your institution, as appropriate. :)
J. M. Oct 5 '10 at 15:13
See my parallel posting linking to that paper. This is from the [RSST] Abstract: "Here we announce another proof, still using a computer, but simpler than Appel and Haken's in several respects."
Joseph O'Rourke Oct 5 '10 at 15:31
I edited out the proxy portion of the URL.
David Speyer Oct 6 '10 at 2:04
La Porte Math Tutor
Find a La Porte Math Tutor
I was born in Taiwan. I graduated from No.1 university in Taiwan, majored in Economics and came to the USA to pursue an MBA at Lamar University in 1988. I am a loving and patient Christian mom of
three children.
12 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...This was a very valuable experience because it allowed me an inside look into the state testing guidelines, curriculum, and assessments. I look forward for the opportunity to spend time with
your child and teach them one on one. It is so powerful to sit one on one with a child and develop relat...
22 Subjects: including probability, reading, writing, ESL/ESOL
...I evaluate a student's strengths and weaknesses and then put an individualized teaching plan into action. Because of my age and life experiences, parents tend to feel more comfortable with me
teaching their student or in their home. Additionally, my belief is that when a student sees how much ef...
13 Subjects: including algebra 1, SAT math, English, prealgebra
...I worked with a female college student (with ADHD) with her Pathophysiology course, she now has an A and has, as a result, been admitted to nursing school. When we began working together her
HESI score was below 60% and after working together for two months, she took the test again, and achieved...
22 Subjects: including calculus, Java, government & politics, discrete math
I can provide tutoring for various subjects up to the high school/college level. I have a Bachelor of Science degree in Economics from Texas A&M University and an MBA with a concentration in
Finance from the University of Houston. My strongest subjects are those related to math, business, and economics.I have an MBA with a concentration in Finance and Bachelor's Degree in Economics.
20 Subjects: including algebra 1, algebra 2, elementary (k-6th), vocabulary
HOT Crocs Deal + Free Shipping!
It looks like two of the codes might not be working anymore! Let’s all cross our fingers that the orders ship!
Ok I have been trying to lay off the Crocs posts because I seem to have a new deal every day but this one was too good to not pass along! There are (2) 20% off codes that are stacking right now to
make for a sweet deal!
Use code: MYCOUP20 and then DEALMOM20 and you get 20% for each plus free shipping.
You can also use code: C6VMRVKY3ISA to get an extra 10% off
The outlet styles are already marked down 30%. So here is what I did:
Girls Alice Crocs $14.99
Girls Minnie Mouse Mary Jane’s $9.99
(Both are in the outlet so 30% off those prices)
Then I used DEALMOM20 and MYCOUP20 and finally C6VMRVKY3ISA
My grand total after all three codes was $10.06 for both pairs and shipping is free!!
Let us know if you spot any great deals! Make sure you check the final price not just the subtotal!
PS: The DealMom20 code expires on Thursday at 1pm!!
Thanks Tara.
How did you get the total to be $4.47? When I checked out a pair @ $9.99 my final total was $6.39.
I added the Disney Minnie Mouse Mary Janes which are $9.99. Then I used code: DEALMOM20 and it took off $1.20. Then I used MYCOUP20 and it took off $1.12. The subtotal was $6.99 and then it says
$4.47 in the cart and that is what I paid! I see the difference- the ones I selected were 30% off in the outlet so I think that is the difference. Make sure you look at the final total not just
the subtotal.
I ordered my son the Packers crocs.
They probably won't fit him for about 2 yrs but it was the only size they had left and for that deal I couldn't pass it up.
Stevie I ordered a size up too since they were a little small last time and I had to return them. What was the price on the Sports Caymans after coupons?
I got a pair of Mammoth Crocs (the one with the wooly inside) for my son and one for me for $23.99 shipped – that’s an amazing deal! I’ve been eyeing these for awhile. Thank you for giving us the
heads-up on this great deal!
I can not seem to get the last code to work.. am i entering it wrong?
C6VMRVKY31SA ?? TYIA
Sarah it’s and I not a 1…. I go the Yankees ones for my little guy!! So excited we are die hard Yankees fans!!! Thank you so so much!
Thanks Rachel
Awesome! I just ordered 2 pairs of flip flops for my parents. The total was $8.05! NICE!
I got 5 pairs of shoes for 16.00 delivered…You CAN’T beat that! TY MFA!!!!
I love Crocs and I wish that I could justify buying yet another pair at these amazing prices, but I am just not sure that I can! Thanks for the post and keep them coming!
For the pair of Sports Caymans I paid $4.47
THANKS! Just purchased two pairs for a sweet Christmas deal. You’re the best.
I purchased a Cubs pair for my son a few weeks back, then a Disney pair, and yesterday I ordered the Gretel one for myself, now with this great deal I ordered another pair for winter and 2 pairs
of the Off Roads for my dad. Thank you for the posts. Keep them coming!!
TY so much MFA, I love you.
I just ordered 3 pair of flip flops and my total was $12.08 shipped!! What a deal! My daughter has a pair of the Ocean Minded flip flops and loves them. So I ordered her another pair, myself a
pair and my niece!! Thanks for all the coupons codes and deals! Love your website.
Once again thanks, I ordered a pair for me and for MIL (great gift)
shoes for 4.02 WHAT A DEAL!!!! We have never worn crocs but I think I have taken advantage of everyone of your croc deals this season! Just got my croc boats….comfy!!! And i can’t wait to get all
the rest!!!
This worked out awesome for us! My husband is picky about flip flops and he found a pair he likes and they were just 4.02! I ordered a pair for myself and a pair of Mary Janes for my daughter and
spent less than $15! I am psyched.
I purchased 2 pais of the girls Alice style and 2 pairs of the Disney Hannah Montana Mary Janes and my total was only $26.58!!!
The regular price of all these would have been $65.96…so I saved almost $40!! These will make perfect gifts and at $6.65 a pair, it’s definitely a deal!
Thank you for the info. All my kids wear adult mens shoes so I was not sure what kind of deals I would find. I was able to purchase 4 pairs of adult shoes for $22.15. I bought the Mens lower
shoes, the Hamak sandals, the Mens Mokulua and the Womens Ohana for me. All the codes worked great. I had planned on purchasing more items but shoes kept disappearing from my cart. So hurry when
you place your order.
I went back and placed another order and it will not let you use all three coupon codes anymore. I tried over and over. It will take them all but it will only give you the 20% off no matter what.
You still get free shipping. I still bought 2 pairs of womens shoes for $12.58 so not bad but not as good as I had hoped. Still happy about my purchase though.
The Dealmom20 is not working. It says it is invalid. It takes code C6VMRVKY3ISA but does not give you the extra 10% off.
It is only taking 20% off. Still good deal for outlet items.
The DEAlMOM 20 will not work. MYCOUP20 and the other 10% worked
I just ordered a pair of Women’s alice mary janes. I have a black pair of them and I love them! I got them for $6.39 with two day express shipping for free!
Determining acceleration of an object in oscillatory motion
Okay... but since at t=0 the object has no velocity, doesn't this mean that it is at it's maximum displacement and therefore there is no phase change?
Yes, it certainly does.
But not all future problems will be as simple as this one. So if you want to show that the phase offset is zero, you should be able to prove it mathematically.
Start with
v = -ωAsin(ωt + Φ),
and plug in one of your initial conditions (t = 0, v = 0):
0 = -ωAsin(0 + Φ),
Divide both sides of the equation by -ωA.
Take the arcsin of both sides of the equation.
What does that tell you about Φ?
Once you know Φ, you can do a similar operation using the position equation, along with your other initial condition, to mathematically solve for A.
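For completeness, a worked sketch of those last steps, assuming the standard position form x = Acos(ωt + Φ) that goes with the velocity equation above (the original problem statement is not quoted here, so treat the symbols as assumptions):

0 = -ωAsin(Φ), so sin(Φ) = 0, which gives Φ = 0 or Φ = π.

Then x(0) = Acos(Φ) = ±A. Taking the t = 0 displacement to be +A selects Φ = 0, consistent with the "no phase change" reading above; a start at -A would give Φ = π instead.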
AMS: Ultra cold physics
Dr Lincoln Carr, a genial fedora-and-scarf-wearing physics professor here at Mines, visited the applied math department to deliver a talk on his research into the fascinating and complex world of
ultra-cold physics. While he is an engaging speaker, and the things he discussed were cutting-edge, the complexity and esoteric nature of the information involved ensured that a layman was all but
lost for the majority of the talk. Herein the reporter attempts to make approachable a sort of wizardry which only a mathematician or a theoretical physicist can ever truly grasp. A mathematician
looks at one of the nonlinear differential equations presented during the talk and sees its pieces laid out before her in real-world terms, just as a musician sees sheet music and hears a tune; but a
layman looks at the same equations and might as well be trying to read cuneiform.
The broad subject of Carr's talk was the nature of matter itself. As most people learned in high school science class, the universe and everything in it, other than a true vacuum, is made up of
ever-smaller particles: molecules, then atoms, then protons and neutrons, and finally bosons, quarks, neutrinos, and the like. Because matter and energy are the same - the fundamental message of
Einstein's famous equation, E = mc² - the smallest particles are in fact small "wave packets." Light was the first substance to show off this dual nature: light is both an electromagnetic wave and a
particle called a photon. Technological advances have since allowed for the phenomenon to be studied in other particles as well. As the temperature of a particle, T, approaches absolute zero, the
particle's wave nature becomes measurable, then begins to overlap and merge with the wave nature of the particles around it. In other words, as T goes to zero, the particles slow their vibration,
appearing as wave packets, then as overlapping "matter waves" (called Bose-Einstein condensates for the scientists who first proposed their existence), and finally merging into a single giant "matter
wave", which is known as a pure Bose condensate. Carr and his colleagues work with these condensates.
The existence of Bose-Einstein condensates was first theorised in 1925, but it was not until 1995 that this existence was finally realised, as the technology to induce the ultra-cold temperatures
required to create such a condensate did not exist until then. The field has since taken off, even producing several Nobel Prize winners, thanks to further technological advances and a great deal of
multidisciplinary collaboration. This area of research has many applications to the fast-growing field of quantum computing, in which special computers kept at these same ultra-cold temperatures are
able to process exponentially larger quantities of data than conventional supercomputers.
Since, at its most fundamental level, all matter consists of waves, and a wave is describable as a mathematical equation, all particles can be described by probabilistic wave functions. (The
probability aspect comes in thanks to the theory of quantum superposition, which states that the properties of a particle simultaneously exist in all theoretically possible states until a measurement
is made on the particle.) For this talk, Carr highlighted the Nonlinear Dirac Equation, or NLDE. The NLDE has many possible solutions, with varied and surprising physical expressions, each of which
represents a different variety of particle, with exotic names like "semions" and "skyrmions". A soliton, for instance, corresponds to a kinked curve such as that seen in the bend of a DNA strand or a
curl of ribbon. This "zoology" of the NLDE is not a menagerie of the imagination. The strange shapes that the NLDE can create can be modelled, and when the models are compared to real-world data,
they match almost without flaw.
At temperatures in the sub-micro Kelvins - orders of magnitude colder than the vacuum of space - the speed of light falls to 0.272 cm/s. To put this in perspective, the speed of light in a vacuum at
normal temperatures is a little over 29 billion cm/s. At such glacial speeds, Bose-Einstein condensates can be (and have been) photographed, including in 3D; these photographs are the proof of the theory that Carr and his colleagues deal with. In order to photograph, for instance, a soliton, a scientist must first trap the particle using electromagnetic traps or a complex optical lattice made
of intersecting phase-locked lasers. Where the laser beams interfere with each other, micro-traps are formed, each catching a single particle. Here, instead of using math to give insights into
nature, Carr did the reverse: he used nature to aid in his math.
Graphene is a remarkable substance consisting of molecule-thick sheets of pure carbon. These sheets can be folded into any number of shapes, including the unique folded structures described by
solutions of equations such as the NLDE. Carr therefore used graphene as the inspiration for his optical lattice, helping him to trap particles better. Ultimately, when the theoretical conclusions of
"math world" ("a spectacular land where our imaginations can roam", as Carr put it) are translated into reality - something actually possible, thanks to modern technology - they exhibit very specific
real-world constraints which do not reveal themselves in math world. For instance, at hyper-cold temperatures a gas (such as used in these experiments) "wants" to be a solid, making the system
unsustainable... at least theoretically. Carr found, however, that many solutions of the NLDE, which are unstable in infinite time - that is, in math world - were perfectly sustainable in the finite
timescales of reality. In this way, nature can shine light on theory, just as theory can illuminate nature's inner workings.
Describing Motion
Program 12
Lesson 2.4
Text References
Spielberg & Anderson 60-71
Booth & Bloom 83-85, 97-99
Coming Up
In this lesson we will learn about Aristotle's views on motion as incorporated into the Scholastic Philosophy, and we will learn how we describe motion in moderns terms. We will learn the definitions
of speed, velocity, average velocity, instantaneous velocity, and acceleration, and the relationships between them. We will see how motion is described in terms of distance and time, and we will
begin to learn about the use of graphs and geometric figures in studying physical phenomena such as motion.
Questions
1. Aristotle described four different types of motion. What were they?
2. What is the distinction between "violent" motion and "natural" motion?
3. According to Aristotle, what are the properties of motion?
4. What did Aristotle have to say in regards to falling objects?
5. According to Aristotle what would happen if a ball was dropped from the mast of a moving ship?
6. What arguments did Aristotle give against a moving Earth?
7. Define the following terms: speed, velocity, average velocity, instantaneous velocity, acceleration.
8. How might you know whether or not an object was being uniformly accelerated?
9. What does a constant of proportion do and how would you find one on a linear graph?
10. Explain the difference between velocity and acceleration.
11. How does a graph help us to understand motion?
12. What is the relationship between initial velocity, final velocity, and average velocity?
13. On a graph of velocity vs. time what is the meaning of slope and area.
Objectives
1. Be prepared to write a brief and concise response to any of the questions at the beginning of this lesson.
2. Write a short essay which demonstrates understanding of the basic premises of Scholastic Physics.
3. Demonstrate comprehension of Aristotle's view on motion by writing a short essay on the subject.
4. Demonstrate understanding of our modern description of motion by defining and distinguishing between the following terms: speed, velocity, average velocity, instantaneous velocity, acceleration
5. Recognize a linear graph and its characteristics and explain them in a short essay.
6. Describe the constant of proportion and cite several examples of linear relationships.
7. Describe the physical meaning of the slope and area of a graph of velocity and time.
1. Introduction
In this lesson we will contrast the ancient and modern views on motion in preparation for understanding Galileo's experiments. He did those experiments because he recognized that the Scholastic views
on motion, inherited as they were directly from Aristotle, were not correct. Our modern description of motion is entirely due to Galileo, who defined the terms and derived measurable relationships from
those definitions.
The first section, "Aristotle & Scholastic Physics" is a summary of the views of motion which were current in Galileo's time.
The second section is a modern description forged out of Galileo's definitions of velocity and acceleration in terms of distance and time, and Descartes' marriage of algebra and geometry into a
numerical graphic tool which was indispensable in Newton's formulation of gravity, and to all subsequent physical science.
It is through the understanding of motion in general that we come to specifically understand the motion of planets, interplanetary rockets and baseballs.
2. Aristotle & Scholastic Physics
2.1. Aristotle's views on motion are common sense.
Piaget conducted experiments which indicate that this concept of motion is that of the child. They represent the first approximation of a description of motion based largely on intuition.
Aristotle formalized the study and argued logically, but he lacked a quantified definition of the terms used to describe motion.
We have already noted that of all of Aristotle's wonderfully complex and well thought out world views, his theories of motion were the weakest and contained the most holes.
It was as if Aristotle painted himself into a logical corner, then tiptoed out through the wet paint leaving footprints that he hoped couldn't be seen.
As we learn about Aristotle's concepts of motion, try to relate them to your own experiences with motion. Do you agree that they seem reasonable according to your own perceptions of motion?
Consider that in Aristotle's time the fastest a person could travel was on a galloping horse or in a chariot pulled behind one. It's very hard to objectively observe the effects of motion while
bouncing around in a chariot or on horseback.
In Aristotle's time the smoothest motion was on a ship at sea under sail. Here the effects of wind and the relative slow rates of speed make it difficult to study motion.
What Aristotle and his followers needed was a faster car with good suspension and a smooth concrete ribbon to go gliding a mile a minute on wheels made of air and rubber.
Or else someone had come up with a way to study motion objectively and repeatedly under controlled circumstances.
Galileo did this, but only after he had defined the parameters of motion and their relationship in simple terms.
As we will see in future lessons, once we know what the parameters are, then, and only then, can we figure out a way to measure them, then we can begin to do experiments like Galileo did.
2.2. Scholastic philosophy culminated in Summa Theologica, written by St. Thomas Aquinas
You may wish to review lesson 9 to refresh your memory about Scholastic philosophy and its views on motion.
2.3. Synthesis of Aristotle's views, Platonic philosophy, Ptolemaic system and Christian doctrine
The Scholastic philosophy arose out of the need to study the ancient documents. The main ideas underlying the Scholastics was a synthesis of Aristotle's cosmology, his views on nature and motion,
with the mathematical philosophy of Plato including it's Pythagorean mysticism and circular perfection. Added to that was the geocentrism of Ptolemy where the epicycles represented actual motions and
not just mathematical convenience. All of this was combined with Christian doctrine and dogma to produce a strong system of knowledge and learning.
2.4. The paradigm for study and acquiring new knowledge
The Scholastic method became the standard paradigm for studying and for for acquiring new knowledge. It is difficult to express precisely how such a paradigm works, but one of the pressing questions
had to do with the size of angels.
The question of how many angels could fit on the head of a pin was a topic of debate and serious scholarly research, as was the size, shape, and precise location of hell as described as "The Inferno"
in Dante's Divine Comedy.
2.5. Scholastic logic
A typical argument is illustrated by this quote from a prominent astronomer of Florence who reacted with authoritative disdain to Galileo's report of his own discovery of the four largest moons of
"There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone
undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc. we gather that the number of planets is necessarily seven. Besides . . . we
have the division of the week into seven days named for the seven planets: now if we increase the number of planets, this whole system falls to the ground . . . Moreover, the satellites are
invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist."
Francesco Sizi, Florentine Astronomer responding to Galileo's discovery of Jupiter's moons.
From this critique we get a feeling for the type of logic, and the type of questions which might be considered appropriate. Notice specifically what kind of things are taken as given.
For example, the significance of the number seven which corresponds to the number of metals, planets, and days in the week. These things are not all in the same category. Certainly the seven day week
is a construction of the human mind, having been passed down from the Babylonian calendar.
Also note how this strict adherence to the numerical significance precludes discovering more planets or more metals, the paradigm insisting that the number is fixed at seven.
2.6. Universe is geocentric and moves according to the Divine Plan
In the Scholastic world view, the universe is geocentric and circular, but its motion is part of God's plan for salvation. Incorporating celestial motion into the divine plan made it even more
difficult to view the motions of the heavens in the same terms as earthly motion.
2.7. The motions of the heavens are as described by Ptolemy
The deferents and epicycles of Ptolemy represented real motions and not just mathematical convenience. You may recall that Ptolemy had dismissed any links between the planetary movements, whereas
Aristotle insisted on unity and connections.
2.8. Mathematics is of little use in describing change
Today our use of mathematics is focused on describing and predicting changes. The calculus, an advanced version of analytic geometry, was invented independently by Newton in England and Liebnitz in
Germany specifically to describe how one variable changes in relation to another.
Aristotle argued that change was too ethereal to be described by mathematics. Mathematics was useful in arithmetic for adding and subtracting, and in geometry for calculating angles, perimeters, and
areas, but certainly could not describe or predict change.
2.8.1. useful for calculations but not for descriptions
2.9. Matter dominates universe and causes change
In Aristotle's cosmology, the matter dominates the universe and therefore it is the properties of matter which drive all types of change and determine the outcome of those changes.
2.9.1. prime substances and qualities = elements
Aristotle's scheme for matter allowed for the possibility of one substance changing into another by addition or removal of the elemental qualities.
2.9.1.1. earth, air, fire, water
2.9.1.2. hot, cold, wet, dry
2.9.1.3. elemental qualities responsible for motion and chemical change
In this scheme all motion and chemical change was due to the desire for the four elements to purify themselves and separate into the concentric layers.
2.9.1.4. relation to chemistry later
In later lessons we will learn how this view influenced the development of alchemy and chemistry. Aristotle's failure to distinguish between physical and chemical change may seem
shortsighted, but we will see that such changes are not always easily distinguishable.
2.9.2. Properties of matter are related to location in universe
In the concentrically tiered model of the universe which Aristotle so aptly described, the Heavens are composed of an imponderable substance, called quintessence. It is different from earthly
matter, but since it is imponderable we will never know exactly what it is like and anyway it is beyond our ability to understand so we shouldn't think about it. Because there is perfection in
the heavens, there can be no change, except for the heavenly motions, which he did not really view as change in the same way that changes take place on earth. The heavenly motions were part of
the heavenly perfection and so by definition were not changes, but a type of constancy (hence the necessity for uniform circular motion.)
In the sublunar realm, chaos and imperfection reign. The pure elements, earth, air, fire, and water, find themselves combined into the variety of substances in the physical world and they would
like to be separated. A good analogy would be a shaken bottle of oil and vinegar salad dressing which, if left undisturbed, will eventually settle into layers.
Aristotle explained that all change in the sublunar realm was driven by this desire of the mixed elements to separate and settle into their natural places.
2.9.2.1. Heavens are perfect quintessence
2.9.2.1.1. heavenly motion is pure and perfect
2.9.2.2. Earth is imperfect mixture of prime substances
2.9.2.2.1. sublunar motion is unnatural because of imperfection
2.9.2.2.2. changes occur to attain purity and perfection
2.9.2.2.3. prime substances are mixed, want to be unmixed
2.10. Properties of motion
Aristotle believed that motion itself had certain properties, which he claimed were intuitively obvious. As we will see, this is one good example of a case where so-called common sense leads us far astray.
These properties, incorrectly assumed as given, were used all the way into the seventeenth century to "prove" that the earth could not be moving around the sun or rotating daily on its axis.
2.10.1. rest is the natural state of matter
Obviously, Aristotle said, rest is the natural state of the universe because most things we see are not moving. To get them to move we have to actively move them and when we stop moving them, they
stop moving. For the heavenly motions, this belief necessitated the Prime Mover to turn the crank and keep the celestial sphere and its heavenly bodies turning.
Observations suggest to us that heavenly motion is circular, but motion here on earth is linear. After all, when a ball rolls down a hill, it rolls in a straight line, or at least it would if the
surface did not disrupt it. Similarly, an object in freefall falls downward in a straight line.
2.10.1.1. motion will cease when cause is removed
2.10.1.2. heavenly motion is circular, sublunar motion is linear
2.10.2. sustained motion is only by cause
Once the cause of motion is removed then motion stops. Try it. Push a pencil across a tabletop, then let go of it. It will very quickly come to rest. You are the cause of it's motion by acting upon
it and when you stop, so does it.
As for natural motion like freefall, or pebbles falling through water, Aristotle had claimed that this was due to the natural and preferred positions of the various prime substance in the heavenly
A rock fell through air and water because its composition is more earthly than the other elements, so its natural place is in the bottom layer. Sure enough, a rock will fall through fire, air, and
water, and come to rest on the bottom with the other earthly substances.
Similarly, fire rises through air and air rises through water because that is where it belongs.
Aristotle also thought that water flowing in streams was issuing from underground springs as the water sought its natural place in the ocean.
2.10.2.1. ultimate cause of all heavenly motion is Prime Mover
2.10.2.2. ultimate cause of unnatural sublunar motion is necessity for perfection and purity
2.10.3. arguments against terrestrial motion
Aristotle also gave several arguments against a moving earth based logically on these perceived properties. These arguments were used throughout the middle ages whenever the subject of a moving earth
was brought up. The logic is similar to that of astronomer Sizi, quoted above.
2.10.3.1. spinning Earth would drag air
Since earth would be spinning under the air the motion would drag the air causing tremendous wind, similar to the way the wind blows when you stick your hand out the car window. Of course,
Aristotle had no car, but the effect is noticeable even at low speeds, so at the tremendous speed required to turn the earth in 24 hours the wind would be unbearable.
2.10.3.2. nothing to keep it spinning
Since, according to Aristotle, action is required to sustain motion, there is no obvious cause which keeps the earth spinning. It would naturally stop spinning in the absence of such a force.
It's not clear why he could not have invented an imponderable mover of some sort to do the job, but he didn't.
2.10.3.3. objects thrown upwards or dropped would not fall vertically
An object thrown upwards would land at the same spot it was thrown, but if the earth moved out from under it then it would appear to land behind the spot. In this condition, you might jump into
the air and be hit from behind by a tree, a building, or another person.
This doesn't happen, so obviously the earth can't be moving.
2.10.3.4. loose objects would be thrown off the surface
Like water being flung from a wet shirt whirled overhead, objects on the earth would be thrown out into space if the earth was rotating.
This one seems rather natural, especially considering that Aristotle had no concept of gravity as a force which holds us to the earth.
2.11. Four kinds of motion
Aristotle described four distinct and separate kinds of motion, unrelated, having different causes, and following different principles. We will see the unification of these different types of motion
beginning with Galileo and culminating with Newton's gravity.
2.11.1. alteration: weathering, erosion, rusting
The type of motion that Aristotle called alteration referred to what we would call chemical change today. We might even have to stretch the imagination to call it motion at all, although we certainly
recognize it as a type of change.
2.11.2. natural local motion: vertical motion
Aristotle's description of vertical motion was based partly on his observations of the way in which different objects fell through water. He tried to generalize from water to air by correctly
assuming that air was a thinner version of water and would affect motion in a similar way. He was incorrect, however, in that he observed objects falling through water at their terminal velocity
because water is really a lot thicker than air. We might say he was generally confused about the relationship between falling through air and through water.
2.11.2.1. Aristotle's Observations
Aristotle actually observed the way in which objects of various sizes and weights settled through water. He correctly induced that weight does affect the motion, but he failed to appreciate
the short time required to reach terminal velocity, and the behavior while reaching that speed.
2.11.2.1.1. watched things settling through water
2.11.2.1.2. correctly induced there is some effect of weight under these conditions
2.11.2.1.3. failed to appreciate the short time to reach final velocity
2.11.2.2. Aristotle's Explanation
He decided that there could be no vacuum since the medium through which an object is falling controls its speed, and that water slows an object more than air.
Aristotle tried to explain his observations in terms of the desire of a material to be in its natural place. Deciding that earth is more waterlike than airlike, it seemed obvious that earth would
fall faster through air than water.
Since heavy objects contain more matter, they would be in more of a hurry to find their place and so would fall faster. He correctly observed that heavy objects fall faster in water but
incorrectly generalized to air. He stated that even in air heavier objects should fall faster, and at a rate proportional to their weights. In quantitative terms he was stating that if one
object was twice as heavy then it would fall twice as fast.
In the absence of a medium such as water or air there would be nothing to control the speed at all and it would therefore be infinite. Since the concept of infinite speed doesn't make sense then
there can be no vacuum. Thus the statement, popular well into the later centuries, "nature abhors a vacuum."
2.11.2.2.1. movement is up or down according to natural place
2.11.2.2.2. movement is slower in dense materials like water
2.11.2.2.3. light and heavy objects fall at different speeds controlled by medium
2.11.2.2.4. infinite speed in vacuum so there can be no vacuum ("nature abhors a vacuum")
2.11.3. violent motion: projectiles, things pushed or pulled
Violent refers to the action necessary to move things horizontally. Unlike natural motion which happens spontaneously, violent motion does not happen without action. The idea that something must
continually push objects clouded the explanation. Today we recognize that friction with the air or where surfaces are in contact, is the cause of the loss of motion, and it is friction that must
be overcome to keep an object moving.
Explaining the movement of projectiles, objects that were thrown, was the most difficult. This is one of those places where he painted himself into an intellectual corner. The more he considered,
the deeper into the corner he got. He never did really explain his way out of this one.
To explain projectiles he imagined a process that he called antiperistasis. Any object traveling through the air must push air out of the way. Antiperistasis is the force of the air rushing back in to fill the vacuum left as the object passes through. Since nature abhors a vacuum the air will forcibly strike the projectile and continually propel it through the air.
As you might suspect, this was the weakest of all of Aristotle's theories. It did not explain, for example, why the projectile followed a curved path. In fact Aristotle stated that it did not,
that it moved in a straight line until it stopped, then fell straight down. It is obvious that this is not what happens if you simply throw a ball from one hand to the other. It also does not
explain why the projectile ever loses speed at all.
We have already seen in lesson 9, that there was significant criticism of Aristotle's theories of motion during the middle ages and into the reformation. As we will see, Galileo recognized these
weaknesses as a good place to attack Aristotle's authority.
2.11.3.1. notion that something must continually push objects clouded explanation
2.11.3.2. antiperistasis: air rushes in behind to push
2.11.3.3. weakest of all motion theories
2.11.4. celestial motion
We have already seen that celestial motion is a special kind of motion since the heavenly objects are imponderable and massless quintessence which move somehow unimpeded through ethereal crystal
spheres driven by an equally imponderable Prime Mover.
2.11.4.1. heavenly objects are massless quintessence
2.11.4.2. move unimpeded through crystal spheres
2.11.4.3. driven by imponderable Prime Mover
3.1. Is physical science Pythagorean?
3.1.1. how is it similar?
3.1.2. how is it different?
4. Understanding Motion: A modern view
Now we can look at our modern view of motion. We will use a form of symbolic language which is commonly referred to as mathematics, although the word itself is enough to cause fear in many people.
Do not be alarmed! The function of these symbols is to represent relationships between the parameters of motion.
At this point you might want to make a note to look up the word "parameter" in the dictionary. After you have done that, continue reading.
Here we would like you to think of a parameter as a particular state of a system. For example, a ball will roll downhill on a flat board, but the angle or slope of the board will affect the motion.
So we would say that the slope of a hill is a parameter which we might want to measure, but also to control and vary systematically.
For example, to discover quantitative relationships between parameters we could measure the distance traveled in a certain time and compare it to the slope. Do you see what a parameter is now?
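To make this concrete, here is a minimal sketch in Python of varying one parameter (the slope angle) while computing another (the time to travel a fixed distance). It assumes the idealized frictionless result a = g sin θ, which this lesson has not yet derived - treat it as a placeholder model rather than a measurement:

```python
import math

g = 9.8  # m/s^2, near-Earth gravitational acceleration (assumed constant)

def time_to_roll(distance, angle_deg):
    """Time to travel `distance` from rest down a frictionless incline,
    using the idealized model d = (1/2) * (g * sin(theta)) * t**2."""
    a = g * math.sin(math.radians(angle_deg))
    return math.sqrt(2 * distance / a)

# Vary the slope (the parameter) and watch the travel time respond.
for angle in (5, 10, 20, 40):
    print(angle, "degrees:", round(time_to_roll(2.0, angle), 2), "seconds")
```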
Make a short list of some parameters which might affect a daily routine of some kind, like driving to work or to the store.
In this lesson we will establish a relationship between the parameters of motion. We will start with Galileo's definitions and end with a more modern graphical description as we learn a new and
amazing connection between numbers and shapes, undreamed of even by Pythagoras himself, and the key to understanding the motion of the planets . . .
Be sure to study the examples in the text, in the chapter which covers motion. If you made the table of contents as we suggested earlier, it should be easy to find it now. If not you might want to do
that now.
Do not be put off by the equations and the word problems in the text. They are expressing relationships, and the numerical problems are a way to help conceptualize them. Look at the examples to see
how the problems are solved. Try one or two of them yourself. If you can solve the problems it is good. If you cannot solve them it is OK. Do not get discouraged and give up if the equations do not
register with you.
It is the relationships and not the equations that are important. To truly understand motion, you must understand the relationships. The equations should serve only to organize the relationships.
In the lab exercises you will explore the nature of linear and non linear relationships in a quantitative way.
As you begin to discover these and understand them you will see that there are many different ways to organize our intuitions, and the mathematical notation is one way.
Regardless of how well you relate to the mathematical symbols, you should try restating the equations in words and try to conceptualize the meaning. This will improve your mathematical sophistication
and also help you in upcoming lessons.
4.1. Introduction
Our modern understanding of motion is very different from that of Aristotle and his Scholastic followers. For many of us Aristotle's description seems to follow common sense. It does. Unfortunately,
common sense doesn't always give us the right answers, and with motion we have to look at things a little differently.
This is what is necessary to comprehend the physical world. It takes a different way of thinking, the use of precise definitions, establishing mathematical relationships, and using models as a way of
problem solving.
In this section we will define the basic parameters of motion in terms of distance and time, and establish relationships between these parameters which will allow us to think clearly about motion.
4.1.1. describing reality is goal of physical science
Like the ancients our goal in physical science is to describe reality. We recognize that there may be a physical reality distinct from spiritual reality. We have developed a preference for the
quantitative over the qualitative, especially when it comes to deciding on the truth of our conjectures. We rely on numbers in the form of data which we collect from measurements. We look at
relationships and types of relationships between numbers and shapes, and we translate between numbers and shapes as models for testing our understanding against reality.
4.1.1.1. numbers
4.1.1.2. relationships
4.1.1.3. shapes
4.1.2. mathematics allows us to combine all three to simplify understanding
The techniques of mathematics which have evolved since Aristotle's time have grown immensely in sophistication. We now know how to see relationships between numbers in terms of graphical shapes,
a remarkable discovery made by Rene Descartes in the middle of the seventeenth century.
4.1.3. motion is natural state of universe
Today we believe that motion is the natural state of the universe rather than an unnatural state whose existence must be explained. In fact, everything is constantly in motion in our modern view.
4.1.3.1. only an illusion that things are still
The atoms that comprise matter of all kinds are constantly moving with thermal energy. In chemical reactions atoms are rearranged. The constant motion of the earth, the sun, the moon, the stars,
the oceans, the wind, clouds, etc. are all part of a larger process involving the concept of energy.
We do not directly observe the persistent motion of atoms because they are so small. In a similar way, we do not observe the waves on the ocean from a jet airliner or from near earth orbit in the
space shuttle. Relatively speaking, the atoms in a flat, apparently motionless tabletop are much smaller than the waves on the ocean from 35,000 feet.
4.1.3.1.1. surface of ocean appears flat and motionless from 35,000 ft
4.1.3.1.2. size of atoms in relation to flat tabletop is smaller than waves to flat ocean
4.1.3.2. set stage to understand motion of atoms in same terms as motion of planets
The understanding of motion which Galileo's insight and experimental genius gave us allowed us to eventually see that all types of motion, alteration, natural, violent, and celestial, can be
viewed in the same terms. We now believe that the laws of motion apply the same to all objects regardless of size, even in apparently solid objects as well as planets.
4.1.3.2.1. Newton's Gravitation implied that laws of nature are universal, apply to all objects no matter how small
4.1.3.2.2. later scientists thought it should relate to atoms in apparently solid objects as well as planets
4.1.3.3. understanding motion allows relationship with forces to be seen
The relationship between motion and forces, as stated by Newton, was the breakthrough which finally allowed us to make the connections between planetary motion and atomic motion.
We will continue to explore these topics throughout the remainder of the course.
4.2. Speed vs. velocity
Now we are ready to get down to business with the concept of motion. First of all we will define what we mean by speed, then distinguish between speed and velocity, and between average and
instantaneous speed and velocity.
It is useful, when trying to understand something, to first define what the thing is.
In an earlier lesson (lesson 2) we noted that it is impossible to study something until you know that it is a thing. The more specific the definition the better the odds that we know what we are
talking about.
4.2.1. speed: distance divided by time
It is easy enough to decide which horse moves faster in a race. The one which reaches the finish line first must have run faster. That's intuitive.
But how would you compare the speeds of two horses who ran the same track on different days, or on different tracks on the same day?
Another way to think of the speed of the horses, is that the one which runs the fastest will cover the given distance (the length of the race) in the least time.
We will define speed simply as distance divided by time. Thus the horse which runs a given distance in the least time will have a higher speed.
Please note that in a fraction, the bigger the denominator (the downstairs portion) the smaller the fraction. So 1/4 is a smaller number than 1/3.
So if one horse runs 1 mile in 120 seconds it has a higher speed than one who runs the same 1 mile in 130 seconds. The faster horse will arrive at the finish line ten seconds before the slower,
and 1/120 is a bigger number than 1/130.
Although it seems obvious that this relationship adequately describes speed, the concept of time was never formalized in the description of motion until Galileo gave this definition late in the
sixteenth century. This is partly due to the absence of accurate timekeeping devices, but also due to the changing ideas about the nature of time.
We will return to this topic sporadically in upcoming lessons.
So, if an object moves thirty feet in one second, we say its speed was 30 feet divided by 1 second (30 ft / 1 sec) or thirty feet per second. If it traveled 60 feet in 2 seconds, we say its speed
was 60 feet / 2 seconds, or thirty feet per second. So we can compare the speed of different objects over different distances or times by this ratio which we call speed.
Qualitatively we might think of speed as the rate of change of location or position.
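Stated as a tiny sketch in code (an illustration added here, not part of the lesson's exercises):

```python
# Speed is defined as distance divided by time.
def speed(distance_ft, time_sec):
    return distance_ft / time_sec

print(speed(30.0, 1.0))  # 30.0 ft/sec
print(speed(60.0, 2.0))  # 30.0 ft/sec -- twice the distance in twice the time
```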
4.2.2. average speed and average velocity
In some cases the direction of motion is also important. Presumably in a horse race all the horses are running in the same direction around the track, so we do not need to take that direction
into account.
We can then define velocity as change in position divided by time, or the rate of change of location.
We have no guarantee that the horse did not speed up or slow down during the race. In fact, without more careful measurements, all we know is that the horse traveled a certain distance, in a
certain direction, in a certain amount of time. We have no information about how that was achieved.
Recall the fable of the tortoise and the hare. The tortoise's slow, steady pace compared with the hare's erratic and changing pace allowed both of them to cover the same distance in approximately the same amount of time.
Similarly, when you drive from home to work or school, it is unlikely that you move at a constant speed in traffic. Your speedometer will show many different speeds. But regardless of the details
of stopping and starting, speeding and slowing, if it takes you one hour to travel twenty miles, then your average speed was twenty miles per hour.
To be more precise we might specify not only distance, but also direction. Obviously two horses running in different directions at the same speed would reach different locations, so both speed and
direction are important in describing motion.
We might say at this point that velocity is speed in a certain direction.
The distinction between speed and velocity is not important most of the time, and it is not important when trying to visualize the concept of speed as the rate of change of position or location.
When direction is important, such as in trying to figure out where you will be one hour from now if you average sixty miles per hour, we will use it. When direction is not important, such as when
everything is moving in the same direction all the time, we will not use it.
Don't be confused by this. It is just a way to simplify our considerations by eliminating those things which are not important to understanding a situation, a step in the process of parsimony.
4.2.2.1. average is total distance divided by total time
4.2.2.2. velocity is speed in a certain direction
4.2.3. instantaneous velocity
What about the reading on the car's speedometer? Is that measuring average speed or is it measuring something else? What it is really measuring is instantaneous velocity.
4.2.3.1. average velocity over a small interval of time
We will define instantaneous velocity as the average velocity at a particular instant of time or over a small interval of time.
It is necessary to do this because we rarely move at a constant speed for sustained periods of time, for example during the daily commute. The speedometer on the car is actually measuring
instantaneous speed.
The concept of instantaneous velocity is a little more abstract than average velocity. How are we to determine what interval is small enough to be considered instantaneous? We take some small
time interval over which the speed does not change. No matter what the state of motion, there will be some interval where the speed is nearly constant. The limit of that interval is the
instantaneous velocity.
As a qualitative example, suppose you look at the speedometer on a car which is braking as it slows from 30 mph (miles per hour) to 10 miles per hour. Instantaneous speed is the position of
the speedometer needle at a given instant. But how long is an instant? Instantaneous speed could be measured by taking a photograph of the speedometer needle as it drops from 30 to 10. But a
photograph isn't really instantaneous because the light must fall on the film for some period of time in order to register an image.
Normally we take photographs of things which are stationary, unless we want to indicate motion, in which case the moving object is blurred. Whether or not the needle is blurry or sharp on the
photograph depends on two things, the rate that the needle is moving and the length of time that the shutter is open on the camera. The faster the speed changes, the faster the shutter speed
(the smaller the time interval) that is necessary in order to show a sharp image on the photo.
To get a quantitative example, let's look at some numbers.
4.2.3.2. numerical example
Suppose a car is traveling at sixty miles per hour. Then in one hour, if its speed does not change, it will have traveled sixty miles.
At that rate the car will cover eighty eight feet in one second. You can check it out by doing a conversion if you want, that 60 mi/hr = 88 feet/sec.
It is much more likely that the car will maintain a constant speed for one second than for a whole hour, and actually cover 88 feet in that time. If that is the case we could say that its average velocity over that one second interval is 88 ft/sec, and we could calculate that if it continued at that rate for one hour it would travel sixty miles.
But suppose that during that one second the car speeds up or slows down?
Easy, then we could look at the distance traveled in a smaller interval, say 1/10 of a second. At 60 miles per hour, or 88 ft/sec, the car should travel 8.8 feet in 1/10 sec. But it would
also travel 0.88 ft in 1/100 of a second and 0.088 feet in 1/1000 of a second and 0.0088 feet in 1/10,000 of a second, etc.
Eventually we will find some small interval of time during which the speed will not change measurably, then we have found the instantaneous velocity.
All of these ratios are numerically equivalent:
60 miles/1 hour = 88 ft/1 sec = 8.8 ft/0.1 sec = 0.88 ft/0.01 sec, etc.
They all represent the same speed even though the car may not continue at that speed except during the interval over which we measure it.
4.2.3.2.1. if traveling at constant instantaneous velocity will cover a certain distance in a certain time
4.2.3.2.2. constant velocity means that instantaneous velocity equals average velocity for long period of time
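Here is a small numerical sketch of that limiting process (the position function is an invented illustration, chosen so the instantaneous speed at t = 1 second is 88 ft/sec):

```python
# Average speed over ever-smaller intervals around t0 = 1 second.
def x(t):
    return 44.0 * t**2   # position in feet; speed grows steadily, reaching 88 ft/sec at t = 1

t0 = 1.0
for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, (x(t0 + dt) - x(t0)) / dt)
# Prints 132.0, 92.4, 88.44, 88.044 -- the averages settle on 88 ft/sec,
# the instantaneous speed at t0.
```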
4.2.4. Summary of Speed and Velocity
If these concepts are not immediately clear to you then you will understand why Aristotle and everyone else until Galileo had so much difficulty with describing motion.
These concepts are clearly defined, and not really as difficult to understand as they seem. You will find it useful to separate the definitions and the meaning of the terms. Remember, one of the
useful discoveries of modern times is that we do not have to completely understand everything in order to use it.
Like the VCR, we don't have to know everything about how motion works, but we do need to know where the controls are and what happens when we change them.
You should read the descriptions of motion in the textbooks and in the laboratory exercise on describing motion. Like anything new it takes awhile to assimilate and begin to feel comfortable with
this concept of motion.
So, let us say this concisely. Instantaneous velocity is the average velocity over a small interval during which speed remains constant. Whenever speed remains constant then average velocity and
instantaneous velocity are the same. An object which continues to move at a constant velocity will cover equal amounts of distance in equal times, and the distance covered in any time interval
will be proportional to the time interval.
Can you think of other ways to state this relationship?
4.3. Acceleration
Now we are ready to deal with situations in which the instantaneous speed changes. To keep it simple, we will only consider those situations in which the speed changes in a regular way. Otherwise
simple arithmetic is no longer sufficient to describe the motion, and we want to keep it simple.
4.3.1. acceleration is change of velocity divided by time
We will define acceleration as the time rate of change of instantaneous velocity.
Suppose that a car accelerates from rest to 60 miles per hour at a constant rate. Then we might measure its instantaneous velocity each second and record it in a table. We could measure the
instantaneous velocity by taking a photograph of the speedometer each second, although this is not the only method.
Suppose that it takes the car exactly 12 seconds to reach a speed of 60 mph, starting from rest. The table of speeds would look like this:
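time (sec):   0   1    2    3    4    5    6    7    8    9   10   11   12
speed (mph):  0   5   10   15   20   25   30   35   40   45   50   55   60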
What do these numbers tell us about the speed and the rate at which it changes? Let's see.
The car went from rest (0 mph) to sixty miles per hour (60 mph) in twelve seconds (12 sec). So its rate of change of speed was sixty miles per hour in twelve seconds or five miles per hour per
second. (60 mph/12 sec = 5 mph/sec). This means that at the end of each second it was traveling five miles per hour faster than at the end of each previous second.
Sure enough, it we look at the table, that is exactly what happened. The velocity increases by 5 miles per hour for each second. We might say that five miles per hour is added to the velocity
each second.
We call this uniform acceleration because the rate of increase of speed is constant.
Notice that we have used the concept of instantaneous velocity at each second although the units are in miles per hour. We might also say that the car accelerated at the rate of five miles per
hour per second, or equivalently 7.33 feet per second per second (88 ft/sec divided by 12 sec).
Notice how the unit of time is compounded in this representation. It is used twice, once to indicate the velocity (rate of change of location) and again to indicate the acceleration (rate of
change of velocity).
This compounding of the time unit or any other unit has no precedent in any recorded thought before Galileo. It is a sophisticated concept born of genius, one of many such strokes on the part of Galileo.
It is preferable, to keep things simple, to speak of time in the same terms. We prefer to think of it as 7.33 feet per second per second rather than 5 miles per hour per second.
Note how the unit is constructed. Feet per second per second really means (feet per second) per second. We normally abbreviate this as feet per second squared, so we would say the acceleration of
the car was 7.33 feet per second squared.
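A short sketch of this unit bookkeeping (illustrative code, reproducing the numbers above):

```python
# The accelerating car of section 4.3.1: 0 to 60 mph in 12 seconds.
a_mph_per_sec = 60.0 / 12.0      # 5 mph added to the velocity each second
a_ft_per_sec2 = 88.0 / 12.0      # the same rate in compound units, ~7.33 ft/sec^2

for t in range(0, 13, 2):
    print(t, "sec:", a_mph_per_sec * t, "mph")
print("acceleration:", round(a_ft_per_sec2, 2), "ft per second per second")
```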
4.3.2. acceleration is the time rate of change of speed
4.4. Graphs of Motion
Now let's turn our attention to a graphic way of visualizing these relationships. What we are about to see is one of the more amazing mathematical phenomena. It is indispensable to our modern science.
That such relationships exist between numbers and pictures is amazing enough. That we can use the same relationships to describe and visualize something abstract like motion is even more so.
The type of graph we will use dates back to the early part of the seventeenth century and Rene Descartes. Descartes was a French expatriate, self-exiled to Holland to avoid the irrational wrath of
the Inquisition in France. In Holland the climate was much looser. There the concern was more in making money and establishing trade than in persecuting and punishing people for believing in the
wrong things or for having the improper ideas.
Descartes formalized the process of analytic geometry which allows for a relationship between equations and graphs on a Cartesian coordinate system.
You have probably seen these Cartesian graphs before. Don't worry, we will be using them in a much different way than you might have in Algebra II.
Our interest in the graphs is the relationship between the quantities which are plotted, not in the plot itself.
4.4.1. Cartesian Coordinates
The Cartesian coordinate system is two number lines which intersect at a point called the origin. Each line is called an axis of the graph. The scale on the two axes may be the same, but it
doesn't have to be.
Numbers can be measured off on the two axes and a point plotted which represents the distance from the origin in both directions. For example we might choose a graph on which to plot the measured
pairs of values of time and distance from the above table (section 4.3.1). One of the data pairs is plotted on the graph below. Can you determine which one it is? If you can not, then take a few
minutes to figure out how the point is plotted before continuing.
With this understanding of the graph, its coordinates and the way a point is plotted, we are ready to look at relationship between the numbers and the shape of the graph. Before we actually study
this graph we want to back up a little and consider shapes of graphs and the relationships they reveal.
4.4.2. Constant velocity
First let's look at a graph of distance and time for an object which is moving at a constant speed. On this graph we are using Cartesian coordinates but we have not put numbers on the axes. We do
this because we want to illustrate the nature of a general relationship between distance and time. The coordinate axes are simply shown with arrows which indicate increasing distance and time.
By definition, a constant speed means that the same distance is traveled in equal time intervals. If the speed is constant, then the relationship, or ratio between distance and time remains the
same, as shown on the graph.
You will notice that this constant relationship between distance and time creates a straight line graph.
The size of the interval of distance compared to the interval of time is a measure of the speed.
Consider an object moving at a faster constant speed. It will cover a greater distance in a given time interval than the slower object. How would the graph of the faster object compare with that
of the slower object? What would the graph of a stationary object look like? What would a vertical line on the graph indicate?
4.4.3. Constant of proportion
The number which represents the ratio between the two quantities plotted on the graph can be characterized in several ways. First and foremost it is the number which must be added to the vertical
axis for each interval on the horizontal axis. In our example above, an acceleration of 5 mph/sec means that 5 mph is added to the velocity each second. The number and corresponding unit, 5 mph/
sec, is the constant of proportion. It is also the number which when multiplied by the horizontal quantity gives the corresponding vertical quantity. We saw in an earlier lesson that
multiplication is nothing more than repeated additions. Here is a concrete example of that relationship.
Note that there is both a qualitative and a quantitative relationship here. Qualitatively, we can say that when time increases, velocity will also increase in direct proportion. The term direct
proportion means that the constant of proportion, whatever the number, is unchanging. Another way to say this is that the two quantities have a constant ratio and plot as a straight line on a
graph which trends upwards. Another way to visualize the direct proportion is to note that if one quantity increases by a certain factor, the other increases by the same factor. For instance, if
the time increases from 4 seconds to 8 seconds (a factor of two) the velocity increases from 20 mph to 40 mph (a factor of two). If you examine the table you will see similar relationships for a
factor of 4 (from 2 seconds to 8 seconds), 5 (from 2 seconds to 10 seconds), and so forth.
The constant of proportion is also represented quantitatively by a number which represents the ratio, in this case the number five.
We could say for this graph that it is qualitatively a direct proportion with a quantitative constant of proportion equal to 5 mph/sec. The constant of proportion on a linear graph is quantified
by the slope (rise over run) of the straight line on the graph. Changing the rate of acceleration will change the numbers but will not alter the type of relationship between the numbers as long
as the acceleration is uniform.
The role of a constant of proportion can be illustrated with a simple example.
Suppose an item at the store is sold by the pound (or by any weight unit), such as fresh fish. The act of selling by the pound (with no discount for large purchase) is a direct (linear)
proportion. Obviously, the more you buy the more it will cost, and if you buy twice the weight it will cost twice the money. This is the obvious qualitative relationship.
To know exactly how much fish you can buy with a certain amount of money, you have to know the price per pound. This is a ratio of cost to weight which is a constant of proportion. So if the fish
is $6.00 per pound then you can easily figure the cost for any amount of fish. It doesn't matter whether you buy one pound, two pounds, or a fraction or a decimal of a pound. The cost will always
equal the price per pound times the weight in pounds. If the price changes, it does not alter the qualitative nature of the relationship. You can still buy twice as much fish with twice as much
money. All that changes is the actual amount of fish you can buy with a certain amount of money.
This concept of a constant of proportion is extremely important in understanding the principles that we will be considering in future lessons. To be sure that you understand it, think of other
examples of direct proportion, such as the number of revolutions of a car's tire compared to the distance traveled. Can you think of others?
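In code, a direct proportion is nothing more than multiplication by the constant of proportion (a sketch with made-up prices):

```python
price_per_lb = 6.00   # the constant of proportion, dollars per pound
for weight in (0.5, 1.0, 2.0):
    print(weight, "lb costs", price_per_lb * weight, "dollars")
# Doubling the weight doubles the cost; changing the price changes the
# slope of the line but not its straightness.
```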
4.4.4. Irregular velocity
Here is a graph of an object moving at an irregular velocity. The line is not straight. You can see that the steeper the graphed line the greater distance is covered in equal time, or the faster
the speed.
From this information we can state a general principle: On a graph of distance and time, the steeper the slope of a line, the greater the velocity it represents. If the line is straight, it
indicates a constant speed, if it is curved it indicates a changing speed.
4.4.5. Regularly Increasing Velocity
In this figure we see the graph of an object which has a regularly increasing velocity. Notice that in each successive time interval the distance traveled increases a little more than in the
previous interval. This produces a smooth upward curve. Without further proof we will state: A graph of distance and time which represents uniform acceleration will curve upwards in the shape of
a parabola. The parabola is a conic section. A graph in the shape of a parabola shows a second power relationship. When the parabola bends upwards, the relationship is between the first power of
the quantity plotted on the vertical axis and the second power of the quantity plotted on the horizontal axis. In this case, for uniform or constant acceleration, distance is proportional to the
second power of time. We say, distance is proportional to time squared.
The curve is not exactly a parabola in this graph because we have plotted average velocity over a fairly large time interval rather than instantaneous velocity.
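Stated symbolically (the standard result, quoted here without derivation): for uniform acceleration $a$ starting from rest,

$$d = \tfrac{1}{2}\,a\,t^{2}, \qquad \text{so}\quad d \propto t^{2}.$$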
Now we are ready to look at the graph of velocity vs. time for uniform acceleration.
4.4.6. Velocity vs. time graph
Now let's look at the graph of our velocity and time data for the accelerating car. What shape would we expect the graph to have? Let's guess first, then see if we are correct.
In our graph of constant velocity above, we saw that when the relationship between distance and time was constant, the graph plotted as a straight line. Is this true in general? If so then the
graph of velocity vs. time for our accelerating car should be a straight line because we know that the velocity changes at a constant rate. So we expect a straight line on our plot of velocity
and time for uniform acceleration.
You might want to plot the data pairs in the table on the graph above before you look at our graph.
You will see that the points fall on a straight line, just as we expected.
This should not really be surprising, for the same reason that it should not be surprising that a flat board laid on a staircase should touch each of the stair treads, assuming the board is
straight and the stairs are constructed properly with each tread and riser exactly the same size.
If the same increment of velocity is added in each equal time interval, then the relationship between the two, like the relationship between the tread and rise on the stairs, is a constant one.
4.4.6.1. Instantaneous velocity vs. average velocity
Now consider the following case. A car sits at rest at a traffic light. Another car approaches the light in the adjacent lane, moving at a constant speed. At exactly the instant that the
light changes two things happen. The oncoming car, call it A, is beside the stationary car, call it B, just as the latter begins to accelerate.
Now suppose that car A continues at a constant velocity, and car B accelerates at a constant rate such that they both reach the next intersection at exactly the same instant.
If you have difficulty visualizing this situation,refer to the video program for visual reinforcement, or look at a streaming movie of the animation.
There are several questions we might ask about the motion of the two cars and the graphs of their motion in the way of analysis of this situation.
1. What is the average speed of the two cars?
2. How do the final speeds of the two cars compare?
4.4.6.1.1. Average Speeds
From our definition of average speed, total distance divided by time, it is clear that the two cars must have the same average speed. They travel the same distance in the same amount of
time. Their average speeds (velocities) must be the same according to the definition of average speed. How can this be? Obviously the details of their motion are very different. Car A
moves at a constant speed while car B accelerates from rest to some final speed, achieved at the next intersection. A graph of the velocity vs. time for the two cars can help to
illuminate the situation. The graphs are shown in the following figures.
The rectangle is formed by the constant velocity of car A. The horizontal line labeled "average velocity" represents both the instantaneous and average velocity since the speed does not change.
Here's the graph of motion for car B.
The large triangle represents the motion of car B. Initially it is at rest (it has zero velocity). At time zero (when the light changes) it begins to accelerate, reaching the intersection
at the exact same time as car A.
4.4.6.1.2. Final Speeds
For the first half of the elapsed time interval, car B is traveling slower than car A, but at exactly halfway in the time interval (halfway in time, but not in distance. Why?) their
instantaneous velocities are the same.
What is happening during the first half of the time interval? What would you see if you were in car A? What would you see if you were in car B?
In order to reach the next intersection at the same time, car B must spend the second half of the time interval catching up to car A. By the time they reach the intersection, at exactly
the same instant, the instantaneous velocity of car B must be exactly twice that of car A.
Why twice? Let's compare the graphs of the two cars.
Look at the two small triangles (blue and pink) in this figure. They are congruent, which means that their two vertical sides are equal in length. It follows that the average velocity is exactly halfway between the initial and final velocities of car B, and therefore the final velocity of car B must be exactly twice that of car A.
4.4.6.1.3. Slopes
On the graph, the rate of acceleration can be represented by the slope of the line. By definition, the slope of a graph is the rise divided by the run. In this graph the rise represents
the change in velocity and the run represents time. If we call the slope a, then symbolically, a = Δv/t (the Δ symbol means "change of").
If you are still with us, you will recognize that this is exactly the definition of acceleration. You will also note that the horizontal line, representing the constant motion of car A
has a slope of zero, indicating that the acceleration of car B is zero, or that it is not accelerating, or that it is moving at constant speed. All three statements are equivalent.
4.4.6.1.4. Areas
Can you see from this graph that the area of the triangle and the area of the rectangle are the same? If you take the smaller triangle on the upper right and flip it (do it mentally, or
make a cutout) you will see that it is exactly congruent with the other small triangle in the lower left. Congruent means they are the same size and shape. This is a good word to look up
in the dictionary if you are not familiar with it. We will come back to the meaning of the area in awhile.
If the slope of this velocity vs. time graph represents the acceleration, what does the area represent? Let's see.
To calculate the area of the rectangle on the graph you would multiply the length times the width. On the graph the length is time and the width is average velocity. So multiplying l x w
is equivalent to multiplying v x t.
But wait. Velocity multiplied by time equals distance. Sure it does. Look at the definition of average velocity: distance divided by time. Symbolically v = d/t so d = vt.
The area of the rectangle equals the distance traveled by car A in the given time interval. Now, we saw above that the area of the rectangle and the area of the large triangle (representing
the motion of car B) are equal.
Do you see the point? Here's the logic and the only conclusion we can reach.
The area of the rectangle represents distance traveled by car A.
The areas of the two graphical figures are equal, and the cars travel the same distance, therefore the area of the large triangle must represent the distance traveled by car B.
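A quick numerical check of this chain of reasoning, with illustrative numbers (not taken from the example itself):

```python
# Car A: constant speed v for time T. Car B: uniform acceleration from rest
# to a final speed of 2v over the same time T.
v, T = 30.0, 60.0                 # ft/sec and seconds, illustrative values

distance_A = v * T                # area of the rectangle
distance_B = 0.5 * (2 * v) * T    # area of the triangle: (1/2) * base * height
print(distance_A, distance_B)     # 1800.0 1800.0 -- equal areas, equal distances
```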
4.4.6.1.5. Putting it together
What we have just discovered, the relationship of graphical geometry to numbers and equations is the basis for all of our modern physics. There are relationships between the variables we
call time, distance, velocity, and acceleration. These relationships are mathematical and graphical. As we will see, Newton used these relationships when he invented calculus. These
relationships between slope and area are general relationships and form the basis for differential calculus (slopes) and integral calculus (areas), which became the backbone of physical
science after Newton showed us how they work.
4.4.6.1.6. Slopes, Areas, and Physical Reality
In general, the slope of a graph is a ratio of the two quantities being plotted. The area of the graph is a product of the same two quantities.
On a graph of velocity vs. time, the slope represents acceleration and the area represents the distance traveled.
The fact that the slope and areas are related in this way for certain physical quantities has got to leave us wondering why it should be so.
The only answer is no answer at all: Whether we like it or not, physical reality is based on relationships and described by mathematics. We can use graphs and equations to represent
change, contrary to Aristotle's teachings. This is one of the greatest revelations of modern science.
Pythagorean? Somewhat, but more than that. We distinguish between coincidental relationships and meaningful ones, although admittedly we can't always tell the difference at first glance.
4.4.6.2. Algebraic relationships between distance, velocity, acceleration, and time
4.5. Summary of Concepts
4.5.1. average velocity is total distance divided by time
4.5.2. instantaneous velocity is average velocity over a suitably small time interval
4.5.3. acceleration is change of velocity divided by time
4.5.4. on a graph of velocity vs. time the following are true:
4.5.4.1. constant or uniform acceleration will plot as a straight line
4.5.4.2. the slope of the graph represents the acceleration
4.5.4.3. the area of the figure formed by the velocity graph represents the distance traveled
These concepts will be studied as an exercise in lab 5.
5. Summary
Our modern view of motion is very different from that of the Scholastics, whose ideas were borrowed almost directly from Aristotle. Aristotle had claimed that mathematics was of no use in describing
change. In his view all change, including motion, was connected to the cosmology of imperfection.
Aristotle described four kinds of motion: alteration, local, violent, and celestial. Each of these had different properties and different causes.
One of Aristotle's greatest errors was in his views on motion. He believed that all motion must have a cause, and that motion could not continue without some impelling force. He attributed the cause
of motion to the Prime Mover, or to the desire of the four elements to seek perfection in their logical place in the sublunar realm. Part of the arguments against a moving Earth were based on
Aristotle's incorrect analysis of motion.
Galileo recognized that if he could prove Aristotle's views on motion to be incorrect then it would be easier to convince people that the Earth could move. If Aristotle could be wrong on one account
then he could be wrong on others too.
Our modern view of motion derives from Galileo's work, which we will study extensively in future lessons. By precisely defining the parameters, or variables of motion, Galileo was able to show
logically and mathematically the relationship between those variables. He did this in such a way as to be able to make measurements of motion in terms of distance and time and show conclusively that
acceleration due to gravity was uniform.
Our modern view incorporates the use of the Cartesian coordinates to show the relationship between time, distance, velocity, and acceleration. From these tools we can see that constant acceleration
produces a straight line on a graph of velocity vs. time. Furthermore we note that uniform (constant) acceleration produces a direct relationship between distance and the second power (square) of
time. We can also show that the slope of this graph represents acceleration and the area of the graph represents distance.
5.1. Aristotle's views on motion were not logical and were easily disproven
5.2. Aristotle's views on motion were incorporated into the Scholastic Philosophy
5.3. Aristotle argued that there were four different kinds of motion, each defined by different properties of substances.
5.4. Galileo recognized that Aristotle's views on motion were weak and that to disprove them would weaken his authority on other matters
5.5. Our modern view of motion is derived from Galileo's work
5.6. Our modern view of motion is a quantitative way of describing change
5.7. Our modern view of motion describes it in terms of measurable quantities of distance and time
5.8. On a graph of velocity vs. time the slope represents acceleration and the area represents distance
5.9. These relationships were instrumental in Newton's development of the calculus and his theory of gravitation | {"url":"http://www2.honolulu.hawaii.edu/instruct/natsci/science/brill/sci122/Programs/p12/p12.html","timestamp":"2014-04-17T09:43:42Z","content_type":null,"content_length":"76862","record_id":"<urn:uuid:5dd68d62-be7e-4507-948d-60f4071c79fa>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
vector angles
August 30th 2008, 12:58 PM
vector angles
A child walks due east on the deck of a ship at 4 miles per hour.
The ship is moving north at a speed of 16 miles per hour.
Find the speed and direction of the child relative to the surface of the water.
Speed = mph
The angle of the direction from the north
I don't know what angle to find but i found the speed to be 16.4924
August 30th 2008, 01:07 PM
Chris L T521
A child walks due east on the deck of a ship at 4 miles per hour.
The ship is moving north at a speed of 16 miles per hour.
Find the speed and direction of the child relative to the surface of the water.
Speed = mph
The angle of the direction from the north
I don't know what angle to find but i found the speed to be 16.4924
The speed looks right.
I assume you came up with the vector equation $\bold v=4\bold i+16\bold j$
Thus, $\parallel \bold v\parallel=\sqrt{16^2+4^2}\approx\color{red}\boxed {16.49}$
Now, to find the angle, use $\vartheta=\tan^{-1}\left(\frac{v_y}{v_x}\right)$, where $v_y~and~v_x$ are the y and x components of vector $\bold v$.
Does this make sense?
August 30th 2008, 01:11 PM
yes it does thank you i had the equation upside down | {"url":"http://mathhelpforum.com/pre-calculus/47168-vector-angles-print.html","timestamp":"2014-04-19T00:36:54Z","content_type":null,"content_length":"6035","record_id":"<urn:uuid:a3886d4c-41d4-457f-ae88-0d7648c6c263>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
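For later readers, the whole computation fits in a few lines (a sketch; note the argument order that trips people up):

```python
import math

vx, vy = 4.0, 16.0                           # east and north components, mph
speed = math.hypot(vx, vy)                   # ~16.4924 mph
angle = math.degrees(math.atan2(vx, vy))     # measured from north: tan^-1(vx/vy)
print(round(speed, 4), round(angle, 2))      # 16.4924 14.04
```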
presenting distance
Lili and Mackenzie teamed up last week with their bands to continue their exploration of estimating the distance from Brightworks to Dolores Park with a measurement unit of their choice. After some
multiplication skill practice, the kids returned to the provocation to do some more math-y exploration, culminating in a presentation on Wednesday. Lili writes all about it below:
Each group used a map of the mission to count the number of long and short blocks between Brightworks and Dolores Park. Using the block the school is on as a sample block, students estimated how many
of their units would make up a short block and how many would make up a long block. Some groups, looking for increased precision, used pieces of paper as makeshift “rulers” to decide that short
blocks were ½ or ⅓ the length of long blocks, etc.
Each group employed different strategies to work out the math. Some added the number of units per block as they went along–creating long strings of complex addition problems. One group started by
thinking about averages–estimating the average number of units in a given block and then doing the multiplication to get a rough answer before diving in with more specific numbers. Another group
created an equation and learned how to cross-multiply. We made use of the base 10 manipulables to make long addition and multiplication more concrete for some groups who were struggling to visualize
the arithmetic they were working out.
On Wednesday, when most groups had already worked out a lot of the math, we gave the students about an hour to create presentations for each other to describe the process they went through to arrive
at an estimate. We emphasized the fact that the numerical answers would be different for each group because everyone started with a different unit. Especially without any definable correct answer,
our presentations would be the only way for us to understand each others’ methodology and eventual answer. It was lovely to remove the absolute from the mathematical process in this way. Everyone got
a different answer based on the conditions they set up for themselves at the beginning of the project.
Clementine, Quinn, Ben, and Jacob solved the problem two different ways: they averaged the number of smaller and longer blocks, and counted the number of smaller and longer blocks and added the totals.
Bruno, Audrey, and Lukas measured the distance in sheets of paper – ten sheets of paper could fit in 3 paving stones, so they counted how many paving stones are in a city block, then used that number
to figure out the total.
Natasha and Norabelle were thrilled about the chance to present their findings, and both handwrote and typed their spell-checked speech to make a confident presentation. They used Natasha’s cubit
(tip of finger to bottom of elbow) to determine the length of a copper pole, then used the copper pole to measure the block, and then added and added and added.
Oscar and Nicky used a long stick to measure and used multiplication and division to do their math work, measuring from the front door to the corner of the block. When prepping for their
presentation, they found that they didn’t show enough work during their figuring and needed to fill in the blanks before presenting. They learned that they actually do need to show their process in
their work!
The presentations created further opportunities for a multi-modal approach to the project. Visual thinkers made diagrams and spent time designing and decorating their presentation boards. Tactile
learners built structures with the base 10 blocks. Auditory learners talked through the steps of the project with peers and collaborators and got to shine during the actual speech-giving part of the
process. Mathematical minds could attempt more precise estimations, making the project more challenging for those who were ready. Because the projects and presentations were so multifaceted, all
different kinds of learners left feeling both accomplished and challenged by the work they had done. | {"url":"http://www.sfbrightworks.org/2013/10/presenting-distance/","timestamp":"2014-04-17T12:33:16Z","content_type":null,"content_length":"27958","record_id":"<urn:uuid:285ba0ff-8f2d-48de-b574-746bbc69b1b8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating capacitive reactance
Apologies if this has been posted/answered else where but I could not find it.
I am trying to develop a model in matlab for a simple (short cylindrical coil) air cored transformer. So far I am capable of defining the coil such as core diameter, turns, length etc.., calculate
the coil resistance, input current, magnetic field produced, flux density at a given distance along the central axis, and potential emf for the secondary coil.
The next thing I need to work out is the impedance Z.
Z = sqrt(R^2 + X^2)
where X = XL - XC
I can calculate XL because I have found an equation to work out the inductance of the coil, L.
L = (r^2 * N^2)/(9r + 10l)
*from the wiki inductor page
so XL = 2πfL.
I run in to problems with capacitive reactance XC. I cannot find anywhere online explaining how to work out the capacitance of a coil so I can calculate the capacitive reactance XC.
XC = 1/(2πfC)
Does anyone here know how to calculate the capacitance of a coil, or even if my understanding is completely off? | {"url":"http://www.physicsforums.com/showthread.php?p=4268715","timestamp":"2014-04-19T04:42:53Z","content_type":null,"content_length":"32251","record_id":"<urn:uuid:39ff9dd0-8da0-4c5c-9745-7c3ad749e6ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
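Not a full answer, but two pointers. The self-capacitance of a single-layer air coil is usually estimated empirically (Medhurst's fit, which depends on the coil's length-to-diameter ratio, is the classic reference). Once you have any estimate for C, the impedance itself is direct. A sketch (Python rather than MATLAB, with made-up values):

```python
import math

def impedance(R, L, C, f):
    """|Z| = sqrt(R^2 + (XL - XC)^2) for a series model of the coil."""
    XL = 2 * math.pi * f * L          # inductive reactance, ohms
    XC = 1 / (2 * math.pi * f * C)    # capacitive reactance, ohms
    return math.sqrt(R**2 + (XL - XC)**2)

# R in ohms, L in henries, C in farads, f in hertz -- values below are illustrative.
print(impedance(R=0.5, L=100e-6, C=50e-12, f=100e3))
```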
Kantorovich-type inequalities for operators via D-optimal design theory
Luc Pronzato, Henry Wynn and Anatoly Zhigljavsky
Linear Algebra and its Applications, Volume 410, 2005.
The Kantorovich inequality is $(z^TAz)(z^TA^{-1}z)\le (M+m)^2/(4mM)$, where $A$ is a positive definite symmetric operator in R^d, z is a unit vector and m and M are respectively the smallest and largest eigenvalues of A. This is generalised both for operators in R^d and in Hilbert space by noting a connection with D-optimal design theory in mathematical statistics. Each generalised bound is found as the maximum of the determinant of a suitable moment matrix.
Determine the Laplace transform\[\large e^{-t} \space t \space \sin2t\]\[f(t)=t \space \sin2t\]\[L[e^{-t}f(t)]=L[f(t)](s+1)\]My Answer\[\frac{ 4s(s+1) }{ (s^2+4)^2 }\]Book Answer\[\frac{ 4(s+1) }{
[(s+1)^2+4]^2 }\]What am I missing?
@zepdrix You feel like looking at another one?
Hmmm I dunno, I gotta get caught up on my Diff EQ homework apparently XD lol
I have a Laplace test tomorrow and I'm just going back thru all the study problems. I feel like I did this right. I don't know. What year are you in school @zepdrix
Mmmmm i dunno, almost done with 2 years i guess :O not too far in yet.
your major?
Pshhhh i dunno, I'll figure that out next semester.. trying to put it off as long as i can, until i absolutely have to decide XD I love love love math.. i can't seem to narrow it down any further
than that :D
Haha, took me 4 years and a couple of different majors before I landed on engineering. I might post more Laplace questions. stay tuned
@AccessDenied Thank you for the solution. I'll study that for future problems :)
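For anyone landing here later: the piece missing in the first attempt is that the shift theorem, \(L[e^{-at}f(t)](s)=F(s+a)\), replaces every s in \(F(s)=4s/(s^2+4)^2\) with s+1 (numerator and denominator alike), which gives exactly the book's answer. A quick check, assuming sympy is available:

```python
from sympy import symbols, exp, sin, laplace_transform, factor

t, s = symbols('t s', positive=True)
F = laplace_transform(exp(-t) * t * sin(2*t), t, s, noconds=True)
print(factor(F))   # equivalent to 4*(s + 1)/(((s + 1)**2 + 4)**2)
```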
simplify \[(\sqrt x+\sqrt 3)(\sqrt x+\sqrt 27)\]\[(\sqrt x \times \sqrt x)+(\sqrt x \times \sqrt 27)+(\sqrt 3 \times \sqrt x)+(\sqrt 3 \times \sqrt 27)\]\[x+\sqrt {27x}+\sqrt {3x}+[\sqrt {3 \
times 27}=\sqrt {81}=9]\]@lala2
thanks so much @ChmE!
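One extra step worth noting: since \(\sqrt{27x}=3\sqrt{3x}\), the middle radicals combine, so the fully simplified answer is \[x+\sqrt{27x}+\sqrt{3x}+9 = x+4\sqrt{3x}+9\]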
Determination of modular elliptic curves by Heegner points
Luo, Wenzhi and Ramakrishnan, Dinakar (1997) Determination of modular elliptic curves by Heegner points. Pacific Journal of Mathematics, 181 (3). pp. 251-258. ISSN 0030-8730. http://
See Usage Policy.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:LUOpjm97
For every integer N ≥ 1, consider the set K(N) of imaginary quadratic fields such that, for each K in K(N), its discriminant D is an odd, square-free integer congruent to 1 modulo 4, which is prime to N and a square modulo 4N. For each K, let c = ([x]−[∞]) be the divisor class of a Heegner point x of discriminant D on the modular curve X = X_0(N) as in [GZ]. (Concretely, such an x is the image of a point z in the upper half plane H such that both z and Nz are roots of integral, definite, binary quadratic forms of the same discriminant D ([B]).) Then c defines a point rational over the Hilbert class field H of K on the Jacobian J = J_0(N) of X. Denote by c_K the trace of c to K.
Item Type: Article
Additional Information: © Copyright 1997, Pacific Journal of Mathematics. To the memory of Olga Taussky-Todd. This Note is dedicated to the memory of Olga Taussky-Todd. Perhaps it is fitting that it concerns heights and special values, as it was while attending the lectures of B. Gross on this topic in Quebec in June 1985 that the second author first met Olga. We would like to thank B. Gross and W. Duke for comments on an earlier version of the article. Thanks are also due to different people, Henri Darmon in particular, for suggesting that a result such as Theorem A above might hold by a variant of [LR]. Both authors would also like to acknowledge the support of the NSF, which made this work possible.
Record Number: CaltechAUTHORS:LUOpjm97
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:LUOpjm97
Alternative URL: http://pjm.math.berkeley.edu/1997/181-3/p13.html
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 833
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 12 Oct 2005
Last Modified: 26 Dec 2012 08:41
The conformal group of $S^n$.
Is there any explicit computation of Conf($S^n$, $g_{std}$), the group of conformal diffeomorphisms of the standard $n$-sphere?
Please ask on math.stackexchange.com. It's a good question but not for here. Also, any exposition of the Yamabe problem is likely to discuss this.
Deane Yang Apr 5 '11 at 14:01
I disagree—this is a fine question for Math Overflow. (Indeed any question whose answer can be found in "any exposition of the Yamabe problem" is probably of interest to research mathematicians.)
Tom Church Apr 5 '11 at 14:34
I am not sure why Yamabe problem is relevant. The group of conformal automorphisms of the n-sphere (with n>1) is the group generated by reflections in round (n-1)-spheres. Any diffeomorphism of a
circle is conformal.
Igor Belegradek Apr 5 '11 at 15:30
My answer to the following question answers this : mathoverflow.net/questions/10066/…
Andy Putman Apr 5 '11 at 15:48
Tom, I concede your point. Igor, many of us differential geometers of a certain age learned about conformal transformations of the sphere, because they play a critical role in the Yamabe problem.
Most differential geometry groups don't really discuss conformal geometry and therefore not this. But any discussion of the Yamabe problem is likely to contain a brief exposition of the topic.
Deane Yang Apr 5 '11 at 16:39
Let's say you want to find all locally conformal maps on some open subset of $\mathbb{R}^n$ where $n\geq 3$. The case of $n = 2$ is rather special, any holomorphic function with nonzero derivative
is locally conformal.
Sticking to the case $n\geq 3$, unwinding the definitions leads to a system of PDEs which can be explicitly solved. This is known as the Liouville theorem. One class of solutions cannot be extended to
the whole $\mathbb{R}^n$ - these are the spherical inversions. Thus one is led to consider the conformal compactification of $\mathbb{R}^n$ - the sphere $S^n$, where the spherical inversions are
defined on the whole space. Conformal compactification means that we can embed $\mathbb{R}^n$ into compact $S^n$ and that the embedding is conformal map (in this case it is the inverse of the
stereographical projection). Now we know from the Liouville theorem that any locally conformal diffeomorphism of the sphere is either translation, rotation, dilatation or spherical inversion. The
maps are quite explicit on $\mathbb{R}^n$. To get the equations on the sphere you have to "conjugate" it with the stereographical projection which is also quite explicit.
In fact, one can describe explicitly isomorphism between the group of conformal diffeomorphisms of $S^n$ and the linear Lie group $\mathrm{SO}(n+1,1)$. For proof of the Liouville theorem and for
details on this isomorphism see notes by Slovák, page 46 onwards.
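A quick dimension count consistent with that isomorphism (standard facts, stated without proof): Liouville's theorem yields $n$ translations, $\binom{n}{2}$ rotations, one dilatation and $n$ special conformal transformations, and $n + \binom{n}{2} + 1 + n = \frac{(n+1)(n+2)}{2} = \dim \mathrm{SO}(n+1,1)$.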
Try the book A Mathematical Introduction to Conformal Field Theory by Martin Schottenloher. Chapters 1 and 2 go over some of the proofs you are looking for and the book is example driven.
Try Lecture One of Eastwood, "Notes on Conformal Differential Geometry" (http://dml.cz/dmlcz/701576).
PDF Ebooks for Search word 'ecet Mathematics Chemistry Physics model papers'
Mathematical Methods for Physics and Engineering : A Comprehensive ...
ence of teaching mathematics and physics to undergraduate and pre-universtiy ...... university and college courses, particularly in physics, engineering and ... http://www.matematica.net/portal/
e-books/Riley,%20Hobson%20and%20Bence%20-%20Mathematical%20Methods%20for%20Physics%20and%20Engineering.pdf File Type:PDF
Physics Beyond the Standard Model
modated by the Standard Model, but we have reasons to expect much more CP violation than it can produce. Physics that goes beyond the Standard Model can ... http://sdsu-physics.org/assets/PDFs/
standard_model1.pdf File Type:PDF
the absence of mathematical physics in Indian mathematical traditions, while ... over the essential of the debate, the author concentrates on the quasi- ... empiricism in mathematics; mixed
mathematics; applications to physics. ... http://bib.tiera.ru/dvd53/Boniolo%20G.%20(Ed),%20Budinich%20P.%20(Ed)
%20-%20Role%20of%20Mathematics%20in%20Physical%20Sciences.%20Interdisciplinary%20and%20Philosophical%20Aspects(2005)(256).pdf File Type:PDF
Mathematical Methods for Physics and Engineering - Matematica.NET
Mathematical Methods for Physics and Engineering. The third edition of this highly acclaimed undergraduate textbook is suitable for teaching all the ... http://www.matematica.net/portal/e-books/
Riley,%2520Hobson%2520and%2520Bence%2520-%2520Mathematical%2520Methods%2520for%2520Physics%2520and%2520Engineering.pdf File Type:PDF
Paper I: Mathematical Physics I
Introduction to Mathematical Physics by Charlie Harper. ( P.H.I., 1995). 6. Higher Engineering Mathematics by B S Grewal, Khanna Publishers (2000). ... http://maitreyi.du.ac.in/Physicshons.pdf File
The Effects of a Model-Based Physics Program with a Physics First ...
Introduction: Physics First (PF) and Its Implementation; Research on ... an inquiry- based approach; Learning physics and algebra concurrently creates ... To make the coherence of scientific
knowledge more evident to students by making it more explicit. ... Using mathematics, information and computer technology, and ... https://tmstec4.mtsu.edu/2013stem/
The%2520Effects%2520of%2520a%2520Model-Based%2520Physics%2520Program%2520with%2520a%2520Physics%2520First%2520Approach%25E2%2580%25A8.ppt File Type:PPT
Teaching Mathematics to Students of Chemistry with Symbolic ...
secondary institution even select chemistry rather than physics because they ... taught with greater or lesser invocation of computer algebra; .... mathematical operations incurs an overhead in the
form of learning to ..... A typical traditional course on differential equations comprises a sequence of recipes for ... http://www.cecm.sfu.ca/CAG/papers/ChemEdPaper.pdf File Type:PDF
*Division of Mathematical Physics, Fukui University, Fukui 910 §1.
*Division of Mathematical Physics, Fukui University, Fukui 910 ..... By introduction of this Pi factor the potential energy receives a repulsive ... http://lib.semi.ac.cn:8080/tsh/dzzy/wsqk/
selected%20papers/Progress%20of%20Theoretical%20Physics%20Supplement/65-10.pdf File Type:PDF | {"url":"http://www.downloadpdffree.com/ecet-Mathematics-Chemistry-Physics-model-papers.pdf/10","timestamp":"2014-04-19T19:34:52Z","content_type":null,"content_length":"44714","record_id":"<urn:uuid:3f5ef46c-161e-40d1-b233-a140ddbfbc2b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equilibrium Constants from Initial Concentrations
Here are some tips and trick for calculating equilibrium concentrations from initial concentrations. So we will take a look at 3 different cases. And the first case we’re going to calculate,
equilibrium concentrations when initial concentrations are given and one equilibrium concentration is known. In all these cases we going to use what’s called and ICE table. And the ‘I’ means Initial,
and the ‘C’ stands for Change, and ‘E’, which should make sense stands for Squilibrium. So we’re going to use the ICE table method.
And in the ICE table method, all units are in molar, for molarity. So it will be Molar, capital M so don’t forget that. If it's moles you got to convert the litres.
Say for example we’re given that we have 1Molar N2, and then we have 2Molar H2, and then we have 0Molar NH3. And then in our ICE table, we have the equilibrium concentration of and NH3 is known. It's
known to be equal to 1M. So we want to find out what the N2 concentration is going to be at equilibrium, and we also what to find out what the H2 concentration is at equilibrium. So what we’re going
to do is, in the ICE table methods, we will have our I line because that’s given, and we have our equilibrium concentration. In our C line, what we’re going to do is, we are going to balance our
equation, because we always need to do that. So if you balance the equation you will end up with a 3 and the 2 here.
And so what we’re going to do is, since we have initially no NH3, no Ammonia, what we’re going to do is, we know the change is going to be positive in the product side. Does it really matter? No, it
doesn’t. So if we actually put minus, then we would actually get a negative change and then we would figure it out.
But from here the change is going to be +x. Now there is going to be a 2 in front of the x, because it matches the coefficient. So it will be +2x, so I have that there.
For the C line for N2, since it's on the opposite side, on the reactant side, it's going to get used up. So I'm going to have –x. And then for the C line for H2, since I have a coefficient of 3, it will be -3x. So remember, the coefficient falls down with the x here. So now what we're going to use is, we're going to use our information here. Because we know that to get the change 2x, we had none to start
with. So zero plus 2x should equal 1M. So if zero plus 2x equals 1, and we'll just drop the molar just for the sake of making it easier for right now. Then, we know that 2x should equal the number 1.
So if we divide, x should equal 0.5. So then, we can actually plug in the x’s in our spots and figure it out. So N2, 1 minus 0.5 will get me 0.5M per equilibrium. And then 2 minus 3 times 0.5, it
will be 2 minus 1.5 so we would be left with 0.5M also. So that’s how we figure out the equilibrium concentrations when we’re given initial concentrations, and equilibrium concentrations of one
species. And so remember, balance your equation and then on your C line, remember your coefficients drop down.
Number 2 is very similar: calculating equilibrium concentrations when initial concentrations and one change are known. We have this equation here, and we will use our ICE table nice and quick. We're given that we have 1 Molar of H2, we have 2 Molar of Br2, and we have 0 Molar of HBr, for example. And say for example, we know the change of the H2 is -0.25M. So we want to figure out the equilibrium
concentrations of H2, the equilibrium concentration of Br2, and the equilibrium concentration of HBr.
Like we did before, remember to balance, so we're going to balance the equation: we'll put a 2 in front of the HBr. Now in our C line we use our x method. We add 2x to the HBr, because the coefficient of 2 is there. We would have -x for the Br2, because its coefficient is 1, and likewise -x for the H2. That means that, based on the change that we know, x should equal 0.25M. And so, if we do the math, 1 minus 0.25 is 0.75M for H2. For Br2 it will be 2 minus 0.25, so that would be 1.75M. And then for HBr it will be 0 plus 2 times 0.25, so that's 0.50M. And so that's how we figure out the equilibrium concentrations when initial concentrations and one change are known.
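If you like, you can verify those numbers with a couple of lines of Python (a minimal sketch using just the values from this example):

```python
# Case 2 check: H2 + Br2 <-> 2 HBr, with a known change of -0.25 M on H2.
x = 0.25              # magnitude of the change on the reactant side
H2 = 1.0 - x          # 0.75 M
Br2 = 2.0 - x         # 1.75 M
HBr = 0.0 + 2 * x     # 0.50 M
print(H2, Br2, HBr)
```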
Now the last part is the trickiest part, and here are some tips and tricks. Calculate the equilibrium concentrations when initial concentrations and the K value is known. So remember, K is the
equilibrium constant value that we have. So we're given that the K of this equation is equal to 794 at 25 degrees Celsius. And we're given our initial concentrations: we'll say that we have 0 Molar of HI, and then we're given that we have 2 Molar H2 and 2 Molar I2. So those are the things that we're given.
So now we want to find the equilibrium concentrations of H2, of I2, and of HI. So we do the same thing: balance the equation. We'll put a 2 in front of the HI, and then we have our I line and that's it. Under our C line, since we have no HI, that means we're going to use up some of the H2. So -x for the H2, since the coefficient is only 1, -x for the I2, and then +2x for the HI.
Now keep in mind, on the C line the -x's do not necessarily need to add up to the +x's on the opposite side. If you look at our first example, N2 plus 3H2 yields 2NH3, you'll notice the x's don't add up on both sides: I have minus 4x total on the left side, and on the product side we have +2x. So those x's do not need to be equal, just to answer that question if you had it.
Under the E line, we would have 2 minus x, 2 minus x, and 2x. Then we use our K expression. If you don't know how to write a K expression, take a look at some of our other videos, which will help you write one. The products go on top in the K expression, so it will be the concentration of HI, squared since the coefficient was 2, over the concentration of H2 times the concentration of I2.
Now we plug our E line into those. So we would have 2x, the quantity squared (don't forget to keep that square there), over 2 minus x times 2 minus x. That's the same thing as saying (2x) squared over (2 minus x) squared, and all of that is equal to 794.
Now there are 3 cases that could happen here, and I have them listed as A, B and C. In case A I have a perfect square, so I can take the square root of both sides to simplify. In case B, if I can't take a square root, I may be able to drop the x's in expressions like 5 plus x or 2 minus x, because when K is very small those x's are negligible compared to the 5 or the 2. I can't eliminate an x that stands alone, since a small number is still a small number by itself; but when it's attached to a 5 or a 2, I can eliminate it, because it's negligible compared to whatever number I have. After I calculate x, I compare it to my concentrations to check that it is indeed less than 5%. That's called the 5% rule that you may have heard about in class.
And then the last one, case C, is using a quadratic. If I can't use the 5% rule, then I would use the quadratic equation, just like in math class, and solve for x that way; it just takes more time. In our example, K equals (2x) squared over (2 minus x) squared, and that equals 794. If you take a look here, we actually do have a perfect square, so we can square root both sides and get 2x over (2 minus x) equals the square root of 794, and then we can solve for x. I won't do that here, to save some time.
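If you want to double-check the algebra with a quick computation, here's a minimal sketch in Python (the K of 794 and the 2 Molar starting amounts are just the numbers from this example):

```python
# Solve 2x/(2 - x) = sqrt(K) for x, then back out the equilibrium values.
from math import sqrt

K = 794.0
root_K = sqrt(K)               # square-rooting both sides of the perfect square
x = 2 * root_K / (2 + root_K)  # rearranged from 2x = root_K * (2 - x)

H2 = I2 = 2 - x                # E line: 2 - x for each reactant
HI = 2 * x                     # E line: 0 + 2x for the product
print(x, H2, I2, HI)           # x is about 1.87, so HI is about 3.73 M
```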
So those are the 3 cases that you would see for calculating equilibrium concentrations from initial concentrations. The first two are pretty easy; all you're doing is some simple algebra. The last one is a little trickier. Hopefully these tips and tricks help lay out a plan for you to attack these problems and figure out how to calculate those equilibrium concentrations. Have a good one.
Barrington, NJ Calculus Tutor
Find a Barrington, NJ Calculus Tutor
...Able to help students improve reading comprehension through specific test-taking strategies and pinpoint necessary areas of vocabulary improvement. Scored 800/800 on January 26, 2013 SAT
Writing exam, with a 12 on the essay. Able to help focus students on necessary grammar rules and help them with essay composition.
19 Subjects: including calculus, statistics, algebra 2, geometry
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain
information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including calculus, geometry, statistics, algebra 1
...Many have known me as Carl The Math Tutor. I have nearly completed a PhD in math (with a heavy emphasis towards the computer science side of math) from the University of Delaware. I have 20+
years of solid experience tutoring college-level math and theoretical computer science, having mostly financed my education that way.
11 Subjects: including calculus, statistics, ACT Math, precalculus
...I conduct research at UPenn and West Chester University on colloidal crystals and hydrodynamic damping. Students I tutor are mostly college-age, but range from middle school to adult. As a
tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring calculus is one of my main focuses.
9 Subjects: including calculus, physics, geometry, algebra 1
...I believe that I have a unique ability to present and demonstrate various topics in mathematics in a fun and effective way. I have worked three semesters as a computer science lab TA at North
Carolina State University, as well as three semesters as a general math tutor for the tutoring center at...
22 Subjects: including calculus, geometry, GRE, ASVAB
Gaussian Integral
September 16th 2009, 11:05 AM #1
Junior Member
Feb 2009
Gaussian Integral
I need to integrate

$\int_{-\infty}^{\infty} A e^{-\lambda (x-a)^2}\,dx = 1$

to find A. I tried 2 different methods but yielded different answers.
Using $\int_{-\infty}^{\infty}e^{-u^2}du=\sqrt{\pi}$:
let $u=\lambda(x-a)$ and $du=\lambda dx$
thus $A=\frac{\lambda}{\sqrt{\pi}}$
Attempt 2, using $\int_{-\infty}^{\infty}Ae^{\frac{-(x-a)^2}{2c^2}}dx=Ac\sqrt{2\pi}$
Here $A=\frac{1}{c\sqrt{2\pi}}$; since $\frac{1}{2c^2}=\lambda$, we have $c=\frac{1}{\sqrt{2\lambda}}$
Therefore $A=\frac{1}{\frac{1}{\sqrt{2\lambda}}\sqrt{2\pi}}=\sqrt{\frac{\lambda}{\pi}}$
I thought both formula should be valid as listed from wiki. What am I doing wrong?
September 16th 2009, 11:22 AM #2
You said: "let $u=\lambda(x-a)$ and $du=\lambda dx$"
In this case though, $-u^2 = -\underbrace{\lambda^2}_{bad}(x-a)^2$. You want $u=\sqrt{\lambda}(x-a)$ and $du=\sqrt{\lambda}\,dx$.
Fixing this will reconcile the two answers you got.
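For anyone who wants a numerical sanity check, here is a minimal sketch (assuming numpy and scipy are available) confirming that the constant from attempt 2 normalizes the integral:

```python
import numpy as np
from scipy.integrate import quad

lam, a = 2.0, 1.0         # arbitrary positive lambda and shift a
A = np.sqrt(lam / np.pi)  # the normalization from the corrected substitution
val, _ = quad(lambda x: A * np.exp(-lam * (x - a)**2), -np.inf, np.inf)
print(val)                # prints ~1.0, so A = sqrt(lambda/pi)
```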
Szemerédi’s theorem
You are currently browsing the tag archive for the ‘Szemeredi’s theorem’ tag.
Tamar Ziegler and I have just uploaded to the arXiv our joint paper “A multi-dimensional Szemerédi theorem for the primes via a correspondence principle“. This paper is related to an earlier result
of Ben Green and mine in which we established that the primes contain arbitrarily long arithmetic progressions. Actually, in that paper we proved a more general result:
Theorem 1 (Szemerédi’s theorem in the primes) Let ${A}$ be a subset of the primes ${{\mathcal P}}$ of positive relative density, thus ${\limsup_{N \rightarrow \infty} \frac{|A \cap [N]|}{|{\
mathcal P} \cap [N]|} > 0}$. Then ${A}$ contains arbitrarily long arithmetic progressions.
This result was based in part on an earlier paper of Green that handled the case of progressions of length three. With the primes replaced by the integers, this is of course the famous theorem of
Szemerédi. Szemerédi’s theorem has now been generalised in many different directions. One of these is the multidimensional Szemerédi theorem of Furstenberg and Katznelson, who used ergodic-theoretic techniques
to show that any dense subset of ${{\bf Z}^d}$ necessarily contained infinitely many constellations of any prescribed shape. Our main result is to relativise that theorem to the primes as well:
Theorem 2 (Multidimensional Szemerédi theorem in the primes) Let ${d \geq 1}$, and let ${A}$ be a subset of the ${d^{th}}$ Cartesian power ${{\mathcal P}^d}$ of the primes of positive relative
density, thus
$\displaystyle \limsup_{N \rightarrow \infty} \frac{|A \cap [N]^d|}{|{\mathcal P}^d \cap [N]^d|} > 0.$
Then for any ${v_1,\ldots,v_k \in {\bf Z}^d}$, ${A}$ contains infinitely many “constellations” of the form ${a+r v_1, \ldots, a + rv_k}$ with ${a \in {\bf Z}^d}$ and ${r}$ a positive integer.
In the case when ${A}$ is itself a Cartesian product of one-dimensional sets (in particular, if ${A}$ is all of ${{\mathcal P}^d}$), this result already follows from Theorem 1, but there does not
seem to be a similarly easy argument to deduce the general case of Theorem 2 from previous results. Simultaneously with this paper, an independent proof of Theorem 2 using a somewhat different method
has been established by Cook, Maygar, and Titichetrakun.
The result is reminiscent of an earlier result of mine on finding constellations in the Gaussian primes (or dense subsets thereof). That paper followed closely the arguments of my original paper with
Ben Green, namely it first enclosed (a W-tricked version of) the primes or Gaussian primes (in a sieve theoretic-sense) by a slightly larger set (or more precisely, a weight function ${u}$) of almost
primes or almost Gaussian primes, which one could then verify (using methods closely related to the sieve-theoretic methods in the ongoing Polymath8 project) to obey certain pseudorandomness
conditions, known as the linear forms condition and the correlation condition. Very roughly speaking, these conditions assert statements of the following form: if ${n}$ is a randomly selected
integer, then the events of ${n+h_1,\ldots,n+h_k}$ simultaneously being an almost prime (or almost Gaussian prime) are approximately independent for most choices of ${h_1,\ldots,h_k}$. Once these
conditions are satisfied, one can then run a transference argument (initially based on ergodic-theory methods, but nowadays there are simpler transference results based on the Hahn-Banach theorem,
due to Gowers and Reingold-Trevisan-Tulsiani-Vadhan) to obtain relative Szemerédi-type theorems from their absolute counterparts.
However, when one tries to adapt these arguments to sets such as ${{\mathcal P}^2}$, a new difficulty occurs: the natural analogue of the almost primes would be the Cartesian square ${{\mathcal A}^2}
$ of the almost primes – pairs ${(n,m)}$ whose entries are both almost primes. (Actually, for technical reasons, one does not work directly with a set of almost primes, but would instead work with a
weight function such as ${u(n) u(m)}$ that is concentrated on a set such as ${{\mathcal A}^2}$, but let me ignore this distinction for now.) However, this set ${{\mathcal A}^2}$ does not enjoy as
many pseudorandomness conditions as one would need for a direct application of the transference strategy to work. More specifically, given any fixed ${h, k}$, and random ${(n,m)}$, the four events
$\displaystyle (n,m) \in {\mathcal A}^2$
$\displaystyle (n+h,m) \in {\mathcal A}^2$
$\displaystyle (n,m+k) \in {\mathcal A}^2$
$\displaystyle (n+h,m+k) \in {\mathcal A}^2$
do not behave independently (as they would if ${{\mathcal A}^2}$ were replaced for instance by the Gaussian almost primes), because any three of these events imply the fourth: for instance, the first three events already place each of ${n, n+h}$ and ${m, m+k}$ in the almost primes, which forces the fourth event ${(n+h,m+k) \in {\mathcal A}^2}$. This blocks the transference strategy for constellations which contain some right-angles to them (e.g. constellations of the form ${(n,m), (n+r,m), (n,m+r)}$) as such constellations soon turn into rectangles such as
the one above after applying Cauchy-Schwarz a few times. (But a few years ago, Cook and Magyar showed that if one restricted attention to constellations which were in general position in the sense
that any coordinate hyperplane contained at most one element in the constellation, then this obstruction does not occur and one can establish Theorem 2 in this case through the transference
argument.) It’s worth noting that very recently, Conlon, Fox, and Zhao have succeeded in removing one of the pseudorandomness conditions (namely the correlation condition) from the transference
principle, leaving only the linear forms condition as the remaining pseudorandomness condition to be verified, but unfortunately this does not completely solve the above problem because the linear
forms condition also fails for ${{\mathcal A}^2}$ (or for weights concentrated on ${{\mathcal A}^2}$) when applied to rectangular patterns.
There are now two ways known to get around this problem and establish Theorem 2 in full generality. The approach of Cook, Magyar, and Titichetrakun proceeds by starting with one of the known proofs
of the multidimensional Szemerédi theorem – namely, the proof that proceeds through hypergraph regularity and hypergraph removal – and attach pseudorandom weights directly within the proof itself,
rather than trying to add the weights to the result of that proof through a transference argument. (A key technical issue is that weights have to be added to all the levels of the hypergraph – not
just the vertices and top-order edges – in order to circumvent the failure of naive pseudorandomness.) As one has to modify the entire proof of the multidimensional Szemerédi theorem, rather than use
that theorem as a black box, the Cook-Magyar-Titichetrakun argument is lengthier than ours; on the other hand, it is more general and does not rely on some difficult theorems about primes that are
used in our paper.
In our approach, we continue to use the multidimensional Szemerédi theorem (or more precisely, the equivalent theorem of Furstenberg and Katznelson concerning multiple recurrence for commuting
shifts) as a black box. The difference is that instead of using a transference principle to connect the relative multidimensional Szemerédi theorem we need to the multiple recurrence theorem, we
instead proceed by a version of the Furstenberg correspondence principle, similar to the one that connects the absolute multidimensional Szemerédi theorem to the multiple recurrence theorem. I had
discovered this approach many years ago in an unpublished note, but had abandoned it because it required an infinite number of linear forms conditions (in contrast to the transference technique,
which only needed a finite number of linear forms conditions and (until the recent work of Conlon-Fox-Zhao) a correlation condition). The reason for this infinite number of conditions is that the
correspondence principle has to build a probability measure on an entire ${\sigma}$-algebra; for this, it is not enough to specify the measure ${\mu(A)}$ of a single set such as ${A}$, but one also
has to specify the measure ${\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)}$ of “cylinder sets” such as ${T^{n_1} A \cap \ldots \cap T^{n_m} A}$ where ${m}$ could be arbitrarily large. The larger ${m}$
gets, the more linear forms conditions one needs to keep the correspondence under control.
With the sieve weights ${u}$ we were using at the time, standard sieve theory methods could indeed provide a finite number of linear forms conditions, but not an infinite number, so my idea was
abandoned. However, with my later work with Green and Ziegler on linear equations in primes (and related work on the Mobius-nilsequences conjecture and the inverse conjecture on the Gowers norm),
Tamar and I realised that the primes themselves obey an infinite number of linear forms conditions, so one can basically use the primes (or a proxy for the primes, such as the von Mangoldt function $
{\Lambda}$) as the enveloping sieve weight, rather than a classical sieve. Thus my old idea of using the Furstenberg correspondence principle to transfer Szemerédi-type theorems to the primes could
actually be realised. In the one-dimensional case, this simply produces a much more complicated proof of Theorem 1 than the existing one; but it turns out that the argument works as well in higher
dimensions and yields Theorem 2 relatively painlessly, except for the fact that it needs the results on linear equations in primes, the known proofs of which are extremely lengthy (and also require
some of the transference machinery mentioned earlier). The problem of correlations in rectangles is avoided in the correspondence principle approach because one can compensate for such correlations
by performing a suitable weighted limit to compute the measure ${\mu( T^{n_1} A \cap \ldots \cap T^{n_m} A)}$ of cylinder sets, with each ${m}$ requiring a different weighted correction. (This may be
related to the Cook-Magyar-Titichetrakun strategy of weighting all of the facets of the hypergraph in order to recover pseudorandomness, although our contexts are rather different.)
Ben Green and I have just uploaded to the arXiv our paper “New bounds for Szemeredi’s theorem, Ia: Progressions of length 4 in finite field geometries revisited“, submitted to Proc. Lond. Math. Soc..
This is both an erratum to, and a replacement for, our previous paper “New bounds for Szemeredi’s theorem. I. Progressions of length 4 in finite field geometries“. The main objective in both papers
is to bound the quantity ${r_4(F^n)}$ for a vector space ${F^n}$ over a finite field ${F}$ of characteristic greater than ${4}$, where ${r_4(F^n)}$ is defined as the cardinality of the largest subset
of ${F^n}$ that does not contain an arithmetic progression of length ${4}$. In our earlier paper, we gave two arguments that bounded ${r_4(F^n)}$ in the regime when the field ${F}$ was fixed and ${n}
$ was large. The first “cheap” argument gave the bound
$\displaystyle r_4(F^n) \ll |F|^n \exp( - c \sqrt{\log n} )$
and the more complicated “expensive” argument gave the improvement
$\displaystyle r_4(F^n) \ll |F|^n n^{-c} \ \ \ \ \ (1)$
for some constant ${c>0}$ depending only on ${F}$.
Unfortunately, while the cheap argument is correct, we discovered a subtle but serious gap in our expensive argument in the original paper. Roughly speaking, the strategy in that argument is to
employ the density increment method: one begins with a large subset ${A}$ of ${F^n}$ that has no arithmetic progressions of length ${4}$, and seeks to locate a subspace on which ${A}$ has a
significantly increased density. Then, by using a “Koopman-von Neumann theorem”, ultimately based on an iteration of the inverse ${U^3}$ theorem of Ben and myself (and also independently by
Samorodnitsky), one approximates ${A}$ by a “quadratically structured” function ${f}$, which is (locally) a combination of a bounded number of quadratic phase functions, which one can prepare to be
in a certain “locally equidistributed” or “locally high rank” form. (It is this reduction to the high rank case that distinguishes the “expensive” argument from the “cheap” one.) Because ${A}$ has no
progressions of length ${4}$, the count of progressions of length ${4}$ weighted by ${f}$ will also be small; by combining this with the theory of equidistribution of quadratic phase functions, one
can then conclude that there will be a subspace on which ${f}$ has increased density.
The error in the paper was to conclude from this that the original function ${1_A}$ also had increased density on the same subspace; it turns out that the manner in which ${f}$ approximates ${1_A}$
is not strong enough to deduce this latter conclusion from the former. (One can strengthen the nature of approximation until one restores such a conclusion, but only at the price of deteriorating the
quantitative bounds on ${r_4(F^n)}$ one gets at the end of the day to be worse than the cheap argument.)
After trying unsuccessfully to repair this error, we eventually found an alternate argument, based on earlier papers of ourselves and of Bergelson-Host-Kra, that avoided the density increment method
entirely and ended up giving a simpler proof of a stronger result than (1), and also gives the explicit value of ${c = 2^{-22}}$ for the exponent ${c}$ in (1). In fact, it gives the following
stronger result:
Theorem 1 Let ${A}$ be a subset of ${F^n}$ of density at least ${\alpha}$, and let ${\epsilon>0}$. Then there is a subspace ${W}$ of ${F^n}$ of codimension ${O( \epsilon^{-2^{20}})}$ such that
the number of (possibly degenerate) progressions ${a, a+r, a+2r, a+3r}$ in ${A \cap W}$ is at least ${(\alpha^4-\epsilon)|W|^2}$.
The bound (1) is an easy consequence of this theorem after choosing ${\epsilon := \alpha^4/2}$ and removing the degenerate progressions from the conclusion of the theorem.
The main new idea is to work with a local Koopman-von Neumann theorem rather than a global one, trading a relatively weak global approximation to ${1_A}$ with a significantly stronger local
approximation to ${1_A}$ on a subspace ${W}$. This is somewhat analogous to how sometimes in graph theory it is more efficient (from the point of view of quantitative estimates) to work with a local
version of the Szemerédi regularity lemma which gives just a single regular pair of cells, rather than attempting to regularise almost all of the cells. This local approach is well adapted to the
inverse ${U^3}$ theorem we use (which also has this local aspect), and also makes the reduction to the high rank case much cleaner. At the end of the day, one ends up with a fairly large subspace $
{W}$ on which ${A}$ is quite dense (of density ${\alpha-O(\epsilon)}$) and which can be well approximated by a “pure quadratic” object, namely a function of a small number of quadratic phases obeying
a high rank condition. One can then exploit a special positivity property of the count of length four progressions weighted by pure quadratic objects, essentially due to Bergelson-Host-Kra, which
then gives the required lower bound.
A few days ago, Endre Szemerédi was awarded the 2012 Abel prize “for his fundamental contributions to discrete mathematics and theoretical computer science, and in recognition of the profound and
lasting impact of these contributions on additive number theory and ergodic theory.” The full citation for the prize may be found here, and the written notes for a talk given by Tim Gowers on Endre’s
work at the announcement may be found here (and video of the talk can be found here).
As I was on the Abel prize committee this year, I won’t comment further on the prize, but will instead focus on what is arguably Endre’s most well known result, namely Szemerédi’s theorem on
arithmetic progressions:
Theorem 1 (Szemerédi’s theorem) Let ${A}$ be a set of integers of positive upper density, thus ${\lim \sup_{N \rightarrow\infty} \frac{|A \cap [-N,N]|}{|[-N,N]|} > 0}$, where ${[-N,N] := \{-N,
-N+1,\ldots,N\}}$. Then ${A}$ contains an arithmetic progression of length ${k}$ for any ${k>1}$.
Szemerédi’s original proof of this theorem is a remarkably intricate piece of combinatorial reasoning. Most proofs of theorems in mathematics – even long and difficult ones – generally come with a
reasonably compact “high-level” overview, in which the proof is (conceptually, at least) broken down into simpler pieces. There may well be technical difficulties in formulating and then proving each
of the component pieces, and then in fitting the pieces together, but usually the “big picture” is reasonably clear. To give just one example, the overall strategy of Perelman’s proof of the Poincaré
conjecture can be briefly summarised as follows: to show that a simply connected three-dimensional manifold is homeomorphic to a sphere, place a Riemannian metric on it and perform Ricci flow,
excising any singularities that arise by surgery, until the entire manifold becomes extinct. By reversing the flow and analysing the surgeries performed, obtain enough control on the topology of the
original manifold to establish that it is a topological sphere.
In contrast, the pieces of Szemerédi’s proof are highly interlocking, particularly with regard to all the epsilon-type parameters involved; it takes quite a bit of notational setup and foundational
lemmas before the key steps of the proof can even be stated, let alone proved. Szemerédi’s original paper contains a logical diagram of the proof (reproduced in Gowers’ recent talk) which already
gives a fair indication of this interlocking structure. (Many years ago I tried to present the proof, but I was unable to find much of a simplification, and my exposition is probably not that much
clearer than the original text.) Even the use of nonstandard analysis, which is often helpful in cleaning up armies of epsilons, turns out to be a bit tricky to apply here. (In typical applications
of nonstandard analysis, one can get by with a single nonstandard universe, constructed as an ultrapower of the standard universe; but to correctly model all the epsilons occurring in Szemerédi’s
argument, one needs to repeatedly perform the ultrapower construction to obtain a (finite) sequence of increasingly nonstandard (and increasingly saturated) universes, each one containing unbounded
quantities that are far larger than any quantity that appears in the preceding universe, as discussed at the end of this previous blog post. This sequence of universes does end up concealing all the
epsilons, but it is not so clear that this is a net gain in clarity for the proof; I may return to the nonstandard presentation of Szemeredi’s argument at some future juncture.)
Instead of trying to describe the entire argument here, I thought I would instead show some key components of it, with only the slightest hint as to how to assemble the components together to form
the whole proof. In particular, I would like to show how two particular ingredients in the proof – namely van der Waerden’s theorem and the Szemerédi regularity lemma – become useful. For reasons
that will hopefully become clearer later, it is convenient not only to work with ordinary progressions ${P_1 = \{ a, a+r_1, a+2r_1, \ldots, a+(k_1-1)r_1\}}$, but also progressions of progressions $
{P_2 := \{ P_1, P_1 + r_2, P_1+2r_2, \ldots, P_1+(k_2-1)r_2\}}$, progressions of progressions of progressions, and so forth. (In additive combinatorics, these objects are known as generalised
arithmetic progressions of rank one, two, three, etc., and play a central role in the subject, although the way they are used in Szemerédi’s proof is somewhat different from the way that they are
normally used in additive combinatorics.) Very roughly speaking, Szemerédi’s proof begins by building an enormous generalised arithmetic progression of high rank containing many elements of the set $
{A}$ (arranged in a “near-maximal-density” configuration), and then steadily prunes this progression to improve the combinatorial properties of the configuration, until one ends up with a single rank
one progression of length ${k}$ that consists entirely of elements of ${A}$.
To illustrate some of the basic ideas, let us first consider a situation in which we have located a progression ${P, P + r, \ldots, P+(k-1)r}$ of progressions of length ${k}$, with each progression $
{P+ir}$, ${i=0,\ldots,k-1}$ being quite long, and containing a near-maximal amount of elements of ${A}$, thus
$\displaystyle |A \cap (P+ir)| \approx \delta |P|$
where ${\delta := \lim \sup_{|P| \rightarrow \infty} \frac{|A \cap P|}{|P|}}$ is the “maximal density” of ${A}$ along arithmetic progressions. (There are a lot of subtleties in the argument about
exactly how good the error terms are in various approximations, but we will ignore these issues for the sake of this discussion and just use the imprecise symbols such as ${\approx}$ instead.) By
hypothesis, ${\delta}$ is positive. The objective is then to locate a progression ${a, a+r', \ldots,a+(k-1)r'}$ in ${A}$, with each ${a+ir}$ in ${P+ir}$ for ${i=0,\ldots,k-1}$. It may help to view
the progression of progressions ${P, P + r, \ldots, P+(k-1)r}$ as a tall thin rectangle ${P \times \{0,\ldots,k-1\}}$.
If we write ${A_i := \{ a \in P: a+ir \in A \}}$ for ${i=0,\ldots,k-1}$, then the problem is equivalent to finding a (possibly degenerate) arithmetic progression ${a_0,a_1,\ldots,a_{k-1}}$, with each
${a_i}$ in ${A_i}$.
By hypothesis, we know already that each set ${A_i}$ has density about ${\delta}$ in ${P}$:
$\displaystyle |A_i \cap P| \approx \delta |P|. \ \ \ \ \ (1)$
Let us now make a “weakly mixing” assumption on the ${A_i}$, which roughly speaking asserts that
$\displaystyle |A_i \cap E| \approx \delta \sigma |P| \ \ \ \ \ (2)$
for “most” subsets ${E}$ of ${P}$ of density ${\approx \sigma}$ of a certain form to be specified shortly. This is a plausible type of assumption if one believes ${A_i}$ to behave like a random set,
and if the sets ${E}$ are constructed “independently” of the ${A_i}$ in some sense. Of course, we do not expect such an assumption to be valid all of the time, but we will postpone consideration of
this point until later. Let us now see how this sort of weakly mixing hypothesis could help one count progressions ${a_0,\ldots,a_{k-1}}$ of the desired form.
We will inductively consider the following (nonrigorously defined) sequence of claims ${C(i,j)}$ for each ${0 \leq i \leq j < k}$:
• ${C(i,j)}$: For most choices of ${a_j \in P}$, there are ${\sim \delta^i |P|}$ arithmetic progressions ${a_0,\ldots,a_{k-1}}$ in ${P}$ with the specified choice of ${a_j}$, such that ${a_l \in
A_l}$ for all ${l=0,\ldots,i-1}$.
(Actually, to avoid boundary issues one should restrict ${a_j}$ to lie in the middle third of ${P}$, rather than near the edges, but let us ignore this minor technical detail.) (The quantity ${\delta^i |P|}$ is natural here, given that there are ${\sim |P|}$ arithmetic progressions ${a_0,\ldots,a_{k-1}}$ in ${P}$ that pass through ${a_j}$ in the ${j^{th}}$ position, and that each one ought to have a probability of ${\delta^i}$ or so that the events ${a_0 \in A_0, \ldots, a_{i-1} \in A_{i-1}}$ simultaneously hold.) If one has the claim ${C(k-1,k-1)}$, then by selecting a typical ${a_{k-1}}$ in ${A_{k-1}}$, we obtain a progression ${a_0,\ldots,a_{k-1}}$ with ${a_i \in A_i}$ for all ${i=0,\ldots,k-1}$, as required. (In fact, we obtain about ${\delta^k |P|^2}$ such progressions by this argument.)
We can heuristically justify the claims ${C(i,j)}$ by induction on ${i}$. For ${i=0}$, the claims ${C(0,j)}$ are clear just from direct counting of progressions (as long as we keep ${a_j}$ away from
the edges of ${P}$). Now suppose that ${i>0}$, and the claims ${C(i-1,j)}$ have already been proven. For any ${i \leq j < k}$ and for most ${a_j \in P}$, we have from hypothesis that there are ${\sim
\delta^{i-1} |P|}$ progressions ${a_0,\ldots,a_{k-1}}$ in ${P}$ through ${a_j}$ with ${a_0 \in A_0,\ldots,a_{i-2}\in A_{i-2}}$. Let ${E = E(a_j)}$ be the set of all the values of ${a_{i-1}}$ attained
by these progressions, then ${|E| \sim \delta^{i-1} |P|}$. Invoking the weak mixing hypothesis, we (heuristically, at least) conclude that for most choices of ${a_j}$, we have
$\displaystyle |A_{i-1} \cap E| \sim \delta^i |P|$
which then gives the desired claim ${C(i,j)}$.
The observant reader will note that we only needed the claim ${C(i,j)}$ in the case ${j=k-1}$ for the above argument, but for technical reasons, the full proof requires one to work with more general
values of ${j}$ (also the claim ${C(i,j)}$ needs to be replaced by a more complicated version of itself, but let’s ignore this for sake of discussion).
We now return to the question of how to justify the weak mixing hypothesis (2). For a single block ${A_i}$ of ${A}$, one can easily concoct a scenario in which this hypothesis fails, by choosing ${E}
$ to overlap with ${A_i}$ too strongly, or to be too disjoint from ${A_i}$. However, one can do better if one can select ${A_i}$ from a long progression of blocks. The starting point is the following
simple double counting observation that gives the right upper bound:
Proposition 2 (Single upper bound) Let ${P, P+r, \ldots, P+(M-1)r}$ be a progression of progressions ${P}$ for some large ${M}$. Suppose that for each ${i=0,\ldots,M-1}$, the set ${A_i := \{ a \
in P: a+ir \in A \}}$ has density ${\approx \delta}$ in ${P}$ (i.e. (1) holds). Let ${E}$ be a subset of ${P}$ of density ${\approx \sigma}$. Then (if ${M}$ is large enough) one can find an ${i =
0,\ldots,M-1}$ such that
$\displaystyle |A_i \cap E| \lessapprox \delta \sigma |P|.$
Proof: The key is the double counting identity
$\displaystyle \sum_{i=0}^{M-1} |A_i \cap E| = \sum_{a \in E} |A \cap \{ a, a+r, \ldots, a+(M-1) r\}|.$
Because ${A}$ has maximal density ${\delta}$ and ${M}$ is large, we have
$\displaystyle |A \cap \{ a, a+r, \ldots, a+(M-1) r\}| \lessapprox \delta M$
for each ${a}$, and thus
$\displaystyle \sum_{i=0}^{M-1} |A_i \cap E| \lessapprox \delta M |E|.$
The claim then follows from the pigeonhole principle. $\Box$
Now suppose we want to obtain weak mixing not just for a single set ${E}$, but for a small number ${E_1,\ldots,E_m}$ of such sets, i.e. we wish to find an ${i}$ for which
$\displaystyle |A_i \cap E_j| \lessapprox \delta \sigma_j |P|. \ \ \ \ \ (3)$
for all ${j=1,\ldots,m}$, where ${\sigma_j}$ is the density of ${E_j}$ in ${P}$. The above proposition gives, for each ${j}$, a choice of ${i}$ for which (3) holds, but it could be a different ${i}$
for each ${j}$, and so it is not immediately obvious how to use Proposition 2 to find an ${i}$ for which (3) holds simultaneously for all ${j}$. However, it turns out that the van der Waerden theorem
is the perfect tool for this amplification:
Proposition 3 (Multiple upper bound) Let ${P, P+r, \ldots, P+(M-1)r}$ be a progression of progressions ${P+ir}$ for some large ${M}$. Suppose that for each ${i=0,\ldots,M-1}$, the set ${A_i := \{
a \in P: a+ir \in A \}}$ has density ${\approx \delta}$ in ${P}$ (i.e. (1) holds). For each ${1 \leq j \leq m}$, let ${E_j}$ be a subset of ${P}$ of density ${\approx \sigma_j}$. Then (if ${M}$
is large enough depending on ${m}$) one can find an ${i = 0,\ldots,M-1}$ such that
$\displaystyle |A_i \cap E_j| \lessapprox \delta \sigma_j |P|$
simultaneously for all ${1 \leq j \leq m}$.
Proof: Suppose that the claim failed (for some suitably large ${M}$). Then, for each ${i = 0,\ldots,M-1}$, there exists ${j \in \{1,\ldots,m\}}$ such that
$\displaystyle |A_i \cap E_j| \gg \delta \sigma_j |P|.$
This can be viewed as a colouring of the interval ${\{1,\ldots,M\}}$ by ${m}$ colours. If we take ${M}$ large compared to ${m}$, van der Waerden’s theorem allows us to then find a long subprogression
of ${\{1,\ldots,M\}}$ which is monochromatic, so that ${j}$ is constant on this progression. But then this will furnish a counterexample to Proposition 2. $\Box$
One nice thing about this proposition is that the upper bounds can be automatically upgraded to an asymptotic:
Proposition 4 (Multiple mixing) Let ${P, P+r, \ldots, P+(M-1)r}$ be a progression of progressions ${P+ir}$ for some large ${M}$. Suppose that for each ${i=0,\ldots,M-1}$, the set ${A_i := \{ a \
in P: a+ir \in A \}}$ has density ${\approx \delta}$ in ${P}$ (i.e. (1) holds). For each ${1 \leq j \leq m}$, let ${E_j}$ be a subset of ${P}$ of density ${\approx \sigma_j}$. Then (if ${M}$ is
large enough depending on ${m}$) one can find an ${i = 0,\ldots,M-1}$ such that
$\displaystyle |A_i \cap E_j| \approx \delta \sigma_j |P|$
simultaneously for all ${1 \leq j \leq m}$.
Proof: By applying the previous proposition to the collection of sets ${E_1,\ldots,E_m}$ and their complements ${P\backslash E_1,\ldots,P \backslash E_m}$ (thus replacing ${m}$ with ${2m}$), one can find an ${i}$ for which
$\displaystyle |A_i \cap E_j| \lessapprox \delta \sigma_j |P|$
$\displaystyle |A_i \cap (P \backslash E_j)| \lessapprox \delta (1-\sigma_j) |P|$
which gives the claim. $\Box$
However, this improvement of Proposition 2 turns out to not be strong enough for applications. The reason is that the number ${m}$ of sets ${E_1,\ldots,E_m}$ for which mixing is established is too
small compared with the length ${M}$ of the progression one has to use in order to obtain that mixing. However, thanks to the magic of the Szemerédi regularity lemma, one can amplify the above
proposition even further, to allow for a huge number of ${E_i}$ to be mixed (at the cost of excluding a small fraction of exceptions):
Proposition 5 (Really multiple mixing) Let ${P, P+r, \ldots, P+(M-1)r}$ be a progression of progressions ${P+ir}$ for some large ${M}$. Suppose that for each ${i=0,\ldots,M-1}$, the set ${A_i :=
\{ a \in P: a+ir \in A \}}$ has density ${\approx \delta}$ in ${P}$ (i.e. (1) holds). For each ${v}$ in some (large) finite set ${V}$, let ${E_v}$ be a subset of ${P}$ of density ${\approx \
sigma_v}$. Then (if ${M}$ is large enough, but not dependent on the size of ${V}$) one can find an ${i = 0,\ldots,M-1}$ such that
$\displaystyle |A_i \cap E_v| \approx \delta \sigma_v |P|$
simultaneously for almost all ${v \in V}$.
Proof: We build a bipartite graph ${G = (P, V, E)}$ connecting the progression ${P}$ to the finite set ${V}$ by placing an edge ${(a,v)}$ between an element ${a \in P}$ and an element ${v \in V}$
whenever ${a \in E_v}$. The number ${|E_v| \approx \sigma_v |P|}$ can then be interpreted as the degree of ${v}$ in this graph, while the number ${|A_i \cap E_v|}$ is the number of neighbours of ${v}
$ that land in ${A_i}$.
We now apply the regularity lemma to this graph ${G}$. Roughly speaking, what this lemma does is to partition ${P}$ and ${V}$ into almost equally sized cells ${P = P_1 \cup \ldots \cup P_m}$ and ${V = V_1 \cup \ldots \cup V_m}$ such that for most pairs ${P_j, V_k}$ of cells, the graph ${G}$ resembles a random bipartite graph of some density ${d_{jk}}$ between these two cells. The key point is that the number ${m}$ of cells here is bounded uniformly in the size of ${P}$ and ${V}$. As a consequence of this lemma, one can show that for most vertices ${v}$ in a typical cell ${V_k}$, the number ${|E_v|}$ is approximately equal to

$\displaystyle |E_v| \approx \sum_{j=1}^m d_{jk} |P_j|$

and the number ${|A_i \cap E_v|}$ is approximately equal to

$\displaystyle |A_i \cap E_v| \approx \sum_{j=1}^m d_{jk} |A_i \cap P_j|.$
The point here is that the ${|V|}$ different statistics ${|A_i \cap E_v|}$ are now controlled by a mere ${m}$ statistics ${|A_i \cap P_j|}$ (this is not unlike the use of principal component analysis
in statistics, incidentally, but that is another story). Now, we invoke Proposition 4 to find an ${i}$ for which
$\displaystyle |A_i \cap P_j| \approx \delta |P_j|$
simultaneously for all ${j=1,\ldots,m}$, and the claim follows. $\Box$
This proposition now suggests a way forward to establish the type of mixing properties (2) needed for the preceding attempt at proving Szemerédi’s theorem to actually work. Whereas in that attempt,
we were working with a single progression of progressions ${P, P+r, \ldots, P+(k-1)r}$ containing a near-maximal density of elements of ${A}$, we will now have to work with a family $
{(P_\lambda, P_\lambda+r_\lambda,\ldots,P_\lambda+(k-1)r_\lambda)_{\lambda \in \Lambda}}$ of such progression of progressions, where ${\Lambda}$ ranges over some suitably large parameter set.
Furthermore, in order to invoke Proposition 5, this family must be “well-arranged” in some arithmetic sense; in particular, for a given ${i}$, it should be possible to find many reasonably large
subfamilies of this family for which the ${i^{th}}$ terms ${P_\lambda + i r_\lambda}$ of the progression of progressions in this subfamily are themselves in arithmetic progression. (Also, for
technical reasons having to do with the fact that the sets ${E_v}$ in Proposition 5 are not allowed to depend on ${i}$, one also needs the progressions ${P_\lambda + i' r_\lambda}$ for any given ${0
\leq i' < i}$ to be “similar” in the sense that they intersect ${A}$ in the same fashion (thus the sets ${A \cap (P_\lambda + i' r_\lambda)}$ as ${\lambda}$ varies need to be translates of each
other).) If one has this sort of family, then Proposition 5 allows us to “spend” some of the degrees of freedom of the parameter set ${\Lambda}$ in order to gain good mixing properties for at least
one of the sets ${P_\lambda +i r_\lambda}$ in the progression of progressions.
Of course, we still have to figure out how to get such large families of well-arranged progressions of progressions. Szemerédi’s solution was to begin by working with generalised progressions of a
much larger rank ${d}$ than the rank ${2}$ progressions considered here; roughly speaking, to prove Szemerédi’s theorem for length ${k}$ progressions, one has to consider generalised progressions of
rank as high as ${2^k+1}$. It is possible by a reasonably straightforward (though somewhat delicate) “density increment argument” to locate a huge generalised progression of this rank which is
“saturated” by ${A}$ in a certain rather technical sense (related to the concept of “near maximal density” used previously). Then, by another reasonably elementary argument, it is possible to locate
inside a suitable large generalised progression of some rank ${d}$, a family of large generalised progressions of rank ${d-1}$ which inherit many of the good properties of the original generalised
progression, and which have the arithmetic structure needed for Proposition 5 to be applicable, at least for one value of ${i}$. (But getting this sort of property for all values of ${i}$
simultaneously is tricky, and requires many careful iterations of the above scheme; there is also the problem that by obtaining good behaviour for one index ${i}$, one may lose good behaviour at
previous indices, leading to a sort of “Tower of Hanoi” situation which may help explain the exponential factor in the rank ${2^k+1}$ that is ultimately needed. It is an extremely delicate argument;
all the parameters and definitions have to be set very precisely in order for the argument to work at all, and it is really quite remarkable that Endre was able to see it through to the end.)
In 1977, Furstenberg established his multiple recurrence theorem:
Theorem 1 (Furstenberg multiple recurrence) Let ${(X, {\mathcal B}, \mu, T)}$ be a measure-preserving system, thus ${(X,{\mathcal B},\mu)}$ is a probability space and ${T: X \rightarrow X}$ is a
measure-preserving bijection such that ${T}$ and ${T^{-1}}$ are both measurable. Let ${E}$ be a measurable subset of ${X}$ of positive measure ${\mu(E) > 0}$. Then for any ${k \geq 1}$, there
exists ${n > 0}$ such that
$\displaystyle E \cap T^{-n} E \cap \ldots \cap T^{-(k-1)n} E \neq \emptyset.$
Equivalently, there exists ${n > 0}$ and ${x \in X}$ such that
$\displaystyle x, T^n x, \ldots, T^{(k-1)n} x \in E.$
As is well known, the Furstenberg multiple recurrence theorem is equivalent to Szemerédi’s theorem, thanks to the Furstenberg correspondence principle; see for instance these lecture notes of mine.
The multiple recurrence theorem is proven, roughly speaking, by an induction on the “complexity” of the system ${(X,{\mathcal X},\mu,T)}$. Indeed, for very simple systems, such as periodic systems
(in which ${T^n}$ is the identity for some ${n>0}$, which is for instance the case for the circle shift ${X = {\bf R}/{\bf Z}}$, ${Tx := x+\alpha}$ with a rational shift ${\alpha}$), the theorem is trivial; at a slightly more advanced level, the case of almost periodic (or compact) systems (in which ${\{ T^n f: n \in {\bf Z} \}}$ is a precompact subset of ${L^2(X)}$ for every ${f \in L^2(X)}$, which is for instance the case for irrational circle shifts) is also quite easy. One then shows that the multiple recurrence property is preserved under various extension operations (specifically, compact
extensions, weakly mixing extensions, and limits of chains of extensions), which then gives the multiple recurrence theorem as a consequence of the Furstenberg-Zimmer structure theorem for
measure-preserving systems. See these lecture notes for further discussion.
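(As an aside, even in the easy almost periodic case the recurrence can be observed numerically. The following minimal sketch, with an arbitrary irrational shift ${\alpha = \sqrt{2}/2}$ and target set ${E = [0,0.1)}$, neither of which comes from the discussion above, searches for a double recurrence time for the circle shift:)

```python
# Find n > 0 with x, T^n x, T^{2n} x all in E = [0, 0.1),
# where Tx = x + alpha is the circle shift on R/Z.
alpha, x = 2 ** 0.5 / 2, 0.0
in_E = lambda t: (t % 1.0) < 0.1

n = next(n for n in range(1, 10**6)
         if all(in_E(x + i * n * alpha) for i in range(3)))
print("recurrence time n =", n)
```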
From a high-level perspective, this is still one of the most conceptual proofs known of Szemerédi’s theorem. However, the individual components of the proof are still somewhat intricate. Perhaps the
most difficult step is the demonstration that the multiple recurrence property is preserved under compact extensions; see for instance these lecture notes, which is devoted entirely to this step.
This step requires quite a bit of measure-theoretic and/or functional analytic machinery, such as the theory of disintegrations, relatively almost periodic functions, or Hilbert modules.
However, I recently realised that there is a special case of the compact extension step – namely that of finite extensions – which avoids almost all of these technical issues while still capturing
the essence of the argument (and in particular, the key idea of using van der Waerden’s theorem). As such, this may serve as a pedagogical device for motivating this step of the proof of the multiple
recurrence theorem.
Let us first explain what a finite extension is. Given a measure-preserving system ${X = (X,{\mathcal X},\mu,T)}$, a finite set ${Y}$, and a measurable map ${\rho: X \rightarrow \hbox{Sym}(Y)}$ from
${X}$ to the permutation group of ${Y}$, one can form the finite extension
$\displaystyle X \rtimes_\rho Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times u, S),$
which as a probability space is the product of ${(X,{\mathcal X},\mu)}$ with the finite probability space ${Y = (Y, {\mathcal Y},u)}$ (with the discrete ${\sigma}$-algebra and uniform probability
measure), and with shift map
$\displaystyle S(x, y) := (Tx, \rho(x) y). \ \ \ \ \ (1)$
One easily verifies that this is indeed a measure-preserving system. We refer to ${\rho}$ as the cocycle of the system.
An example of finite extensions comes from group theory. Suppose we have a short exact sequence
$\displaystyle 0 \rightarrow K \rightarrow G \rightarrow H \rightarrow 0$
of finite groups. Let ${g}$ be a group element of ${G}$, and let ${h}$ be its projection in ${H}$. Then the shift map ${x \mapsto gx}$ on ${G}$ (with the discrete ${\sigma}$-algebra and uniform
probability measure) can be viewed as a finite extension of the shift map ${y \mapsto hy}$ on ${H}$ (again with the discrete ${\sigma}$-algebra and uniform probability measure), by arbitrarily
selecting a section ${\phi: H \rightarrow G}$ that inverts the projection map, identifying ${G}$ with ${H \times K}$ by identifying ${k \phi(y)}$ with ${(y,k)}$ for ${y \in H, k \in K}$, and using
the cocycle
$\displaystyle \rho(y) := \phi(hy)^{-1} g \phi(y).$
Thus, for instance, the unit shift ${x \mapsto x+1}$ on ${{\bf Z}/N{\bf Z}}$ can be thought of as a finite extension of the unit shift ${x \mapsto x+1}$ on ${{\bf Z}/M{\bf Z}}$ whenever ${N}$ is a
multiple of ${M}$.
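As a concrete sanity check of this last example, here is a minimal sketch in Python (with the hypothetical choices ${N=12}$, ${M=4}$) verifying that the natural "carry" cocycle turns the product ${{\bf Z}/M{\bf Z} \times {\bf Z}/(N/M){\bf Z}}$ into the unit shift on ${{\bf Z}/N{\bf Z}}$:

```python
# The unit shift on Z/NZ as a finite extension of the unit shift on Z/MZ.
# Identify x in Z/N with (x mod M, x div M); the cocycle is the carry.
N, M = 12, 4
K = N // M                        # size of the fibre, Z/(N/M)

def rho(y):                       # carry: nontrivial only when y wraps around
    return 1 if y == M - 1 else 0

def S(y, k):                      # skew-product shift S(y, k) = (y+1, k + rho(y))
    return ((y + 1) % M, (k + rho(y)) % K)

for x in range(N):
    y2, k2 = S(x % M, x // M)
    assert y2 + M * k2 == (x + 1) % N   # S matches x -> x + 1 on Z/N
print("cocycle reproduces the unit shift on Z/%d" % N)
```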
Another example comes from Riemannian geometry. If ${M}$ is a Riemannian manifold that is a finite cover of another Riemannian manifold ${N}$ (with the metric on ${M}$ being the pullback of that on $
{N}$), then (unit time) geodesic flow on the cosphere bundle of ${M}$ is a finite extension of the corresponding flow on ${N}$.
Here, then, is the finite extension special case of the compact extension step in the proof of the multiple recurrence theorem:
Proposition 2 (Finite extensions) Let ${X \rtimes_\rho Y}$ be a finite extension of a measure-preserving system ${X}$. If ${X}$ obeys the conclusion of the Furstenberg multiple recurrence
theorem, then so does ${X \rtimes_\rho Y}$.
Before we prove this proposition, let us first give the combinatorial analogue.
Lemma 3 Let ${A}$ be a subset of the integers that contains arbitrarily long arithmetic progressions, and let ${A = A_1 \cup \ldots \cup A_M}$ be a colouring of ${A}$ by ${M}$ colours (or
equivalently, a partition of ${A}$ into ${M}$ colour classes ${A_i}$). Then at least one of the ${A_i}$ contains arbitrarily long arithmetic progressions.
Proof: By the infinite pigeonhole principle, it suffices to show that for each ${k \geq 1}$, one of the colour classes ${A_i}$ contains an arithmetic progression of length ${k}$.
Let ${N}$ be a large integer (depending on ${k}$ and ${M}$) to be chosen later. Then ${A}$ contains an arithmetic progression of length ${N}$, which may be identified with ${\{0,\ldots,N-1\}}$. The
colouring of ${A}$ then induces a colouring on ${\{0,\ldots,N-1\}}$ into ${M}$ colour classes. Applying (the finitary form of) van der Waerden’s theorem, we conclude that if ${N}$ is sufficiently
large depending on ${M}$ and ${k}$, then one of these colouring classes contains an arithmetic progression of length ${k}$; undoing the identification, we conclude that one of the ${A_i}$ contains an
arithmetic progression of length ${k}$, as desired. $\Box$
Of course, by specialising to the case ${A={\bf Z}}$, we see that the above Lemma is in fact equivalent to van der Waerden’s theorem.
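(For the curious, the finitary van der Waerden theorem invoked above can be verified by brute force in the smallest nontrivial case. The following sketch checks the standard fact that ${W(3,2)=9}$: every ${2}$-colouring of a progression of length ${9}$ contains a monochromatic ${3}$-term progression.)

```python
# Brute-force check of the finitary van der Waerden theorem for k = 3, M = 2.
from itertools import product

def has_mono_ap(colouring, k=3):
    n = len(colouring)
    return any(len({colouring[a + i * r] for i in range(k)}) == 1
               for r in range(1, n) for a in range(n - (k - 1) * r))

assert all(has_mono_ap(c) for c in product(range(2), repeat=9))
print("every 2-colouring of {0,...,8} has a monochromatic 3-term progression")
```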
Now we prove Proposition 2.
Proof: Fix ${k}$. Let ${E}$ be a positive measure subset of ${X \rtimes_\rho Y = (X \times Y, {\mathcal X} \times {\mathcal Y}, \mu \times u, S)}$. By Fubini’s theorem, we have
$\displaystyle \mu \times u(E) = \int_X f(x)\ d\mu(x)$
where ${f(x) := u(E_x)}$ and ${E_x := \{ y \in Y: (x,y) \in E \}}$ is the fibre of ${E}$ at ${x}$. Since ${\mu \times u(E)}$ is positive, we conclude that the set
$\displaystyle F := \{ x \in X: f(x) > 0 \} = \{ x \in X: E_x \neq \emptyset \}$

is a positive measure subset of ${X}$. Note for each ${x \in F}$, we can find an element ${g(x) \in Y}$ such that ${(x,g(x)) \in E}$. While not strictly necessary for this argument, one can ensure if one wishes that the function ${g}$ is measurable by totally ordering ${Y}$, and then letting ${g(x)}$ be the minimal element of ${Y}$ for which ${(x,g(x)) \in E}$.
Let ${N}$ be a large integer (which will depend on ${k}$ and the cardinality ${M}$ of ${Y}$) to be chosen later. Because ${X}$ obeys the multiple recurrence theorem, we can find a positive integer $
{n}$ and ${x \in X}$ such that
$\displaystyle x, T^n x, T^{2n} x, \ldots, T^{(N-1) n} x \in F.$
Now consider the sequence of ${N}$ points
$\displaystyle S^{-mn}( T^{mn} x, g(T^{mn} x) )$
for ${m = 0,\ldots,N-1}$. From (1), we see that
$\displaystyle S^{-mn}( T^{mn} x, g(T^{mn} x) ) = (x, c(m)) \ \ \ \ \ (2)$
for some sequence ${c(0),\ldots,c(N-1) \in Y}$. This can be viewed as a colouring of ${\{0,\ldots,N-1\}}$ by ${M}$ colours, where ${M}$ is the cardinality of ${Y}$. Applying van der Waerden’s theorem
, we conclude (if ${N}$ is sufficiently large depending on ${k}$ and ${|Y|}$) that there is an arithmetic progression ${a, a+r,\ldots,a+(k-1)r}$ in ${\{0,\ldots,N-1\}}$ with ${r>0}$ such that
$\displaystyle c(a) = c(a+r) = \ldots = c(a+(k-1)r) = c$
for some ${c \in Y}$. If we then let ${y = (x,c)}$, we see from (2) that
$\displaystyle S^{an + irn} y = ( T^{(a+ir)n} x, g(T^{(a+ir)n} x) ) \in E$
for all ${i=0,\ldots,k-1}$, and the claim follows. $\Box$
Remark 1 The precise connection between Lemma 3 and Proposition 2 arises from the following observation: with ${E, F, g}$ as in the proof of Proposition 2, and ${x \in X}$, the set
$\displaystyle A := \{ n \in {\bf Z}: T^n x \in F \}$
can be partitioned into the classes
$\displaystyle A_i := \{ n \in {\bf Z}: S^n (x,i) \in E' \}$
where ${E' := \{ (x,g(x)): x \in F \} \subset E}$ is the graph of ${g}$. The multiple recurrence property for ${X}$ ensures that ${A}$ contains arbitrarily long arithmetic progressions, and so
therefore one of the ${A_i}$ must also, which gives the multiple recurrence property for ${X \rtimes_\rho Y}$.
Remark 2 Compact extensions can be viewed as a generalisation of finite extensions, in which the fibres are no longer finite sets, but are themselves measure spaces obeying an additional
property, which roughly speaking asserts that for many functions ${f}$ on the extension, the shifts ${T^n f}$ of ${f}$ behave in an almost periodic fashion on most fibres, so that the orbits ${T^
n f}$ become totally bounded on each fibre. This total boundedness allows one to obtain an analogue of the above colouring map ${c()}$ to which van der Waerden’s theorem can be applied.
In Notes 3, we saw that the number of additive patterns in a given set was (in principle, at least) controlled by the Gowers uniformity norms of functions associated to that set.
Such norms can be defined on any finite additive group (and also on some other types of domains, though we will not discuss this point here). In particular, they can be defined on the
finite-dimensional vector spaces ${V}$ over a finite field ${{\bf F}}$.
In this case, the Gowers norms ${U^{d+1}(V)}$ are closely tied to the space ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ of polynomials of degree at most ${d}$. Indeed, as noted in
Exercise 20 of Notes 4, a function ${f: V \rightarrow {\bf C}}$ of ${L^\infty(V)}$ norm ${1}$ has ${U^{d+1}(V)}$ norm equal to ${1}$ if and only if ${f = e(\phi)}$ for some ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$; thus polynomials solve the “${100\%}$ inverse problem” for the trivial inequality ${\|f\|_{U^{d+1}(V)} \leq \|f\|_{L^\infty(V)}}$. They are also a crucial
component of the solution to the “${99\%}$ inverse problem” and “${1\%}$ inverse problem”. For the former, we will soon show:
Proposition 1 (${99\%}$ inverse theorem for ${U^{d+1}(V)}$) Let ${f: V \rightarrow {\bf C}}$ be such that ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f\|_{U^{d+1}(V)} \geq 1-\epsilon}$ for some ${\epsilon > 0}$. Then there exists ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ such that ${\| f - e(\phi)\|_{L^1(V)} = O_{d, {\bf F}}( \epsilon^c )}$, where ${c = c_d > 0}$ is a constant
depending only on ${d}$.
Thus, for the Gowers norm to be almost completely saturated, one must be very close to a polynomial. The converse assertion is easily established:
Exercise 1 (Converse to ${99\%}$ inverse theorem for ${U^{d+1}(V)}$) If ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f-e(\phi)\|_{L^1(V)} \leq \epsilon}$ for some ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$, then ${\|f\|_{U^{d+1}(V)} \geq 1 - O_{d,{\bf F}}( \epsilon^c )}$, where ${c = c_d > 0}$ is a constant depending only on ${d}$.
In the ${1\%}$ world, one no longer expects to be close to a polynomial. Instead, one expects to correlate with a polynomial. Indeed, one has
Lemma 2 (Converse to the ${1\%}$ inverse theorem for ${U^{d+1}(V)}$) If ${f: V \rightarrow {\bf C}}$ and ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ are such that ${|\langle
f, e(\phi) \rangle_{L^2(V)}| \geq \epsilon}$, where ${\langle f, g \rangle_{L^2(V)} := {\bf E}_{x \in G} f(x) \overline{g(x)}}$, then ${\|f\|_{U^{d+1}(V)} \geq \epsilon}$.
Proof: From the definition of the ${U^1}$ norm (equation (18) from Notes 3), the monotonicity of the Gowers norms (Exercise 19 of Notes 3), and the polynomial phase modulation invariance of the
Gowers norms (Exercise 21 of Notes 3), one has
$\displaystyle |\langle f, e(\phi) \rangle| = \| f e(-\phi) \|_{U^1(V)}$
$\displaystyle \leq \|f e(-\phi) \|_{U^{d+1}(V)}$
$\displaystyle = \|f\|_{U^{d+1}(V)}$
and the claim follows. $\Box$
In the high characteristic case ${\hbox{char}({\bf F}) > d}$ at least, this can be reversed:
Theorem 3 (${1\%}$ inverse theorem for ${U^{d+1}(V)}$) Suppose that ${\hbox{char}({\bf F}) > d \geq 0}$. If ${f: V \rightarrow {\bf C}}$ is such that ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f\|_{U^{d+1}(V)} \geq \epsilon}$, then there exists ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ such that ${|\langle f, e(\phi) \rangle_{L^2(V)}| \gg_{\epsilon,d,{\bf F}} 1}$.
This result is sometimes referred to as the inverse conjecture for the Gowers norm (in high, but bounded, characteristic). For small ${d}$, the claim is easy:
Exercise 2 Verify the cases ${d=0,1}$ of this theorem. (Hint: to verify the ${d=1}$ case, use the Fourier-analytic identities ${\|f\|_{U^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^4)^{1/4}}$
and ${\|f\|_{L^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^2)^{1/2}}$, where ${\hat V}$ is the space of all homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${V}$ to ${{\bf R}/{\bf Z}}$, and ${\hat f(\xi) := \mathop{\bf E}_{x \in V} f(x) e(-\xi \cdot x)}$ are the Fourier coefficients of ${f}$.)
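(As a numerical sanity check of these identities, one can compare the combinatorial definition of the ${U^2}$ norm with the Fourier formula in the simplest case ${V = {\bf Z}/p{\bf Z}}$; the sketch below uses the normalisation ${\hat f(\xi) = \mathop{\bf E}_{x} f(x) e(-\xi x/p)}$.)

```python
import numpy as np

p = 7
rng = np.random.default_rng(0)
f = rng.standard_normal(p) + 1j * rng.standard_normal(p)

# Direct computation: ||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1))
#                                      conj(f(x+h2)) f(x+h1+h2).
total = 0j
for x in range(p):
    for h1 in range(p):
        for h2 in range(p):
            total += (f[x] * np.conj(f[(x + h1) % p])
                      * np.conj(f[(x + h2) % p]) * f[(x + h1 + h2) % p])
u2_direct = (total.real / p**3) ** 0.25

# Fourier side: normalised coefficients f_hat(xi) = E_x f(x) e(-xi x / p).
f_hat = np.fft.fft(f) / p
u2_fourier = float(np.sum(np.abs(f_hat) ** 4)) ** 0.25

print(u2_direct, u2_fourier)  # the two agree up to rounding error
```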
The cases of this conjecture for larger values of ${d}$ are more difficult to establish. The ${d=2}$ case of the theorem was established by Ben Green and myself in the high characteristic case ${\hbox{char}({\bf F}) > 2}$; the low characteristic case ${\hbox{char}({\bf F}) = d = 2}$ was independently and simultaneously established by Samorodnitsky. The cases ${d>2}$ in the high characteristic case were established in two stages, firstly using a modification of the Furstenberg correspondence principle, due to Ziegler and myself, to convert the problem to an ergodic theory counterpart, and then using a modification of the methods of Host-Kra and Ziegler to solve that counterpart, as done in this paper of Bergelson, Ziegler, and myself.
The situation with the low characteristic case in general is still unclear. In the high characteristic case, we saw from Notes 4 that one could replace the space of non-classical polynomials ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ in the above conjecture with the essentially equivalent space of classical polynomials ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf F})}$. However, as we
shall see below, this turns out not to be the case in certain low characteristic cases (a fact first observed by Lovett, Meshulam, and Samorodnitsky, and independently by Ben Green and myself), for
instance if ${\hbox{char}({\bf F}) = 2}$ and ${d \geq 3}$; this is ultimately due to the existence in those cases of non-classical polynomials which exhibit no significant correlation with classical
polynomials of equal or lesser degree. This distinction between classical and non-classical polynomials appears to be a rather non-trivial obstruction to understanding the low characteristic setting;
it may be necessary to obtain a more complete theory of non-classical polynomials in order to fully settle this issue.
The inverse conjecture has a number of consequences. For instance, it can be used to establish the analogue of Szemerédi’s theorem in this setting:
Theorem 4 (Szemerédi’s theorem for finite fields) Let ${{\bf F} = {\bf F}_p}$ be a finite field, let ${\delta > 0}$, and let ${A \subset {\bf F}^n}$ be such that ${|A| \geq \delta |{\bf F}^n|}$.
If ${n}$ is sufficiently large depending on ${p,\delta}$, then ${A}$ contains an (affine) line ${\{ x, x+r, \ldots, x+(p-1)r\}}$ for some ${x,r \in {\bf F}^n}$ with ${r \neq 0}$.
Exercise 3 Use Theorem 4 to establish the following generalisation: with the notation as above, if ${k \geq 1}$ and ${n}$ is sufficiently large depending on ${p,\delta}$, then ${A}$ contains an
affine ${k}$-dimensional subspace.
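(To see concretely what the conclusion asserts, the following toy sketch searches a subset of ${{\bf F}_p^n}$ for an affine line by brute force; it illustrates the statement only, and says nothing about how large ${n}$ must be.)

```python
from itertools import product

def find_line(A, p, n):
    """Return an affine line {x, x+r, ..., x+(p-1)r} contained in A (a set of
    tuples in F_p^n), or None. The direction r ranges over nonzero vectors."""
    A = set(A)
    for x in product(range(p), repeat=n):
        for r in product(range(p), repeat=n):
            if any(r):
                line = [tuple((x[i] + t * r[i]) % p for i in range(n))
                        for t in range(p)]
                if all(pt in A for pt in line):
                    return line
    return None

# A subset of F_3^2 of density 5/9 containing the line through (0,0) with
# direction (1,1).
A = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
print(find_line(A, 3, 2))  # [(0, 0), (1, 1), (2, 2)]
```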
We will prove this theorem in two different ways, one using a density increment method, and the other using an energy increment method. We discuss some other applications below the fold.
Ben Green and I have just uploaded to the arXiv a short (six-page) paper “Yet another proof of Szemeredi’s theorem“, submitted to the 70th birthday conference proceedings for Endre Szemerédi. In
this paper we put in print a folklore observation, namely that the inverse conjecture for the Gowers norm, together with the density increment argument, easily implies Szemerédi’s famous theorem on
arithmetic progressions. This is unsurprising, given that Gowers’ proof of Szemerédi’s theorem proceeds through a weaker version of the inverse conjecture and a density increment argument, and also
given that it is possible to derive Szemerédi’s theorem from knowledge of the characteristic factor for multiple recurrence (the ergodic theory analogue of the inverse conjecture, first established
by Host and Kra), as was done by Bergelson, Leibman, and Lesigne (and also implicitly in the earlier paper of Bergelson, Host, and Kra); but to our knowledge the exact derivation of Szemerédi’s
theorem from the inverse conjecture was not in the literature. Ordinarily this type of folklore might be considered too trifling (and too well known among experts in the field) to publish; but we
felt that the venue of the Szemerédi birthday conference provided a natural venue for this particular observation.
The key point is that one can show (by an elementary argument relying primarily on an induction-on-dimension argument and the Weyl recurrence theorem, i.e. that given any real ${\alpha}$ and any integer ${s \geq 1}$, the expression ${\alpha n^s}$ gets arbitrarily close to an integer) that given a (polynomial) nilsequence ${n \mapsto F(g(n)\Gamma)}$, one can subdivide any long arithmetic
progression (such as ${[N] = \{1,\ldots,N\}}$) into a number of medium-sized progressions, where the nilsequence is nearly constant on each progression. As a consequence of this and the inverse
conjecture for the Gowers norm, if a set has no arithmetic progressions, then it must have an elevated density on a subprogression; iterating this observation as per the usual density-increment
argument as introduced long ago by Roth, one obtains the claim. (This is very close to the scheme of Gowers’ proof.)
Technically, one might call this the shortest proof of Szemerédi’s theorem in the literature (and would be something like the sixteenth such genuinely distinct proof, by our count), but that would be
cheating quite a bit, primarily due to the fact that it assumes the inverse conjecture for the Gowers norm, our current proof of which is checking in at about 100 pages…
Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday
conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity
lemma, in which the object being regularised is a function ${f: [N] \rightarrow [0,1]}$ on a discrete interval ${[N] = \{1,\ldots,N\}}$ rather than a graph, and the type of patterns one wishes to
count are additive patterns (such as arithmetic progressions ${n,n+d,\ldots,n+(k-1)d}$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be
decomposed as a degree ${\leq s}$ nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely
tiny in the Gowers uniformity norm ${U^{s+1}[N]}$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in
additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for
counting additive patterns weighted by such nilsequences.
The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree ${1}$ case ${s=1}$, this result
is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; the ${k=4}$ case of
our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.
The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which
proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if ${A \subset [N]}$ has density ${\alpha}$
, and ${\epsilon > 0}$, then there exist ${\gg_{\alpha,\epsilon} N}$ shifts ${h}$ for which ${A}$ contains at least ${(\alpha^4 - \epsilon)N}$ arithmetic progressions of length ${k=4}$ of spacing ${h}$. (The ${k=3}$ case of this conjecture was established earlier by Green; the ${k=5}$ case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a
variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms over ${[N]}$ indeed matches the conjectured value predicted in their first paper.
In all three applications, the scheme of proof can be described as follows:
• Apply the arithmetic regularity lemma, and decompose a relevant function ${f}$ into three pieces, ${f_{nil}, f_{sml}, f_{unf}}$.
• The uniform part ${f_{unf}}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
• The contribution of the (virtual, irrational) nilsequence ${f_{nil}}$ can be controlled using the arithmetic counting lemma.
• Finally, one needs to check that the contribution of the small error ${f_{sml}}$ does not overwhelm the main term ${f_{nil}}$. This is the trickiest bit; one often needs to use the counting lemma
again to show that one can find a set of arithmetic patterns for ${f_{nil}}$ that is so sufficiently “equidistributed” that it is not impacted by the small error.
To illustrate the last point, let us give the following example. Suppose we have a set ${A \subset [N]}$ of some positive density (say ${|A| = 10^{-1} N}$) and we have managed to prove that ${A}$
contains a reasonable number of arithmetic progressions of length ${5}$ (say), e.g. it contains at least ${10^{-10} N^2}$ such progressions. Now we perturb ${A}$ by deleting a small number, say ${10^{-2} N}$, of elements from ${A}$ to create a new set ${A'}$. Can we still conclude that the new set ${A'}$ contains any arithmetic progressions of length ${5}$?
Unfortunately, the answer could be no; conceivably, all of the ${10^{-10} N^2}$ arithmetic progressions in ${A}$ could be wiped out by the ${10^{-2} N}$ elements removed from ${A}$, since each such
element of ${A}$ could be associated with up to ${|A|}$ (or even ${5|A|}$) arithmetic progressions in ${A}$.
But suppose we knew that the ${10^{-10} N^2}$ arithmetic progressions in ${A}$ were equidistributed, in the sense that each element in ${A}$ belonged to the same number of such arithmetic
progressions, namely ${5 \times 10^{-9} N}$. Then each element deleted from ${A}$ only removes at most ${5 \times 10^{-9} N}$ progressions, so the ${10^{-2} N}$ deletions destroy at most ${10^{-2} N \times 5 \times 10^{-9} N = 5 \times 10^{-11} N^2}$ progressions, well below the original count of ${10^{-10} N^2}$; one can therefore safely remove ${10^{-2} N}$ elements from ${A}$ and still retain some arithmetic progressions. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a
given element ${a \in A}$ belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the
arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
A succinct (but slightly inaccurate) summation of the regularity+counting lemma strategy would be that in order to solve a problem in additive combinatorics, it “suffices to check it for
nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking it for nilsequences, the answer in the nilsequence case must be sufficiently
“dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.
One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on
different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we
found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also
looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and
with plain old email) in the future.
In this lecture, we describe the simple but fundamental Furstenberg correspondence principle which connects the “soft analysis” subject of ergodic theory (in particular, recurrence theorems) with the
“hard analysis” subject of combinatorial number theory (or more generally with results of “density Ramsey theory” type). Rather than try to set up the most general and abstract version of this
principle, we shall instead study the canonical example of this principle in action, namely the equating of the Furstenberg multiple recurrence theorem with Szemerédi’s theorem on arithmetic progressions.
A few months ago, I was invited to contribute an article to Scholarpedia – a Wikipedia-like experiment (using essentially the same software, in fact) in which the articles are far fewer in number,
but have specialists as the primary authors (and curators) and are peer-reviewed in a manner similar to submissions to a research journal. Specifically, I was invited (with Ben Green) to author the
article on Szemerédi’s theorem. The article is now reviewed and accepted, and can be viewed on the Scholarpedia page for that theorem. Like Wikipedia, the page is
open to edits or any other comments by any user (once they register an account and login); but the edits are moderated by the curators and primary authors, who thus remain responsible for the content.
Scholarpedia seems to be an interesting experiment, trying to blend the collaborative and dynamic strengths of the wiki system with the traditional and static strengths of the peer-review system. At
any rate, any feedback on my article with Ben, either at the Scholarpedia page or here, would be welcome.
[Update, July 9: the article has been reviewed, modified, and accepted in just three days - a blindingly fast speed as far as peer review goes!]
In this second lecture, I wish to talk about the dichotomy between structure and randomness as it manifests itself in four closely related areas of mathematics:
• Combinatorial number theory, which seeks to find patterns in unstructured dense sets (or colourings) of integers;
• Ergodic theory (or more specifically, multiple recurrence theory), which seeks to find patterns in positive-measure sets under the action of a discrete dynamical system on probability spaces (or
more specifically, measure-preserving actions of the integers ${\Bbb Z}$);
• Graph theory, or more specifically the portion of this theory concerned with finding patterns in large unstructured dense graphs; and
• Ergodic graph theory, which is a very new and undeveloped subject, which roughly speaking seems to be concerned with the patterns within a measure-preserving action of the infinite permutation
group $S_\infty$, which is one of several models we have available to study infinite “limits” of graphs.
The two “discrete” (or “finitary”, or “quantitative”) fields of combinatorial number theory and graph theory happen to be related to each other, basically by using the Cayley graph construction; I
will give an example of this shortly. The two “continuous” (or “infinitary”, or “qualitative”) fields of ergodic theory and ergodic graph theory are at present only related on the level of analogy
and informal intuition, but hopefully some more systematic connections between them will appear soon.
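To give a minimal sketch of the Cayley graph construction in the simplest setting: given a symmetric set ${S}$ in the cyclic group ${{\bf Z}/N{\bf Z}}$ (so that ${s \in S}$ implies ${-s \in S}$, and ${0 \notin S}$), one joins ${x}$ and ${y}$ whenever ${x - y \in S}$.

```python
def cayley_graph(N, S):
    """Edge set of the Cayley graph of Z/NZ with connection set S.
    Assumes S is symmetric (s in S implies (-s) % N in S) and 0 not in S."""
    return {frozenset((x, (x + s) % N)) for x in range(N) for s in S}

# Example: N = 8 with S = {1, 7} (that is, {+1, -1}) gives the 8-cycle.
edges = cayley_graph(8, {1, 7})
print(sorted(tuple(sorted(e)) for e in edges))
```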
On the other hand, we have some very rigorous connections between combinatorial number theory and ergodic theory, and also (more recently) between graph theory and ergodic graph theory, basically by
the procedure of viewing the infinitary continuous setting as a limit of the finitary discrete setting. These two connections go by the names of the Furstenberg correspondence principle and the graph
correspondence principle respectively. These principles allow one to tap the power of the infinitary world (for instance, the ability to take limits and perform completions or closures of objects) in
order to establish results in the finitary world, or at least to take the intuition gained in the infinitary world and transfer it to a finitary setting. Conversely, the finitary world provides an
excellent model setting to refine one’s understanding of infinitary objects, for instance by establishing quantitative analogues of “soft” results obtained in an infinitary manner. I will remark here
that this best-of-both-worlds approach, borrowing from both the finitary and infinitary traditions of mathematics, was absolutely necessary for Ben Green and I in order to establish our result on
long arithmetic progressions in the primes. In particular, the infinitary setting is excellent for being able to rigorously define and study concepts (such as structure or randomness) which are much
“fuzzier” and harder to pin down exactly in the finitary world.
Posts about Third level on Antimatter
“You must be finished for the summer?” Like most academics, I get asked this question every day in summer, usually by village acquaintances convinced that college closes the day the students finish
their exams.
Some lecturers in the Institutes of Technology do indeed take off from June 20th to September 1st; that is their right, given the heavy teaching load during termtime. However, for those of us who try
to keep up the research, the summer months are the time to get something done, just like our colleagues in the universities.
For me, this is no chore – the sheer bliss of being able to do quiet research without classes, meetings, staff interactions and all the rest of it. Very restful. Also, we’re having a serious
heatwave in Ireland this month and I’m happy to escape to the cool, quiet office every day. So I plug away happily during the day and treat myself to a swim in my village in the evenings.
Tide’s in on Lawlor’s Strand in Dunmore East
Actually, I did give some ‘cameo’ lectures this week and last, to our summer school. We have a very nice bunch of engineering, computing and business students visiting from Kiel in Germany, and I had
fun trying to condense my climate science course down to a one-hour presentation for each group. I haven’t given short presentations on climate before; it was very satisfying to prepare (see here for a copy of the talk). The other thing I noticed was that students from the continent always seem to be very mature, polite and interested. I must look into an exchange sometime; do they have Erasmus
for staff?
My main task this summer is to finish my little book on cosmology. It’s based on a course I have taught for some years and it’s been a lot of fun to write. Now I’m finding that it’s one thing to
write a book and quite another to get it published! Still, I have plenty of time now to be writing book proposals and writing to publishers. In the meantime, I look forward to a swim in the sea
every day after work and a walk into the village. It’s funny to live in a village where others come for summer holidays!
Tide’s out on Lawlor’s Strand in Dunmore East
Unfortunately it’s so warm, we’re beginning to get quite a few jellyfish. Hope it cools down a little next week!
A day in the life
There is a day-in-the-life profile of me in today’s Irish Times, Ireland’s newspaper of record. I’m very pleased with it, I like the title – Labs, lectures and luring young people into science – and
the accompanying photo, it looks like I’m about to burst into song! This is a weekly series where an academic describes their working week, so I give a day-to-day description of the challenge of
balancing teaching and research at my college Waterford Institute of Technology in Ireland.
Is this person singing?
There is quite a lot of discussion in Ireland at the moment concerning the role of institutes of technology vs that of universities. I quite like the two-tier system – the institutes function like
polytechnics and tend to be smaller and offer more practical programmes than the universities. However, WIT is something of an anomaly – because it is the only third level college in a largeish city
and surrounding area, it has been functioning rather like a university for many years (i.e. has a very broad range of programmes, quite high entry points and is reasonably research-active). The
college is currently being considered for technological university status, but many commentators oppose the idea of an upgrade – there are fears of a domino effect amongst the other 12 institutes,
giving Ireland far too many universities.
It’s hard to know the best solution but I’m not complaining – I like the broad teaching portfolio of the IoTs, and there is a lot to be said for a college where you do research if you want to, not
because you have to!
I had originally said that the institutes cater for a ‘slightly lower level of student’. Oops! This is simply not true in the case of WIT, given the entry points for many of the courses I teach,
apologies Jamie and Susie. Again, I think the points are a reflection of the fact that WIT has been functioning rather like a university simply because of where it is.
Comments on the article are on the Irish Times page
Filed under Teaching, Third level
Last day at Cambridge Infinities Conference
Today was the third and last day of the ‘Infinities and Cosmology’ conference at Cambridge (there is also a workshop tomorrow, see website). Yesterday saw quite a heavy schedule, with part II of
George Ellis’s ‘Infinities of Age and Size Including Global Topological Issues’, part II of Anthony Aguirre’s ‘Infinite and Finite State Spaces’ and part II of Michael Douglas’s ‘Can We Test the
String Theory Landscape?’ (see previous post for an outline of these topics). We also had a fairly technical talk on ‘Singularities and Cosmic Censorship in General Relativity’ by the Cambridge
mathematician Mihalis Dafermos: nuts-and-bolts talks like these are great for non-relativists like me because you get to see the mathematical tools used in GR research.
The logo for the Infinities in Cosmology conference; an artist’s impression of small universes
Today saw part II of Mihalis’s talk and the lecture ‘Infinite Computations and Spacetime’ by Mark Hogarth, a fascinating exploration of new methods of computation by exploiting relativistic spacetime
. I won’t attempt to summarize either, but the lectures should soon be available on the conference website.
For me, the highlight of the day was the talk ‘At Home and At Sea in an Infinite Universe: Newtonian and Machian Theories of Motion’ by Simon Saunders, the well-known Oxford physicist and
philosopher of physics. This was a superb discussion of Newton’s cosmology, in particular the paradox of gravitational instability in the Newtonian universe of infinite size and absolute, fixed
space. Did Newton realize that our solar system might possess a net acceleration, or did he assume that external gravitational forces somehow cancel out? Drawing on material from Newton’s Principia
and his ‘System of the World’, Professor Saunders argued that Newton assumed the latter, though whether he attributed such a delicate cosmic balancing act to divine intervention or to unknown forces
is not clear. (The possibility of a theological argument is not so fanciful as this work was the first mathematical attempt to try to describe the universe as a whole). Later, Professor Saunders
suggested that it is likely Newton declined to spend too much time on the question simply because it was untestable.
Newton’s famous Principia
There were many other interesting points in this fascinating lecture. Viewing the slides shown from Newton’s Principia, I was struck by the equivalence drawn again and again between bodies at rest
and in uniform motion. This anticipates Einstein’s special theory of relativity and is again slightly in conflict with Newton’s assumption of a fixed, absolute space, as Simon pointed out. All this
hints at a possible difference in Newton’s philosophy towards the universe at large versus motion on local scales – ironic as he was the first scientist to unite terrestrial and celestial motion in a
single framework. I won’t comment further, but the lecture left one eager to read Simon’s recent paper on the subject.
All in all, a superb conference. It was interesting that, even with such distinguished speakers, moderators observed time limits strictly in order to allow plenty of time for questions and comments
after the talks. In some ways, this was the best part; it’s not often one gets to hear to-and-fro arguments between scientists like John Barrow, George Ellis, Julian Babour and Simon Saunders, in the
lecture theatre and over coffee.
Speaking of coffee, one of the best aspects of the conference was the venue. Cambridge’s Department of Applied and Theoretical Physics forms part of its Centre for Mathematical Sciences and is housed
in a lovely modern open-plan building, with the smell of coffee and scones wafting throughout the atrium. What other mathematics institute can boast such a setup? Not DIAS, I’m afraid. Indeed, I’m
writing this post in the quiet atrium/canteen (no annoying background music – that wouldn’t be tolerated here). However, I’ve just realised that we are now finished for the day, so I’m off to do some
sight-seeing at last.
The main atrium in the Center for Mathematical Sciences is one big coffee shop, perfect for group discussions of physics, philosophy and mathematics
The Department of Applied Mathematics and Theoretical Physics forms part of the Centre for Mathematical Sciences at Cambridge
Mid-term in Chamonix
Last week was mid-term and I had a few days skiing in Chamonix in the French Alps. Chamonix lies in the shadow of Mont Blanc, the highest of the Alpine peaks, and the area is famous for its
challenging snowsports and mountain climbing. It was surprisingly easy to get to (1 hr 30 mins from Geneva airport) and the skiing certainly didn’t disappoint.
I stayed with my brother and his family in a tiny chalet in Les Praz, a small village just outside the town of Chamonix. The great advantage of this village is that it offers easy access to La Flégère, a large ski area on the opposite side of the valley to the crowds at Chamonix. We had one day’s skiing out of Flégère, another at Argentière, the next resort along the valley, and the final
day at Le Tour, further down the valley again.
The village of Les Praz in Chamonix
The skiing was great in each case; lots of snow, steep pistes and clear skies almost every afternoon. An extra thrill was the fact that one could ski over the Swiss border and have lunch in
Switzerland. Of the three resorts, Flégère was my favourite; plenty of trees, nice unpisted runs under the lifts and not too many people.
The lonely skier
That said, I retain my preference for skiing in Austria. One reason is that, like many French resorts, Chamonix has relatively few gondolas, a large number of button lifts and uncovered chairlifts.
Button lifts are quite tiring on the feet after a while, while exposed chairlifts can get very cold – a concern at altitudes above 1500 m where the midday temperature is often below -10 degrees
Celsius. In Austria, almost all the main resorts have installed a healthy distribution of small, efficient gondolas and covered chairlifts (in the latter case, the chairs are heated by solar panels
in the plastic cover). There were also far fewer restaurants and cafes on the Chamonix slopes, which I found quite surprising for such a famous resort (coffee breaks are important for the tired
skier). So while the French are justifiably proud of their resorts, I still prefer Austria!
All in all a very good ski holiday, highly recommended…
Filed under Skiing, Travel
Resistors in series and parallel
In the last post, we saw that for many materials, the electric current I through a device is proportional to the voltage V applied to it, and inversely proportional to its resistance, i.e. I = V/R (Ohm’s law). If there is more than one device (or resistor) in a circuit, the current through each also depends on how the resistors are connected, i.e., whether they are connected in series or in parallel.
In a series circuit (below), the resistors are connected one after the other (just as in a TV series, one watches one episode after another). The same current runs through each device since there is
no alternative path or branch, i.e. I = I1 = I2. From V = IR, we see the voltage across each device will be different; in fact, the largest voltage drop will be across the largest resistance (just
as the largest energy drop occurs across the largest waterfall in a river). The total voltage in a series circuit is the sum of the individual voltages, i.e. V = V1+V2. As you might expect, the total
resistance (or load) of the circuit is simply the sum of the individual resistances, R = R1 + R2.
Series circuit: the current is the same in each lamp while there may be a different voltage drop across each (V = V1+V2 +V3)
On the other hand, resistors in a circuit can be connected in parallel (see below). In this case, each device is connected directly to the terminals of the voltage source and therefore experiences
the same voltage (V = V1=V2). Since I = V/R , there will be a different current through each device (unless they happen to be of equal resistance) .The total current in a parallel circuit is the sum
of the individual currents, i.e. I = I1+I2. A strange aspect of parallel circuits is that the total resistance of the circuit is lowered as you add in more devices (1/R = 1/R1 + 1/R2). The physical
reason is that you are increasing the number of alternate paths the current can take.
Parallel circuit: the voltage is the same across each lamp but the currents may be different (I = I1+I2)
Confusing? The simple rule is that in a series circuit, the current is everywhere the same because there are no branches. On the other hand, devices connected in parallel see an identical voltage. In
everyday circuits, electrical devices such as kettles, TVs and computers are connected in parallel to each other because it is safer if each device sees the same voltage source; it also turns out to
be more efficient from the point of view of power consumption (an AC voltage is used, more on this later).
In the lab, circuits often contain some devices connected in series, others in parallel. In order to calculate the current through a given device, redraw the circuit with any resistors in parallel
replaced with the equivalent resistance in series, and analyse the resulting series circuit.
Assuming a resistance of 100 Ohms for each of the resistors in the combination circuit above, calculate the total resistance of the circuit. If a DC voltage of 12 V is applied, calculate the current
in the circuit. (Ans: 133 Ω, 0.09 A)
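A quick way to check answers like these is to compute the combinations step by step. The figure is not reproduced here, but one 100 Ω resistor in series with three 100 Ω resistors in parallel reproduces the stated answers, so that layout is assumed in this sketch:

```python
def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Assumed layout: one 100-ohm resistor in series with three in parallel.
R_total = series(100.0, parallel(100.0, 100.0, 100.0))
I = 12.0 / R_total  # Ohm's law: I = V/R
print(f"R = {R_total:.0f} ohms, I = {I:.2f} A")  # R = 133 ohms, I = 0.09 A
```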
Filed under Teaching, Third level
Current, voltage and the French resistance
Last week, our 1st science students had their first laboratory session on electrical circuits. They haven’t met electricity in lectures yet, so I spent some time explaining the concepts of current
and voltage.
In essence, current is the flow of electric charge around a circuit (measured in amps) while voltage is the energy that drives the current (and is measured in volts). I find it helpful to think of
the two in terms of cause and effect; a current will only flow in the circuit if a voltage is applied. In simple circuits, this energy is supplied in the form of a DC battery (or voltage source) that
drives the current through some device (or resistor) in the circuit.
The lamp (or resistor) lights as the current goes through it, completing the circuit
You might expect that there is a simple relation between voltage and current, and sure enough, the German scientist Georg Ohm discovered that, for many materials, there is a linear relationship
between the two. Ohm’s law states that the current I passing through a material connected to a voltage V is given by the simple equation I = V/R. Here, 1/R is the constant of proportionality, where R is called the electrical resistance; you can see why from the equation: a material with a very large value of R will pass almost no current (bad conductor), while a material with very small R will yield
a large current for the same voltage (good conductor). So the term has exactly the same meaning as it has in ordinary speech, e.g. the French resistance. Resistance is measured in volts per amp, also
known as Ohms (Ω).
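As a worked illustration (using the same numbers as problem 1 at the end of this post):

```python
V = 12.0       # applied voltage, volts
R = 15e3       # resistance, ohms (15 kilo-ohms)
I = V / R      # Ohm's law: current in amps
e = 1.602e-19  # charge of one electron, coulombs
print(I, I / e)  # 0.0008 A (0.8 mA), about 5.0e15 electrons per second
```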
Many materials have a linear relation between voltage and current – the slope of the graph is the material’s resistance
In the experiment, the students apply a series of voltages to an unknown resistance in a circuit, and record the corresponding currents. A plot of voltage versus current then allows them to verify
the linearity of the relation and the resistance is estimated from the slope of the line. (Strictly speaking, one should really put the voltage on the x-axis as it is the independent variable, but
the calculation is simpler if the voltage is on the y-axis).
Measuring current and voltage
All of the above is fine in principle. Yet novices find the measurements quite difficult in practice. They have problems connecting the circuit because they get confused between measuring the current
that flows through a device, and the voltage across it. It’s crucial to understand the difference between the two, and I suspect the modern multimeter adds to the confusion.
The ammeter reads the current running through the resistor while the voltmeter reads the voltage across it. A plot of voltage vs current gives a measurement of the resistance
When I was a student, the current was measured by passing the current through an ammeter (marked A in the diagram), an analog device with a nice big dial calibrated in amps or milliamps. The voltage
across the resistor was measured by connecting a different instrument, the voltmeter, across the terminals of the resistor; this voltmeter was a separate meter with a dial calibrated in volts (marked
V in the diagram). So an ammeter was always connected in series with the resistor/device, while the voltmeter was always connected across it (in parallel).
Current is measured by passing it through the ammeter (L) while voltage is measured by connecting across the voltmeter (R)
Nowadays, identical instruments are used for both; to measure current, one passes the current through the terminals marked ‘current’ of a multimeter, and the main dial on the meter is switched to the
amp scale. To measure voltage, one connects the ends of the resistor across the terminals marked ‘voltage’ on an identical multimeter, and the dial is switched to volts. It sounds simple, but it’s
easy to connect to the wrong terminals, getting no readings or blowing the fuse in the meter. More subtly, I think the clever circuity inside the multimeter hides the fact that current goes through
while voltage drops across. All in all, I suspect students would understand circuits better if we went back to separate instruments for measuring current and voltage….
The mysterious multimeter. To measure current, leads are connected to the sockets marked ‘common’ and ‘amps’; to measure voltage, one connects to the sockets marked ‘common’ and ‘voltage’.
1. If a 12-V voltage is applied across a resistor of 15 kΩ, what current flows in the circuit? How many electrons per second does this current represent? (Ans: 0.8 mA, 5.0 x 10^15 electrons)
2. What happens to the current if one end of the resistor accidentally touches the other? (Ans: the circuit resistance drops almost to zero and the current becomes very large – don’t try this in the lab!)
3. Ohm’s law is a misnomer – it is not a universal law of nature but simply a property of some materials (many materials have a nonlinear response to voltage, including your cat).
4. It might seem from Ohm’s law that a material with zero resistance can give infinite current! No such materials are known; the relation is simply not valid for these materials. However, some
materials have extremely low resistance at very low temperatures, known as superconductors. A good application of superconductivity can be found at the Large Hadron Collider, where protons are guided
around the ring by magnets made of superconducting material: this reduces power consumption enormously but the snag is that the entire accelerator has to be kept at extremely low temperatures during
the experiments.
Filed under Teaching
RTE, NASA and a WARP drive
On Friday, I got a call from Mooney Goes Wild , the daily science programme on Irish national radio, asking me to participate in an interview concerning NASA’s recent interest in creating a WARP
drive for space travel. I’d heard this interesting story over Christmas and I like science on the radio, so it was fun to look up a few details and take part in the discussion.
Starship Enterprise of Star Trek uses a warp drive to traverse the immense distances of outer space
The live interview took place that very afternoon, right in the middle of our College Exam Boards (those weighty meetings when lecturers come together with external examiners to decide which students
pass and which don’t). Our current physics extern, Professor Peter Mitchell of UCD, taught me as a student, so we had fun discussing the NASA project over lunch.
In the event, the interview was very interesting; I thought the RTE panel of Olan Mc Gowan, Eanna ni Lamhna, Richard Collins and Terry Flanagan asked great questions and we all enjoyed ourselves.
Below is the Q&A script I prepared in advance (I always run up a draft script as it helps me organize my thoughts and it provides interviewers with a jumping-off point). The panel’s questions went a
good deal further, you can hear a podcast of the interview here.
Artist’s impression of the NASA experiment; the vacuum ring causes space behind the object to expand, propelling it forwards
We recently came across a story that NASA has begun work on the development of a WARP drive, a device that would allow spaceships to travel faster than light. Such an engine could in principle
transport a spacecraft to the most distant stars in a matter of weeks, but seems the stuff of science fiction. We contacted Dr Cormac O Raifeartaigh, a physicist at Waterford Institute of
Technology, to get his opinion on this story…
PANEL: First of all, what is a warp drive?
COR: It’s the word used for a hypothetical engine that could drive a spacecraft by distorting or ‘warping’ space. In principle, this could allow the ship to travel faster than the speed of
light, taking a shortcut to reach remote galaxies in hours instead of millions of years! (The device turns up in science fiction in order to enable people to get from one galaxy to another
without dying of old age on the way…even travelling to a nearby planet takes several years).
PANEL: How is it supposed to work? I thought faster-than-light travel was supposed to be impossible?
COR: That’s right. According to Einstein’s theory of relativity, no material body can reach the speed of light. If it comes close to this speed, the body gets heavier and heavier (ever harder to accelerate), and it cannot match the speed of something with no mass (light). There is a lot of evidence to suggest that this is exactly what happens; it’s amazing to see particles like protons accelerated at facilities
like the Large Hadron Collider up to speeds like 99.99% of the speed of light, but never quite reaching nature’s speed limit.
PANEL: So, how does the warp drive work then?
COR: Another prediction of relativity is that space and time are not fixed, but affected by motion and by gravity. For example, there is a huge amount of evidence that the space of our universe
is continually expanding. In principle, a patch of space can move at any speed; if you could somehow warp a bubble of space around an object (or spaceship), then that object would travel at the speed set by the distortion.
PANEL: Has this mad idea been around for a while?
COR: Yes, in principle. The problem is that the energy required to make that bubble of warped space is far greater than any energy available. What’s new is that physicist Harold White at NASA
thinks he can reduce the energy required, with a clever design; the object (spaceship) is surrounded by a thin vacuum ring of a special shape that causes the space just behind the spaceship to
expand, and just in front to contract; the difference propels the spaceship very fast indeed! Of course that’s just the theory..
PANEL: Do you think it will work?
COR: No, I doubt it, even with objects on the atomic scale. However, we will learn a lot by trying, there’s nothing wrong with the principle. For example, many cosmologists believe that our
whole universe expanded at speeds far greater than light during the first instant (the theory of cosmic inflation), before settling down to today’s more sedate expansion. But as regards
investment, I wouldn’t put any money in ‘warp drive’ shares just yet!
Filed under Public lectures, Science and society
End of semester
This week is one of my favourites in the college timetable. The teaching semester finished last Friday and the hapless students are now starting their Christmas exams. It’s time to empty out the
teaching briefcase and catch up on research…
Examtime in college
I recently compiled a list of this semester’s research and outreach and was pleasantly surprised – three conference presentations, two academic papers and eight public lectures, not to mention a
couple of science articles and book reviews in The Irish Times (see here for presentations and here for articles).
All of this is on top of an 18-hour teaching week, which adds up to a lot of late nights. I’ve been arguing for years that the workload in the Institutes of Technology should be more flexible; it’s
very difficult to do any meaningful research if you’re teaching 18 hours a week. Another challenge is that most lecturers in the IoT sector are 3-4 to an office, with consequent staff interactions,
phone calls and students coming to the door. As a result, a great many lecturers simply stop doing research, which is a terrible waste and hardly ideal for a college that teaches to degree level and
beyond. I often think that, far from enhancing ‘productivity’, work practices in the IoT sector militate strongly against good teaching and research at third level.
In my case, I stay in college most evenings until 9 pm. That said, I enjoy the research – as I say to my students, if you find a job you truly like, you’ll never work a day in your life!
I’m particularly pleased with my recent paper on the discovery of the expanding universe. It’s my first foray into the history of cosmology, and it has already got quite a bit of attention, thanks
to a very nice conference in Arizona. I very nearly didn’t go to this conference because of teaching commitments; now I’m glad I did as it was a lot of fun and the paper has opened quite a few doors.
These days, I turn down far more opportunities than I accept, it may finally be time to consider an academic move.
Slipher’s telescope at the Lowell Observatory in Flagstaff, Arizona
Meanwhile, rumours continue to circulate in the media concerning the prospect of our college being turned into a technological university. This would certainly be a welcome development, especially if
it meant reduced teaching for those engaged in research, but I’d be quite surprised. WIT has been very successful at attracting research funding in certain areas, but research activity per academic
is quite low in our college in comparison with the university sector. I don’t see how we could qualify as a university without bringing in quite a lot of new research-active staff, a buy-in for
which there is no money whatsoever; hopefully I’m wrong on this.
Filed under Teaching, Third level
Mozart and the stars
I had a lovely evening on Thursday playing Mozart piano trios in Castalia Hall in Ballytobin, Co. Kilkenny. My fiddle doesn’t come out of retirement that often these days but I always try and make an
exception for chamber music, especially if it’s Mozart. (Note: a piano trio means a piano, violin and cello playing together, not three pianos!)
Castalia hall, Ballytobin
The hall was a big surprise; tucked away at the foot of a large property belonging to the Camphill community of Ballytobin, it is a beautiful building, highly original in style and with a fantastic
acoustic. The Camphill community was set up as a therapeutic centre for children and adults with physical or mental disabilities and there is a really nice atmosphere there. Several of the young
guests wandered into the hall while we were playing, clearly well used to visiting musicians.
And what musicians. One of the great advantages of teaching at Waterford Institute of Technology is that I occasionally get to play music with renowned harpsichordist Malcolm Proud. Malcolm lectures
in music in our college and just happens to be a world authority on period music. When he’s not away on solo recitals around Europe or touring with the Irish Baroque Orchestra, he likes to relax by
playing chamber music of a different era – which is where amateurs like me come in.
Malcolm at the piano at Castalia hall
Of course, I’m not the only violin-playing physicist; it’s well-known that Einstein played the violin to quite a serious level. Actually, an extraordinary number of mathematicians and physicists play classical music; I’ve often wondered about the connection. It’s hard to judge just how good a player Einstein was from his biographers; however, he must have been reasonably competent as he performed celebrity chamber concerts with outstanding musicians such as Rubinstein. I read somewhere that Mozart was his favourite composer, that’s something else we have in common.
Einstein in concert with a piano trio
On Thursday, we played through the G major (K496) and C major (K548) trios. Although I have played the piano quartets many times, the trios were new territory for me. Looking at the score in
advance, I thought the C major would be the more challenging of the two – in fact it was absolutely beautiful to play, a lovely opening movement followed by a fantastic slow movement. Another Mozart
discovery, can’t wait to try the other five trios.
After the session, we climbed up the tower at Castalia to inspect the new observatory. John Clarke, the director of the community, had the tower built with a view to mounting a telescope on top; he
has done a fantastic job, it’s a superb location for an observatory. Then we all went back to Malcolm’s house to inspect a telescope that had been misbehaving. Our cellist Ian McShane is a keen astronomer and it took him about three minutes to find the problem, whereupon we all had a good look at the beautiful night sky one sees on clear nights in rural Ireland.
Filed under Music
September conference: origins of the expanding universe
A conference next month will celebrate the pioneering work of the American astronomer Vesto Slipher. On September 13-15th, the Lowell Observatory in Flagstaff, Arizona, will host the conference The
Origins of the Expanding Universe to commemorate the hundredth anniversary of Slipher’s measurements of the motion of the distant nebulae; see here for the conference website.
As readers of this blog will know, Slipher observed that the light from many of the distant nebulae was redshifted, i.e. shifted to lower frequency than normal. This was the first indication that
the distant nebulae are moving away at significant speed and it was an important hint that some nebulae are in fact distinct galaxies far beyond our own Milky Way (see cosmology 101 section). A few
years later, Edwin Hubble combined Slipher’s redshift results with his own measurements of distance to establish that there is a linear relation between the distance to a galaxy and its rate of
recession; the relation became known as Hubble’s law although it probably should be called the Hubble/Slipher law.
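To put rough numbers on the relation (illustrative values only: a modern Hubble constant of about 70 km/s per megaparsec, and a typical Slipher-era redshift):

```python
c = 3.0e5    # speed of light, km/s
H0 = 70.0    # Hubble constant, km/s per megaparsec (modern estimate)
z = 0.003    # a typical redshift of a Slipher-era nebula
v = c * z    # recession velocity in the low-redshift approximation v = c*z
d = v / H0   # distance inferred from Hubble's law v = H0*d
print(f"v = {v:.0f} km/s, d = {d:.1f} Mpc")  # v = 900 km/s, d = 12.9 Mpc
```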
The Hubble/Slipher discovery of the recession of the galaxies was a key step along the road to the discovery of the expanding universe, but the two are not quite the same thing; for the latter, one
needs to situate the phenomenon in the context of the general theory of relativity (according to relativity, the galaxies appear to be moving away from one another because space is expanding). The
Belgian physicist Georges Lemaitre was the first to make the connection between the relativistic universe and the observed recession of the galaxies, although his contribution is often overlooked. A
major thrust of the conference is to explore exactly such distinctions; looking at the lineup, it looks like an intriguing mixture of cosmologists, astronomers and historians.
All this is highly relevant to my yet-to-be-completed book so after a long, wet summer at WIT, I’m off to sunny Arizona next month! My own talk is titled ‘Who discovered the expanding universe?’ and
I intend to compare and contrast the contributions of various pioneers such as Slipher, Hubble, Humason, Friedmann and Lemaitre. You can see a list of speakers and abstracts for the talks here.
Many thanks to Peter Coles of In the Dark for drawing the conference to my attention.
Going on holiday just as classes start back? Nice job – Ed.
Sigh. I haven’t had a day off all summer and this is not a holiday.
Filed under Astronomy, Cosmology (general), Third level | {"url":"http://coraifeartaigh.wordpress.com/tag/third-level/","timestamp":"2014-04-16T10:22:51Z","content_type":null,"content_length":"81540","record_id":"<urn:uuid:2e73ef26-ed30-4662-8322-5b834f5ed930>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00343-ip-10-147-4-33.ec2.internal.warc.gz"} |
A common basis for similarity and dissimilarity measures involving two strings
- Pattern Recognition
"... We study the problem of recognizing a string Y which is the noisy version of some unknown string X * chosen from a finite dictionary, H. The traditional case which has been extensively studied
in the literature is the one in which Y contains substitution, insertion and deletion (SID) errors. Altho ..."
Cited by 12 (2 self)
We study the problem of recognizing a string Y which is the noisy version of some unknown string X* chosen from a finite dictionary, H. The traditional case which has been extensively studied in the literature is the one in which Y contains substitution, insertion and deletion (SID) errors. Although some work has been done to extend the traditional set of edit operations to include the straightforward transposition of adjacent characters [14], the problem is unsolved when the transposed characters are themselves subsequently substituted, as is typical in cursive and typewritten script, in molecular biology and in noisy chain-coded boundaries. In this paper we present the first reported solution to the analytic problem of editing one string X to another, Y, using these four edit operations. A scheme for obtaining the optimal edit operations has also been given. Both these solutions are optimal for the infinite alphabet case. Using these algorithms we present a syntactic pattern recognition ...
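For orientation, the standard dynamic programme for SID errors extended with plain adjacent transposition (unit costs) is sketched below; the generalised problem treated in the paper, where transposed characters may subsequently be substituted, requires the algorithm presented there.

```python
def edit_distance(x, y):
    """Substitution/insertion/deletion distance with simple adjacent
    transposition (unit costs); the classical O(mn) dynamic programme."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of x[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
            if i > 1 and j > 1 and x[i - 1] == y[j - 2] and x[i - 2] == y[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

print(edit_distance("recieve", "receive"))  # 1: one adjacent transposition
```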
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996
"... Marzal and Vidal [8] recently considered the problem of computing the normalized edit distance between two strings, and reported experimental results which demonstrated the use of the measure to
recognize handwritten characters. Their paper formulated the theoretical properties of the measure and de ..."
Cited by 11 (1 self)
Marzal and Vidal [8] recently considered the problem of computing the normalized edit distance between two strings, and reported experimental results which demonstrated the use of the measure to
recognize handwritten characters. Their paper formulated the theoretical properties of the measure and developed two algorithms to compute it. In this short communication we shall demonstrate how
this measure is related to an auxiliary measure already defined in the literature -- the inter-string constrained edit distance [10,11,15]. Since the normalized edit distance can be computed
efficiently using the latter, the analytic and experimental results reported in [8] can be obtained just as accurately, but more efficiently, using the strategies presented here. I. PROBLEM STATEMENT
In the comparison of text patterns, phonemes and biological macromolecules a question that has attracted much interest is that of quantifying the dissimilarity between strings. A review of such
distance measures and ...
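For reference, the classical unit-cost edit distance that underlies these measures is computed with the standard dynamic program sketched below in Python; this sketch is mine and not from the cited papers, which extend the operation set and add normalization or constraints.

    def edit_distance(x, y):
        # d[i][j] = cost of editing x[:i] into y[:j]
        m, n = len(x), len(y)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i                      # delete all of x[:i]
        for j in range(n + 1):
            d[0][j] = j                      # insert all of y[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if x[i - 1] == y[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[m][n]

    print(edit_distance("kitten", "sitting"))  # prints 3

Note that the normalized edit distance of Marzal and Vidal minimizes cost divided by the length of the editing path, which is not the same as dividing the value above by the string lengths; that is why it is computed via the constrained edit distance rather than by a single post hoc division.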
"... In this paper we present a foundational basis for optimal and information theoretic syntactic pattern recognition. We do this by developing a rigorous model, M * , for channels which permit
arbitrarily distributed substitution, deletion and insertion syntactic errors. More explicitly, if A is any ..."
Cited by 4 (2 self)
In this paper we present a foundational basis for optimal and information theoretic syntactic pattern recognition. We do this by developing a rigorous model, M * , for channels which permit
arbitrarily distributed substitution, deletion and insertion syntactic errors. More explicitly, if A is any finite alphabet and A * the set of words over A, we specify a stochastically consistent
scheme by which a string U ∈ A* can be transformed into any Y ∈ A* by means of arbitrarily distributed substitution, deletion and insertion operations. The scheme is shown to be Functionally Complete
and stochastically consistent. Apart from the synthesis aspects, we also deal with the analysis of such a model and derive a technique by which Pr[Y|U], the probability of receiving Y given that U
was transmitted, can be computed in cubic time using dynamic programming. One of the salient features of this scheme is that it demonstrates how dynamic programming can be applied to evaluate
quantities involv...
- IEEE Transactions on Systems, Man and Cybernetics , 1997
"... A typical syntactic pattern recognition (PR) problem involves comparing a noisy string with every element of a dictionary, H. The problem of classification can be greatly simplified if the
dictionary is partitioned into a set of sub-dictionaries. In this case, the classification can be hierarchical ..."
Cited by 3 (0 self)
A typical syntactic pattern recognition (PR) problem involves comparing a noisy string with every element of a dictionary, H. The problem of classification can be greatly simplified if the dictionary
is partitioned into a set of sub-dictionaries. In this case, the classification can be hierarchical -- the noisy string is first compared to a representative element of each sub-dictionary and the
closest match within the sub-dictionary is subsequently located. Indeed, the entire problem of sub-dividing a set of strings into subsets where each subset contains "similar" strings has been
referred to as the "String Taxonomy Problem". To our knowledge there is no reported solution to this problem (see footnote on Page 2). In this paper we shall present a learningautomaton based
solution to string taxonomy. The solution utilizes the Object Migrating Automaton (OMA) whose power in clustering objects and images [33,35] has been reported. The power of the scheme for string
taxonomy has been demons...
- In ICSC , 1994
"... . We consider a problem which can greatly enhance the areas of cursive script recognition and the recognition of printed character sequences. This problem involves recognizing words/strings by
processing their noisy subsequences. Let X * be any unknown word from a finite dictionary H. Let U be a ..."
Cited by 1 (0 self)
. We consider a problem which can greatly enhance the areas of cursive script recognition and the recognition of printed character sequences. This problem involves recognizing words/strings by
processing their noisy subsequences. Let X * be any unknown word from a finite dictionary H. Let U be any arbitrary subsequence of X * . We study the problem of estimating X * by processing Y, a
noisy version of U. Y contains substitution, insertion, deletion and generalized transposition errors -- the latter occurring when transposed characters are themselves subsequently substituted. We
solve the noisy subsequence recognition problem by defining and using the constrained edit distance between X ∈ H and Y subject to any arbitrary edit constraint involving the number and type of edit
operations to be performed. An algorithm to compute this constrained edit distance has been presented. Using these algorithms we present a syntactic Pattern Recognition (PR) scheme which corrects
noisy tex...
- In Advances in Structural and Syntactic Pattern Recognition , 1996
"... In this paper we present a foundational basis for optimal and information theoretic syntactic pattern recognition. We do this by developing a rigorous model, M * , for channels which permit
arbitrarily distributed substitution, deletion and insertion syntactic errors. More explicitly, if A is any ..."
Cited by 1 (0 self)
In this paper we present a foundational basis for optimal and information theoretic syntactic pattern recognition. We do this by developing a rigorous model, M * , for channels which permit
arbitrarily distributed substitution, deletion and insertion syntactic errors. More explicitly, if A is any finite alphabet and A * the set of words over A, we specify a stochastically consistent
scheme by which a string U ∈ A* can be transformed into any Y ∈ A* by means of arbitrarily distributed substitution, deletion and insertion operations. The scheme is shown to be Functionally Complete
and stochastically consistent. Apart from the synthesis aspects, we also deal with the analysis of such a model and derive a technique by which Pr[Y|U], the probability of receiving Y given that U
was transmitted, can be computed in cubic time using dynamic programming. Experimental results which involve dictionaries with strings of lengths between 7 and 14 with an overall average noise of
39.75 % demons... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2702183","timestamp":"2014-04-21T09:08:19Z","content_type":null,"content_length":"28437","record_id":"<urn:uuid:f1ef186e-e141-4345-9139-635806cc6d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relationship between noise power, SNR and bandwidth
17th April 2007, 14:59
snr and bandwidth parameters
Could somebody please explain the relationship between noise power (dBc/Hz), bandwidth (Hz) and SNR for a linear system.
Note: the noise power is quoted as a negative number
17th April 2007, 15:21
relation between snr and bandwidth
your question is vague. try rephrasing it.
Are you looking for Shannon's channel capacity theorem?
17th April 2007, 15:57
bandwith snr
Sorry, I'll rephrase my question.
I'm working on some software that takes an audio input from the PC line input and applies Gaussian noise of a given power to achieve a desired SNR, then passes the output to the soundcard line output.
A calibration routine measures the input signal power. The bandwidth is set by a filter on the soundcard output.
On the software interface are three parameters that are intrinsically related, these being bandwidth, SNR and noise power. Changing one alters the other two.
Thus I'm trying to understand the relationship between these parameters.
Hope this clarifies my question.
On a separate note, I am also slightly thrown by a reference in literature I'm reading that suggests that a 6dB difference exists between a "coherent" signal source and an "incoherent" noise source.
I think the term coherence used here refers to time, i.e. a coherent signal source has strong correlation between time intervals, whereas a true noise source is always uncorrelated and thus incoherent.
Is my understanding of the (in)coherent term correct, and is there any explanation of the 6dB difference?
Thank you
18th April 2007, 06:52
Re: Relationship between noise power, SNR and bandwidth
Signal Bandwidth - B_S
Noise Bandwidth - B_N
Filter Bandwidth - B_F
Signal : P_S (Over its Bandwidth B_S)
Noise : P_N (Over its Bandwidth B_N)
Noise varies with frequency (in case of Coloured Gaussian).
Effect of Changing one on the other:
1) Bandwidth:
As the bandwidth is reduced to say B_F1, both signal and noise power will reduce, as power above B_F1 is reduced to zero.
Hence signal power and noise power will both reduce, and hence SNR would also vary (increase or decrease).
2) Noise Power :
As noise power is reduced to say P_N1, some of the noise signal would reduce, effectively zeroing its power within parts of the bandwidth.
Thus, apart from increasing SNR, reducing noise power can also reduce the effective bandwidth (it can never increase it).
3) Signal Power:
Signal Power follows same explanation of noise power too.
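In equation form, for a flat (white) noise floor the three quantities are tied together by integrating the noise density over the bandwidth: N_total [dBm] = N0 [dBm/Hz] + 10*log10(B), so SNR [dB] = P_signal [dBm] - N0 - 10*log10(B). If the density is quoted in dBc/Hz (relative to the carrier, hence a negative number), the signal term drops out and SNR = -(N0 + 10*log10(B)). A small Python sketch (variable names are mine, not from this thread):

    import math

    def snr_db(signal_power_dbm, noise_density_dbm_per_hz, bandwidth_hz):
        # Total noise power over the bandwidth, assuming a flat noise floor
        noise_power_dbm = noise_density_dbm_per_hz + 10 * math.log10(bandwidth_hz)
        return signal_power_dbm - noise_power_dbm

    # Example: 0 dBm signal, -120 dBm/Hz floor, 20 kHz audio bandwidth -> ~77 dB
    print(snr_db(0.0, -120.0, 20e3))
    # Halving the bandwidth buys about 3 dB of SNR for white noise.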
Dont forget to press Help button | {"url":"http://www.edaboard.com/printthread93820.html","timestamp":"2014-04-19T12:30:40Z","content_type":null,"content_length":"7140","record_id":"<urn:uuid:e0cb6f25-03d8-4137-b7d3-0b31049717f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
OPNML/MATLAB Facilities
READ_GRIB for MATLAB : The MATLAB Grib Edition1 reader has been moved to the Renaissance Computing Institute. Click here to go there now.
This is the access page for the OPNML/MATLAB5 routines for handling output files from the FUNDY and QUODDY series of shallow-water FEM models. Documentation including installation, demonstrations,
and function lists are available through the OPNML User's Guide on the OPNML Homepage. The OPNML/Matlab facilities consist of command-line functions to plot vector-fields, scalar contours, domain
geometry, and drogue paths.The functionality of the routines is restricted to 2-D fields, typically as horizontal/vertical slices of a 3-D field or vertically averaged resultsfrom a 3-D model. Some
3-D data visualization is avaviable, but currently is better handled using AVS. A Graphical User Interface (GUI) to the command-line OPNML/Matlab functions is avaliable called FEDAR (Finite Element
DAta Reviewer), and is described below. Most OPNML/Matlab codes rely on MATLAB's latest language features (namely structures) and thus require MATLAB 5.1 or later.
Go Directly To:
The principal difference between the MATLAB4 and MATLAB5 versions of the OPNML routines resides in MATLAB's new language features, particularly structures. Formerly, the function LOADGRID returned
separate arrays for the FEM element list, the horizontal node coordinates, bathymetry, etc. ([e,x,y,z,b]=loadgrid(domainname);). Other functions were then passed the information needed, as in plotting
the FEM domain boundary with PLOTBND (plotbnd(x,y,b)). The MATLAB5 structure capability greatly simplifies the loading and passing of grid parts to other routines. In MATLAB5, the above calls now
look like:
In the first statement, the LOADGRID function returns the element, node, bathymetry, and boundary information packaged into a structure called gridstruct. gridstruct has the following components
(type "gridstruct" at the MATLAB prompt (>>)),for example.
gridstruct =
name: 'domainname'
e: [727x3 double] % Element list
x: [444x1 double] % Node x-coord
y: [444x1 double] % Node y-coord
z: [444x1 double] % Node depth
bnd: [161x2 double] % Domain boundary list
ar: [727x1 double] % Element areas
The next statement calls the boundary plotter and passes it the ENTIRE structure, from which PLOTBND extracts the information it needs. Below is a list of OPNML/MATLAB routines that are provided for
backward usage; they use the calling syntax from MATLAB4 and are post-pended with a "4". The user will need to determine if it's easier to replace MATLAB4 syntax with the new structure-based features
or temporarily append "4" to certain routines. But these should not be relied upon to exist forever; they are old and in the way.
belint4 drawelems4 el_areas4 genbel4
lcontour4 loadgrid4 plotbnd4
More details are given in the OPNML User's Guide Demo.
FDCONT Version 1.2
FDCONT (current version 1.2) is a contour/vector plotting toolkit for FEM node-based data. It requires MATLAB5+, and the usual OPNML toolbox. Click here to download a tarball. It will untar into
a directory called FDCONT_1.2. Place this directory alongside FDCONT_1.1, in $OPNML. Then, edit the opnmlinit.m function to point to the new directory. For demo and documentation, click here.
VIZICQ4 Version 1.2
VIZICQ4 is a GUI-based .icq4 file visualizer written in MATLAB for the GLOBEC/OSSE workgroup. It is in Beta development (1.2), but can be retrieved here. VIZICQ4 will eventually support
isosurfaces using new MATLAB5.3 functionality, and will also read in any standard model output file type. Also available from the FCAST/MATLAB pages. README,TarBall, ScreenShot
BUILDIND is a GUI-based .ind file generator written in MATLAB for the GLOBEC/OSSE workgroup. It is in Beta development, but can be retrieved here. Also available from the FCAST/MATLAB pages.
FEDAR3, the Finite Element DAta Reviewer, is on the verge of being released. For a screenshot of FEDAR3's look, click here. Retrieve beta-test tarball.
New OPNML Functions
These functions are not yet in the standard distribution. They are Beta standalone routines that need hammering on.
read_grib V1.4.0 (20 Sep 2005) a WMO GRiB file reader
read_grib is a World Meteorological Organization (WMO) GRiB file reader. It allows the reading of the WMO international exchange GRiB formatted data files into MATLAB. It has various input
modes, including extraction of individual GRiB records by record number, extraction by parameter name (which is not unique), and generating an inventory of the GRiB file contents. It has been
tested on the following standard model output files; AVN,ETA,RUC,ECMWF. The functionality is more complete than V1.2. The default GRiB parameter table used is the NCEP Operational table.
( http://wesley.wwb.noaa.gov). See http://www.scd.ucar.edu/dss/docs/gribdoc/ for GRiB format documentation.
drog2ddt a 2-D Drog tracker for MATLAB.
drog2ddt is a 2-D Drog tracker for MATLAB. It provides for the tracking of particles in a 2-D flow field directly in MATLAB. The integrator is 4th-order Runge-Kutta. The drogue elimination
criterion is not very sophisticated; if a drogue hits a boundary, it's gone. drog2ddt is not intended for production particle tracking, but rather for exploratory work directly in MATLAB. See
http://www.opnml.unc.edu/Projects/EL9905/Misc/misc.html for some example applications.
drog2ddt.m function.
pca.m function.
Retrieve the GNU ZIPPED OPNML/MATLAB5 Tarball
Retrieve the UNIX compressed OPNML/MATLAB5 Tarball
Retrieve the OPNML/MATLAB5 Tarball
See the OPNML User's Guide for Installation instructions.
Retrieve mex-file binaries for certain architectures. The tar file should be placed in the subdirectory MEX, relative to the installation directory of the OPNML/MATLAB5 tarball and untarred in place.
Here is access to the collection of OPNML routines that are not distributed in any of the above toolboxes. It is hoped that some of the routines may be of use to others, but they are provided on an
AS IS basis.
OPNML Local Routines MAIL: brian_blanton@renci.org
Last modified: 09 Sep 2002 | {"url":"http://www.opnml.unc.edu/OPNML_Matlab/","timestamp":"2014-04-16T10:31:57Z","content_type":null,"content_length":"14110","record_id":"<urn:uuid:6c39e6f0-7e00-4dd4-80bb-f4865b97970e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applied Computational Intelligence and Soft Computing
Volume 2012 (2012), Article ID 742461, 10 pages
Research Article
Power Load Event Detection and Classification Based on Edge Symbol Analysis and Support Vector Machine
^1School of DCIT, University of Newcastle, Callaghan, NSW 2308, Australia
^2ICT Centre, Commonwealth Scientific and Industrial Research Organization, Clayton South, VIC 3169, Australia
^3Energy Technology Division, Commonwealth Scientific and Industrial Research Organization, Clayton South, VIC 3169, Australia
Received 30 April 2012; Revised 8 August 2012; Accepted 18 October 2012
Academic Editor: F. Morabito
Copyright © 2012 Lei Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Energy signature analysis of power appliance is the core of nonintrusive load monitoring (NILM) where the detailed data of the appliances used in houses are obtained by analyzing changes in the
voltage and current. This paper focuses on developing an automatic power load event detection and appliance classification based on machine learning. In power load event detection, the paper presents
a new transient detection algorithm. By turn-on and turn-off transient waveform analysis, it can accurately detect the edge point when a device is switched on or switched off. The proposed load
classification technique can identify different power appliances with improved recognition accuracy and computational speed. The load classification method is composed of two processes including
frequency feature analysis and support vector machine. The experimental results indicated that the incorporation of the new edge detection and turn-on and turn-off transient signature analysis into
NILM revealed more information than traditional NILM methods. The load classification method has achieved a recognition rate of more than ninety percent.
1. Introduction
Nowadays, multifarious classification techniques are widely used in many signal-processing areas, such as face recognition, road traffic analysis, medical imaging, weather forecasting, and so forth [1,
2]. It has been considered whether this kind of technique could serve the purposes of power load monitoring, which is a process for obtaining what appliances are used in the house as well as their
individual energy consumption by analyzing changes in the voltage and current. In particular, information about the operating conditions of consumers (such as what appliances are operated and what
state they are in) is useful for demand estimation and prediction. In addition, the operating conditions of electrical appliances in modern society clearly reflect consumers’ lifestyles and behavior
patterns [3]. Research is being performed on the use of such information for circuit diagnoses, safety confirmation systems for elderly persons living alone [4], demand management, and optimization [
5–10], and for other uses [11].
There are mainly two classes of approaches in power monitoring algorithms: intrusive appliance load monitoring (IALM) and nonintrusive appliance load monitoring (NIALM) [12]. In order to
measure consumption, IALM distributes direct sensors at each device or appliance. Although conceptually straightforward and potentially highly accurate, direct sensing is often expensive due to time
consuming installation and the requirement for one sensor for each device or appliance. In response to limitations with the direct sensing approach, researchers have explored methods to infer energy
usage via a single sensor.
In contrast to the direct sensing methods, the standard NIALM configuration includes a sole sensor set to measure current and voltage, as well as a processing algorithm for determining the status of
various devices [13]. It was firstly studied by Hart of MIT in the early 1990s with funding from the Electric Power Research Institute [12]. Since then, different strategies for NIALM have been
developed. One of the important approaches on classification is to apply support vector machine (SVM) [14–18] in the nonintrusive appliance load monitoring. It is imperative for power system research
field to evaluate the SVM on this task from a practical point of view. As the preprocess for finding out the features for SVM, transient detection also plays an important role in NIALM.
In order to decompose the total loads into their components, Bijker et al. employed different power levels, or step changes, for the purpose [19]. Chang et al. used DWT and a new coreless-HCT
method to detect the power events [20]. Nonetheless, they only aimed to detect the events of appliances with complicated features, such as motors, microwave ovens, and thyristor rectifiers. In 2002,
Onoda et al. used the data proposed in [21] for the purpose of estimating the state of household electric appliances. They compared different types of SVMs obtained by choosing different kernels.
They reported results of polynomial kernels, radial basis function kernels, and sigmoid kernels. All results for the three different kernels achieved almost the same error rates. However, in the
estimation of the state of household electric appliances, the results for the three different kernels achieved different error rates. They also compared different capacities of SVMs obtained by
choosing different regularization constants and kernel parameters experimentally. The results showed that capacity control is as important as the choice of kernel functions. Kadouche and his
colleagues later presented [22] their ongoing work on the house occupant prediction issue based on daily life habits in smart houses. Most of their work was based on supervised learning techniques.
They used SVM to build a behavior classification model for learning the user's habits, and analyzed the publicly available dataset from the Washington State University Smart Apartment Test-bed.
Particularly, they evaluated the grooming, having breakfast, and bed to toilet activities [23]. Their experimental results showed that the user can be recognized with a high precision which means
that each user has his own way to perform activities. As future work, they were studying the users' patterns that allow a person to be discriminated and recognized among a group performing multiple
activities in the same environment, without using intrusive technologies.
Grinblat and his colleagues presented a new method for generating adaptive classifiers capable of learning concepts that change with time: the time-adaptive support vector machine (TA-SVM)
[24]. The basic idea of TA-SVM is to use a sequence of classifiers, each one appropriate for a small time window but, in contrast to other proposals, with all the hyperplanes learned in a global way.
Starting from the solution of independent SVMs, they showed that the addition of a new term in the cost function (which penalizes the diversity between consecutive classifiers) produces in fact a
coupling of the sequence. Once coupled, the set of SVMs acts as a single adaptive classifier. They evaluated different aspects of the TA-SVM using artificial drifting problems. In particular, they
showed that changing the number of classifiers and the coupling constant can effectively regularize the sequence of classifiers. They compared TA-SVM with other state-of-the-art methods in three
different settings: estimation, prediction, and extrapolation, including problems with small datasets, high-dimensional input spaces, and noise. TA-SVM showed in all cases to be equivalent to or
better than the other methods. Even for the most unfavorable situation for TA-SVM, that is, the sudden changes of the dataset, their new method showed a very good performance. However, the limitation
of this method may be reflected in the long drifting time and the requirement of additional hardware. More recent research by Liang's group [25] combined various features including current
waveform (CW), active/reactive power (PQ), harmonics (HAR), instantaneous admittance waveform (IAW), instantaneous power waveform (IPW), eigenvalues (EIG), and switching transient waveform (STW). The
results of their research provided a higher degree of recognition precision, but the algorithm requires mountains of work on collecting and processing appliance signatures.
In this paper, data collected in the real world are used, and the important issues in applying SVM to the power system research field are clearly analyzed. The remainder is organized as follows.
Section 2 gives the details of the new transient event detection process named ESA, the reasons why we choose SVM as our method, and the main process of load classification. The details of data
collecting and preprocessing along with the experiment results are shown in Section 3. Finally, conclusions are made in Section 4.
2. Power Load Events Detection and Classification
It is known that although NIALM can be based on different techniques, it has several common principles [26]. (i) Load feature classification: specific appliance features, or signatures, need to be selected
and mathematically characterized. (ii) Mechanism: a hardware installation (sensor and data acquisition system) that can detect the selected features is required. (iii) Decomposition: a mathematical
algorithm detects the features in the overall signal and outputs the results.
Based on this theory, Figure 1 describes the process of this research. The mechanism collects the whole operation of a circuit; then the appropriate algorithms draw out all the electrical events in
the recording. After successfully locating a "load switch on" point, it is feasible to classify that load, and afterwards to seek the next "load switch on" point while excluding fake edges generated by previous
loads. The same procedure subsequently applies to searching for switch-off events.
2.1. Power Load Events Detection Using Edge Symbol Analysis Method
2.1.1. Background
Methods of edge detection have been widely studied in many areas, primarily developed in the discrete cases and have been confined to slightly noisy signals. The regularization or smoothing and
optimal approaches of Canny [27] have led to several efficient continuous operators for noisy and blurred signals. Other advanced methods that consider the Canny criterion have been developed to deal
with noise, uneven illumination, and image contrast. In a marginal way, discrete approaches for regularization have been developed and have improved results by considering the discrete nature of the
signals [28, 29].
Introducing nonlinearity into the global filtering process, as in noisy edge detection, is a marginal yet efficient method of obtaining good performance. Generally, nonlinear filtering is used in a
preliminary regularization stage. Pitas and Venetsanopoulos proposed a class of nonlinear filters that reject additive and impulse noise, while preserving the edges [30]. More recently,
Benazza-Benyahia et al. introduced a nonlinear filter bank leading to a multiresolution approximation of the signal in which discontinuities are well preserved [31]. A nonlinear filter for edge
enhancement, using a morphological filter, has been proposed by Schulze [32]. The author showed that local variation analysis allows enhancing edges corrupted by multiplicative noise. Hwang and
Haddad presented an integrated nonlinear edge-detection-based denoising scheme [33]. A thresholded derivative is computed from two half filters (median for impulse noise, mean for Gaussian noise, and
min-max for uniform noise) and edge detection is used to select the second filtering stage, that is, mean for noise or median for edge points. In the scheme, edge detection could be considered as a
by-product and the optimal performance is obtained only when the correct first filter is selected according to the noise statistic. Based on these theories, Laligant and his colleague propose to
obtain both noise reduction and edge detection by a one-stage nonlinear derivative scheme [34]. The scheme, which consists of combining two polarized differences, yields significant improvements in
signal-to-noise ratio without using regularization or increasing the computational requirements.
2.1.2. The Approach
It is known that three kinds of slopes can be found in all power waveforms: positive, negative, and constant. The edge symbol analysis method (ESA) can therefore find the position of the edge
point according to the sign of the signal area. In this way, the edge point will be located after the transition if the slope is positive; otherwise, when it is negative, the edge point will be
located before the slope. For the sake of detecting these two kinds of edges, two detector filters have to be introduced, one of which is chosen as an antisymmetric linear filter, so that the
detected localization shifts by a pixel depending on the edge's orientation. The two filters are related as described in (1), and, with a simple threshold, their combined responses are used as
below to detect the edge points of the original input signal.
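As a rough illustration of the idea (this sketch is an assumption on my part; the paper's exact filter pair and threshold are defined in its own equations, which are not reproduced here), turn-on and turn-off edges can be separated by the sign of a smoothed difference of the power signal:

    import numpy as np

    def esa_like_edges(p, threshold, w=5):
        # p: sampled power waveform; w: moving-average window length
        smooth = np.convolve(p, np.ones(w) / w, mode="same")
        d = np.diff(smooth, prepend=smooth[0])    # antisymmetric first difference
        on_idx = np.flatnonzero(d > threshold)    # positive slope: switch-on edges
        off_idx = np.flatnonzero(d < -threshold)  # negative slope: switch-off edges
        return on_idx, off_idx

The sign of the response tells whether the edge point should be placed before or after the transition, matching the positive/negative slope cases described above.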
2.2. SVM-Based Power Load Classification
The proposed power classification method is composed of two major stages: first, a frequency feature analysis is applied to the current signal, and then a trained classifier based on SVM is
applied to identify different appliances. The following sections give the details of the approach.
2.2.1. Current Feature Analysis
To save computational resources and improve performance, only current signals were used for frequency analysis with fewer sample points [35]. For example, according to previous experiments, the
accuracy rate of the classification is almost the same or even higher when employing 500 frequency signal points as features instead of 2000 unprocessed current signal points.
After collecting the data of a single device and extracting the current signals, a short-time fast Fourier transform (FFT) of the signal is performed as in (3): $X_k = \sum_{n=0}^{N-1} x_n \omega_N^{nk}$,
where $\omega_N = e^{-2\pi i/N}$ is an $N$th root of unity. An example is shown in Figure 2.
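As a sketch of this feature-extraction step (the paper does not specify its windowing, so the Hann taper and normalization below are my own assumptions):

    import numpy as np

    def current_features(i_t, n_points=500):
        # FFT magnitude features from one steady-state current window;
        # returns the first n_points bins, mirroring the 500-frequency-point
        # feature vector described above.
        tapered = np.hanning(len(i_t)) * i_t        # taper to reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(tapered))     # magnitude spectrum
        spectrum /= spectrum.max() + 1e-12          # normalize scale across devices
        return spectrum[:n_points]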
2.2.2. SVM-Based Classification
SVM, as one of the methods used in NIALM systems, is relatively insensitive to the number of data points, and the classification complexity does not depend on the dimensionality of the feature space [36].
Therefore, it can potentially learn a larger set of patterns and be able to scale better than some other methods. Once the data are classified into two classes, a suitable optimizing algorithm can be
used if necessary for further feature identification, depending on the application.
Support vector machine is a training algorithm for learning classification and regression rules from data. SVM was first introduced by Cortes and Vapnik [37] in the 1990s for classification and has
recently become an area of intense research owing to developments in the techniques and theory coupled with extensions to regression and density estimation. It is based on the structural risk
minimization principle which incorporates capacity control to prevent overfitting. It is a partial solution to the bias-variance trade-off dilemma [38]. It has been widely used in different areas. As
a classifier, given a set of training examples, each marked as belonging to one of two categories, the SVM training algorithm builds a model that predicts whether a new example falls into one
category or the other. Intuitively, an SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as
wide as possible [39]. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
SVM delivers state-of-the-art performance in real-world applications such as text categorization, hand-written character recognition, biosequences analysis, image classification, and so forth. It is
now established as one of the standard tools for machine learning and data mining. The SVM decision function is defined as (4) Here, is the unclassified tested vector, is the support vectors and is
their weights, and is a constant bias. is the kernel function introduced into SVM to solve the nonlinear problems by performing implicit mapping into a high-dimensional feature space [40].
Consider the problem of separating the set of training vectors belonging to two separate classes [41]. with a hyperplane, The kernel function chosen in our algorithm is Gaussian where scalar is
identical for all coordinates.
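As a sketch of this classification stage with an off-the-shelf Gaussian (RBF) kernel SVM (scikit-learn here merely stands in for whichever implementation the authors used; the parameter values and feature dimension are my assumptions):

    import numpy as np
    from sklearn.svm import SVC

    # X: one ~500-point frequency feature vector per steady-state window
    # y: integer appliance labels (kettle, toaster, microwave oven, ...)
    def train_load_classifier(X, y):
        clf = SVC(kernel="rbf", gamma="scale", C=1.0)  # Gaussian kernel, one shared sigma
        clf.fit(X, y)
        return clf

    # clf = train_load_classifier(X_train, y_train)
    # predicted_labels = clf.predict(X_test)  # appliance identity per event window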
3. Experiments
3.1. Data Collecting and Preprocessing
All the data are collected in Commonwealth Scientific and Industrial Research Organization (CSIRO) and a residential house in Newcastle, NSW, Australia. Collecting details are shown in Table 1.
Figure 3 shows the segmentation process in this experiment. Each load signal has three phases: the on phase, the steady state, and the off phase.
3.2. Experiment Result and Discussion
3.2.1. Implementation of ESA
In Figure 4, the overlapped data of Kettle and Toaster are shown as an example. Figure 4(a) is the original overlapped data of kettle and oven. The red figures in Figure 4(b) are the output of edge
detection and the original data. The on and off edges are represented in red color. To clearly observe the resultant edge points, the edge points in Figure 4(b) (i.e., the red curve) are redrawn in
Figure 4(c).
The example of overlapping kettle with microwave oven (Figure 5(a)) and PC (Figure 5(b)) is shown in Figure 5. It can easily find the edges (as the yellow arrows shown in Figure 5) by repeating the
process that mentioned above, and then relocate the transient points in the original data. When the transient features are recognized as “microwave oven on” or “PC on” by SVM, the algorithm ignores
the next “Off” and “On” events because they are fake symbols. It is thus able to obtain the full process of the whole event.
Data from two-load combinations of four typical appliances are employed for testing in this experiment. The algorithm runs on 10 different datasets for each group. For a single case, the time
consumption is lower than 4 seconds. The result of this process is shown in Table 2. Practically all events are accurately detected by ESA except Load2-On, for which the accuracy is still
higher than 80%.
3.2.2. Implementation of Load Classification
The recognition process on each phase is run with a Gaussian kernel function. Using the segmented steady-state power data for the test, some selected features are shown in Figures 6 and 7,
where Figure 6 shows the training unprocessed current data in time domain, and Figure 7 shows the training current data in frequency domain. The algorithm runs 70 times on the data collected from the
11 different loads. As for a single case, the performance time consumption is lower than 2 seconds. The recognition results are shown in Table 3.
It can be seen that all the recognition accuracy rates are higher than 90%, which shows the strong recognition performance of our algorithm. Moreover, although the training processes always take several
minutes, they avoid a mass of pre-work on processing signatures. The testing processes are much quicker than in previous methods because they employ a comparatively small number of points from the load
frequency features. However, we would like to indicate that the number of appliances is limited and no transient event is involved. The performance may come down when more devices and events are used
in experiments.
4. Conclusion
This paper has proposed an approach to nonintrusive appliance load monitoring for managing electrical consumption. This approach can automatically detect the switch-on and switch-off events of
domestic appliances and classify different appliances using load features and advanced algorithms, thereby monitoring the household power consumption of individual devices. The new transient detection
algorithm, in combination with turn-on and turn-off transient waveform analysis, is developed to detect the mutative power events. The load classification technique, which employs a support vector
machine to recognize different appliances, is capable of identifying various kinds of power loads with improved recognition accuracy and computational speed. A recognition rate of higher than
90% has been achieved in recognizing various electrical devices. Our experiments have shown that recognition approaches based on support vector machines will be a trend for the process of residential
load monitoring.
1. G. Guo, S. Li, and K. Chan, “Face recognition by support vector machines,” in Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 196–201, 2000.
2. S. Du and T. Wu, “Support Vector Machines for Regression,” Acta Simulata Systematica Sinica, TP18, CNKI:SUN:XTFZ.0.2003-11-022, 2003.
3. S. Inagaki, T. Egami, T. Suzuki, H. Nakamura, and K. Ito, “Nonintrusive appliance load monitoring based on integer programming,” Electrical Engineering in Japan, vol. 174, no. 2, pp. 1386–1392, 2011.
4. S. Aoki, M. Onishi, A. Kojima, and K. Fukunaga, “Detection of a solitude senior’s irregular states based on learning and recognizing of behavioral patterns,” IEEJ, vol. 125, pp. 259–265, 2005.
5. C. Laughman, K. D. Lee, R. Cox et al., “Advanced nonintrusive monitoring of electric loads,” IEEE Power and Energy Magazine, pp. 56–63, April 2003.
6. J. Li, G. Poulton, and G. James, “Agent-based distributed energy management,” in Proceedings of the 20th Australian Joint Conference on Advances in Artificial Intelligence, vol. 4830, pp. 569–578, December 2007.
7. J. Li, G. Poulton, and G. James, “Coordination of distributed energy resource agents,” Applied Artificial Intelligence, vol. 24, no. 5, pp. 351–380, 2010.
8. Y. Guo, J. Li, and G. James, “Evolutionary optimisation of distributed energy resources,” in Proceedings of the 18th Australian Joint Conference on Advances in Artificial Intelligence, vol. 3809, pp. 1086–1091, Sydney, Australia, December 2005.
9. R. Li, J. Li, G. Poulton, and G. James, “Agent-based optimisation systems for electrical load management,” in Proceedings of the 1st International Workshop on Optimisation in Multi-Agent Systems, pp. 60–69, Estoril, Portugal, May 2008.
10. J. Li, G. James, and G. Poulton, “Set-points based optimal multi-agent coordination for controlling distributed energy loads,” in Proceedings of the 3rd IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO '09), pp. 265–271, San Francisco, Calif, USA, September 2009.
11. J. Li, G. Poulton, G. James, and Y. Guo, “Multiple energy resource agent coordination based on electricity price,” Journal of Distributed Energy Resources, vol. 5, pp. 103–120, 2009.
12. G. W. Hart, “Nonintrusive appliance load monitoring,” Proceedings of the IEEE, vol. 80, no. 12, pp. 1870–1891, 1992.
13. S. B. Leeb, S. R. Shaw, and J. L. Kirtley, “Transient event detection in spectral envelope estimates for nonintrusive load monitoring,” IEEE Transactions on Power Delivery, vol. 10, no. 3, pp. 1200–1210, 1995.
14. M. C. Ferris and T. S. Munson, “Interior point methods for massive support vector machines,” Tech. Rep. 00-05, Computer Sciences Department, University of Wisconsin, Madison, Wis, USA, 2000.
15. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
16. J. J. Moré and G. Toraldo, “Algorithms for bound constrained quadratic programming problems,” Numerische Mathematik, vol. 55, no. 4, pp. 377–400, 1989.
17. O. L. Mangasarian and D. R. Musicant, “Active set support vector machine classification,” in Advances in Neural Information Processing Systems, T. Leen, T. Dietterich, and V. Tresp, Eds., vol. 13, pp. 577–583, MIT Press, Cambridge, Mass, USA, 2001.
18. J. Li, S. West, and G. Platt, “Power decomposition based on SVM regression,” in Proceedings of the 4th International Conference on Modelling, Identification and Control (ICMIC '12), pp. 1256–1261, Wuhan, China, June 2012.
19. A. J. Bijker, X. Xia, and J. Zhang, “Active power residential non-intrusive appliance load monitoring system,” in IEEE AFRICON Conference, pp. 1–6, September 2009.
20. H.-H. Chang, K.-L. Chen, Y.-P. Tsai, and W.-J. Lee, “A new measurement method for power signatures of non-intrusive demand monitoring and load identification,” in Proceedings of the 46th IEEE Industry Applications Society Annual Meeting (IAS '11), pp. 1–7, Orlando, Fla, USA, 2011.
21. T. Onoda, H. Murata, G. Rätsch, and K.-R. Müller, “Experimental analysis of Support Vector Machines with different kernels based on non-intrusive monitoring data,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN '02), vol. 3, pp. 2186–2191, Honolulu, Hawaii, USA, 2002.
22. R. Kadouche, B. Chikhaoui, and B. Abdulrazak, “User's behavior study for smart houses occupant prediction,” Annals of Telecommunications, vol. 65, no. 9-10, pp. 539–543, 2010.
23. S. R. Gunn, “Support vector machines for classification and regression,” Faculty of Engineering, Science and Mathematics, School of Electronics and Computer Science, University of Southampton, 1998.
24. G. L. Grinblat, L. C. Uzal, H. A. Ceccatto, and P. M. Granitto, “Solving nonstationary classification problems with coupled support vector machines,” IEEE Transactions on Neural Networks, vol. 22, no. 1, pp. 37–51, 2011.
25. J. Liang, S. K. K. Ng, G. Kendall, and J. W. M. Cheng, “Load signature study, Part I: basic concept, structure, and methodology,” IEEE Transactions on Power Delivery, vol. 25, no. 2, pp. 551–560, 2010.
26. M. Zeifman and K. Roth, “Nonintrusive appliance load monitoring: review and outlook,” IEEE Transactions on Consumer Electronics, vol. 57, no. 1, pp. 76–84, 2011.
27. J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679–698, 1986.
28. D. Demigny and T. Kamlé, “A discrete expression of Canny's criteria for step edge detector performances evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 11, pp. 1199–1211, 1997.
29. F. Truchetet, F. Nicolier, and O. Laligant, “Subpixel edge detection for dimensional control by artificial vision,” Journal of Electronic Imaging, vol. 10, no. 1, pp. 234–239, 2001.
30. I. Pitas and A. Venetsanopoulos, “Nonlinear mean filters in image processing,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 34, no. 3, pp. 573–584, 1986.
31. A. Benazza-Benyahia, J. C. Pesquet, and H. Krim, “A nonlinear diffusion-based three-band filter bank,” IEEE Signal Processing Letters, vol. 10, no. 12, pp. 360–363, 2003.
32. M. A. Schulze, “An edge-enhancing nonlinear filter for reducing multiplicative noise,” in Nonlinear Image Processing VIII, E. R. Dougherty and J. Astola, Eds., vol. 3026 of Proceedings of SPIE, pp. 46–56, San Jose, Calif, USA, February 1997.
33. H. Hwang and R. A. Haddad, “Multilevel nonlinear filters for edge detection and noise suppression,” IEEE Transactions on Signal Processing, vol. 42, no. 2, pp. 249–258, 1994.
34. O. Laligant and F. Truchetet, “A nonlinear derivative scheme applied to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 242–257, 2010.
35. L. Jiang, S. Luo, and J. Li, “An approach of household power appliance monitoring based on machine learning,” in Proceedings of the 5th International Conference on Intelligent Computation Technology and Automation (ICICTA '12), pp. 577–580, January 2012.
36. T. Joachims, “Making large-scale SVM learning practical,” LS8-Report, University of Dortmund, 1998.
37. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
38. A. B. Ji, J. H. Pang, and H. J. Qiu, “Support vector machine for classification based on fuzzy training data,” Expert Systems with Applications, vol. 37, no. 4, pp. 3495–3498, 2010.
39. S. Luo, Q. Hu, X. He, J. Li, J. S. Jin, and M. Park, “Automatic liver parenchyma segmentation from abdominal CT images using support vector machines,” in Proceedings of the IEEE/CME International Conference on Complex Medical Engineering (ICME '09), p. 10071, Tempe, Ariz, USA, April 2009.
40. L. Jiang, J. Li, S. Luo, and S. West, “Literature review of power disaggregation,” in Proceedings of the IEEE International Conference on Modelling Identification and Control, pp. 38–42, 2011.
41. R. Debnath and H. Takahashi, “Kernel selection for the support vector machine,” IEICE Transactions on Information and Systems, vol. E87-D, no. 12, pp. 2903–2904, 2004. | {"url":"http://www.hindawi.com/journals/acisc/2012/742461/","timestamp":"2014-04-18T01:59:11Z","content_type":null,"content_length":"111724","record_id":"<urn:uuid:e41bece5-990e-4f27-824e-033641503b79>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Some aspects of quantum statistical inference
Seminar Room 1, Newton Institute
In classical parametric statistical inference, an important question is `What parts of the data are informative about the parameters of interest?'. Key concepts here are those of sufficient
statistic, ancillary statistic and cut. Some analogous concepts for quantum instruments will be outlined. | {"url":"http://www.newton.ac.uk/programmes/QIS/seminars/2004111710301.html","timestamp":"2014-04-21T04:31:23Z","content_type":null,"content_length":"3931","record_id":"<urn:uuid:2dd3c67c-9a58-4a5a-a5f5-f5052d70fd50>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Re Harvey's "nonconstructive proofs."
Gabriel Stolzenberg gstolzen at math.bu.edu
Tue Jan 29 23:37:19 EST 2008
On November 11, 2007, Harvey Friedman posted a very short message,
"nonconstructive proofs." I am not sure what he meant to say in it,
but, on a reading I find attractive (though very possibly not what
he meant), I find it remarkable.
On this reading, Harvey's posting is, at least in part, a response
to my own message, "the power of nonconstructive reasoning," of Nov 6,
wherein I discuss a fairly popular example of the alleged power of
nonconstructive reasoning and show that it (this particular example)
is an illusion. (The illusion is that a constructive proof of the
same statement, if there is one, would be less elegant and much more
complicated than the classical one. The statement is that there are
two irrational numbers, x and y, such that x^y is rational.)
So far as I can make out, the content of Harvey's posting is that,
for Kruskal's theorem and two related ones, (1) the usual proofs are
"highly nonconstructive" and (2) there are also constructive proofs
(in reasonable senses of "constructive"), but, unlike the ones in the
simple example I considered, they appear to be "more involved."
To me, this is a good observation. Harvey describes the situation
without trying to draw any unwarranted conclusion from it. By itself,
the fact that the existing constructive proofs are more involved means
very little.
E.g., it could be that the authors of these proofs were merely
trying to "constructivize" a classical proof, instead of working
"naturally" constructively (as if classical mathematics didn't exist).
Such constructivizations are frequently messier than the classical
proofs they copy. (The standard, embarrassingly lame, excuse for this
is, sure, it's more complicated, but this is because it's a stronger
result. This is the flip side of the equally lame, "a classical proof
is a good first step toward getting a constructive one.")
Or these "more involved" constructive proofs might simply have
been first tries. The first proof of a theorem or theory (even
a published one) is often, in effect, a first draft, which is then
reworked (and reworked and reworked and...), usually by others.
Speaking only for myself, if it could be shown convincingly that
every constructive proof of Kruskal et al has to be messier/more
complicated than the nicest classical proof, that would be fantastically
interesting---even more so, if there was a satisfying explanation of
why this is so. (E.g., does it have to do with the impredicativity that
Harvey has shown every proof of Kruskal requires?)
Finally, based on past experience, I would find it nice but much
less exciting to have a constructive proof of Kruskal that is at least
as nice as the nicest classical one. In the several test cases that I
tried for other assertions, some of which took several years to complete
(it is not just a matter of making a proof but of creating a theory
from which it emerges in a natural way), this is indeed how it worked
As for Kruskal itself, in around 1990, I helped to rework what, so
far as I knew, was the first constructive proof of Kruskal. Though,
as it stood, much of it seemed to have been done by brute force, it
seemed to be a promising candidate for being transformed (by a series
of reworkings) into a very nice proof of Kruskal. But the author
dropped the project and, so far as I know, that was the end of it.
Gabriel Stolzenberg
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-January/012612.html","timestamp":"2014-04-17T21:59:12Z","content_type":null,"content_length":"5883","record_id":"<urn:uuid:0ee28056-9caf-4605-91f9-6a9d9ed8a910>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2009 [00309]
Re: Generalized Fourier localization theorem?
• To: mathgroup at smc.vnet.net
• Subject: [mg102496] Re: [mg102444] Generalized Fourier localization theorem?
• From: danl at wolfram.com
• Date: Tue, 11 Aug 2009 04:05:12 -0400 (EDT)
• References: <200908092219.SAA09118@smc.vnet.net>
> The following is a math question, not a Mathematica question, but it
> relates to a Mathematica calculation I'm attempting to do, so I hope it
> can be raised in this group.
> Suppose a complex-valued function f[x] with x real, has a region of
> finite width within the range -Infinity < x < +Infinity where the
> function f[x] is identically zero.
> Does this imply that its Fourier transform g[s] with s real can
> _not_ have any such region of finite width where g[s] is identically
> zero within its similar domain?
> Similar theorem for the Fourier series of a periodic function?
> Thanks for any pointers.
No to the first. A Dirac comb has infinitely many such regions, and its
FT is also a Dirac comb.
In[1]:= FourierTransform[DiracComb[x], x, t]
Out[1]= DiracComb[-(t/(2 \[Pi]))]/Sqrt[2 \[Pi]]
Though the comb is a periodic function, this does not comprise a
counterexample for the periodic case. There we have discrete frequencies
(by scaling, can take them as the integers), so it's not obvious to me
what would be the correct analogue of the question. But in any case we now
have all components of the FT nonzero (they all are unity).
In[2]:= Integrate[DiracDelta[x]*Exp[I*x*t], {x, -Pi, Pi},
Assumptions -> Element[t, Reals]]
Out[2]= 1
Daniel Lichtblau
Wolfram Research
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Aug/msg00309.html","timestamp":"2014-04-16T10:24:43Z","content_type":null,"content_length":"26693","record_id":"<urn:uuid:5995afb3-d443-43aa-b9d2-514f0190320e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Snub Anti-Prisms
This is an infinite family of 2-uniform polyhedra where a base polygon (an n-gon) is surrounded by vertices of the form n.3.3.3.3; the snub vertices are then of the form 3.3.3.3.3. Substituting 2 for
n gives the Johnson Solid snub disphenoid, {3} is a regular icosahedron, {4} is the Johnson Solid snub square anti-prism and {5} is a snub pentagonal anti-prism; however, this last example is
noticeably not convex. Note: The n-gonal members of this family are briefly discussed in Professor Bonnie Stewart's "Adventures Among the Toroids" 2^nd Ed. Ex 115 on page 160.
Substituting an n/d-gon for the n-gon in the above extends the family to allow an infinite number of locally convex polyhedra (in the sense that no dihedral angle is > pi). If this is not obvious
from the images above take a look at this cutaway model of a 7/3 snub anti-prism where only one rotation of the triangular faces around the prismatic axis of symmetry is shown.
These polyhedra are generated from the anti-prisms by dividing the anti-prism into two portions, each of which consists of the base n/d-gon and its edge-connected triangular faces. A ring of snub
triangles is then inserted between these two portions. This is evident in this example of a highlighted 5/2 snub anti-prism with the snub triangles in yellow.
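For readers who want to experiment, the vertices of the underlying n/d anti-prism are easy to generate; the Python sketch below is only a starting frame for the construction (the radius r and height h are left as free parameters here, whereas the actual snub solids require solving for unit edge lengths):

    import math

    def antiprism_vertices(n, d, r=1.0, h=1.0):
        # Two n-gon rings wound d times; the lower ring is rotated half a step
        step = 2 * math.pi * d / n
        top = [(r * math.cos(k * step), r * math.sin(k * step), h / 2)
               for k in range(n)]
        bottom = [(r * math.cos((k + 0.5) * step),
                   r * math.sin((k + 0.5) * step), -h / 2)
                  for k in range(n)]
        return top + bottom

    verts = antiprism_vertices(7, 3)  # frame for the 7/3 case

The snub construction then separates the two halves and inserts the ring of snub triangles between them, as in the highlighted 5/2 example above.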
The upper bound for local convexity is at n/d = 4.7445... Click here for proof. Cases generated around this limit include 4.5 (see 9/2), 4.666 (see 14/3) and 4.75 (see 19/4). Only the first two are locally convex.
A full table of locally convex snub anti-prisms with d>1 and n<=12 is below.
│ Snub Anti-prisms: │ 5/2 │ 7/2 │ 7/3 │ 8/3 │ 9/2 │ 9/4 │ 10/3 │ 11/3 │ 11/4 │ 11/5 │ 12/5 │
Snub anti-prisms also exist for n/d<2. The lower limit is undetermined. In these cases the base polygon is retrograde. In general the retrograde n/(n-d) snub anti-prism is isomorphous to the n/d snub
anti-prism. Examples of retrograde snub anti-prisms are given here: 7/4, 5/3, 3/2, 10/7, 4/3, 5/4 ^[1]
As the icosahedron is a member of this family, the various isomorphs to the triangular snub anti-prism also occur in this list of isomorphs to the icosahedron.
The snub anti-prisms can be gyro-elongated by the insertion of a band of triangles between the two halves of the snub anti-prism. This generates a family I term the pentakis pentaprisms. They can
also be deformed into anti-prisms and gyrobicupolas - see 'Twisters'.
A family also exists of "great" snub anti-prisms, an example of the 7/3 great snub anti-prism is shown above. These polyhedra are isomorphous to the snub anti-prisms but have starred vertices. They
have the same relationship to the snub anti-prisms as the great icosahedron has to the icosahedron. Indeed the great icosahedron is a member of this family and can be regarded as a "triangular great
snub anti-prism".
A table of those generated to date with n/d>2 is as follows:
│Great Snub Anti-Prisms: │4│5 ^[1]│5/2│7/2│7/3│8/3│9/2│9/4│10/3│11/3│11/4│11/5│12/5│
Great snub anti-prisms exist for n/d<2, the retrograde base polygons then having the effect of partially un-crossing the starred vertices. Examples are the 3/2, 4/3, 5/3, 7/4, 8/5 and 5/4 ^[1]. Note
the triangular edges on the 4/3 example are coplanar with the square faces.
Exotic Snub Anti-Prism Isomorphs
Two additional isomorphs exist for certain ranges of n/d. These are (a) the hybrid snub anti-prisms and (b) the inverted snub anti-prisms. Both families have pyramidal rather than prismatic
symmetry and are 4-regular polyhedra.
The hybrid snub anti-prisms are so named as they have the appearance that one half of a pro-grade n/d snub anti-prism is joined to one half of its retrograde n/d* twin. For example the 5/2-5/3 hybrid
(above) the 7/3-7/4 hybrid or the 8/3-8/5 hybrid. The limits of this 'hybrid' family are undetermined. The hybrid 3-3/2 snub anti-prism also exists and is synonymous to a tri-inverted icosahedron.
(see isomorphs to the icosahedron). This is apparent by examination of this 4-4/3 hybrid.
The inverted snub anti-prisms are so named as one of the half snub anti-prisms is inverted back into the centre of the figure, meaning that the {n/d}-gonal cap is predominantly hidden. For example
this 4, 5/2 or 7/3 example (above). The limits of this 'inverted' family are undetermined. The inverted triangular snub anti-prism also exists and is synonymous to a tri-everted great icosahedron.
(see isomorphs to the icosahedron)
An unexpected coplanar case ^[2]
As the size of the base polygon increases, the two bases move closer together; at n=9 they reach the point where the edge-connected triangles are coplanar with the base. The enneagonal snub antiprism is
shown on the left above (VRML, OFF). Coplanarity is also evident in the 9/7 snub anti-prism (above right) (VRML, OFF), where the edge-connected triangular faces are coplanar with the 9/2-gonal caps
but are now pointing inwards. (Note: the 9/2-gons in this model have been coloured blue to avoid migraines when viewing the VRML file).
Notes and acknowledgements
[1] I am indebted to Adrian Rossiter, developer of the Antiprism software for the generation of the great 5, the 5/4 isomorphs, and the generation of the coplanar 9 and 9/7. Adrian has developed
software to generate an arbitrary isomorph of any n/d snub anti-prism and has also generated two fascinating animations of the {101/d} antiprism for varying d. The files are around 6MB each, see
www.antiprism.com/misc/snu101_s0.gif (snub anti-prisms) and www.antiprism.com/misc/snu101_s1.gif (great snub anti-prisms).
[2] The coplanarity of the {9/7} snub anti-prism was discovered by Roger Kaufman, that of the {9}-snub anti-prism by Adrian Rossiter. | {"url":"http://www.orchidpalms.com/polyhedra/snub-anti-prisms/prisms1.html","timestamp":"2014-04-16T18:55:28Z","content_type":null,"content_length":"13850","record_id":"<urn:uuid:82a29f3d-430f-4141-9c6c-320be8d3617d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard Operation Procedure for measuring
average ASTM grain size of non-equiaxed grain structure
1. Lineal Intercept Procedure
2. Circular Intercept Procedures
□ Hilliard Single-Circle Procedure
□ Abrams Three-Circle Procedure
Generally, for non-equiaxed structure, more information can be obtained by making separate size determinations along parallel line arrays that coincide with all three principal directions of the specimen. Therefore, longitudinal (l), transverse (t) and plane (p) specimen sections are used. The number of interceptions is counted. (An interception is a point where a test line is cut by a grain boundary.)
Lineal Intercept Procedure
1. The average grain size is estimated by counting the number of grains intercepted by one or more straight lines long enough to yield at least 50 intercepts.
2. It is desirable to select a combination of test line length and magnification such that a single field will yield the required number of intercepts.
3. Usually the ends of the straight test lines will lie inside grains; precision will be reduced if the average count per test line is low. If possible, use either longer test lines or a lower magnification.
4. Use the test lines of known lengths and count the number of intersection on three to five blindly selected and widely separated fields and then an average number of intersections is calculated
for all the principal directions planes l, t and p.
5. Mean number of interceptions per unit length, NL, on the fields of longitudinal (NL(l)), transverse (NL(t)) and planar (NL(p)) planes were thus calculated. (formula shown below)
Hilliard Single-circle Procedure
1. A single circle is used as the test line. This eliminates the end-of-line bias that can occur when counting grain boundary interceptions with straight test lines, as in the Lineal Intercept method.
2. The test circle diameter should never be smaller than the largest observed grain.
3. Do not use a small test circle as it is rather inefficient as a great many fields must be evaluated to obtain a high degree of precision.
4. A small reference mark is placed at the top of the circle to indicate the place to start and stop the count.
5. Use the test lines of known circumferences (length) and count the number of intersection on three to five blindly selected and widely separated fields until sufficient counts are obtained to
yield the required precision.
6. Repeat the previous step for all the principal directions planes l, t and p.
7. The mean number of interceptions per unit length, NL, on the fields of longitudinal (NL(l)), transverse (NL(t)) and planar (NL(p)) planes were thus calculated. (formula shown below)
8. Recommended 35 counts per circle with the test circle applied blindly over as large a specimen area as feasible until the desired total number of counts is obtained.
Abrams Three-Circle Procedure
1. Experience has shown that a total of 500 counts per specimen normally yields acceptable precision.
2. The pattern consists of three concentric and equally spaced circles having a total circumference of 500 mm.
3. Use the circular test lines of known circumferences (length) and count the number of intersection on at least five blindly selected and widely separated field.
4. Separately record the count of the intersections per pattern for each of the tests.
5. Repeat the previous step for all the principal directions planes l, t and p.
6. Mean number of interceptions per unit length, NL, on the fields of longitudinal (NL(l)), transverse (NL(t)) and planar (NL(p)) planes were thus calculated. (formula shown below.)
Calculation of results
• Use the following equation to find NL(n) for each principal direction plane
NL(n) = Ni / (L/M)
NL = mean number of interceptions per unit length,
Ni = the number of interceptions counted on the field,
L = the length of the test line(s) used,
n = the principal direction plane, and
M = the magnification.
• Use the equation below to find the average NL:
NL = (NL(l) × NL(t) × NL(p))^(1/3)
• Calculate the mean lineal intercept Lm, for each field using the following equation:
Lm = 1/ NL
• Determine the ASTM grain size number, G, using the following equation (with Lm in mm):
G = (-6.6457 log10 Lm) - 3.298
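For convenience, the whole calculation can be wrapped in a few lines of Python (an illustrative helper, not part of the standard; the sample numbers are made up):

```python
import math

def astm_grain_size(counts, line_length_mm, magnification):
    """ASTM G from interception counts on the l, t and p planes.

    counts          -- [Ni(l), Ni(t), Ni(p)], interceptions per field
    line_length_mm  -- test-line length L as measured on the image
    magnification   -- M, so the true line length is L/M
    """
    NL = [n / (line_length_mm / magnification) for n in counts]
    NL_mean = (NL[0] * NL[1] * NL[2]) ** (1.0 / 3.0)   # geometric mean over the planes
    Lm = 1.0 / NL_mean                                  # mean lineal intercept [mm]
    G = -6.6457 * math.log10(Lm) - 3.298                # the document's equation
    return Lm, G

print(astm_grain_size([60, 55, 58], 500, 100))
```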
References
1. ASTM E 112 – Standard test methods for determining average grain size, p229-251 (NTU)
2. Grain size measurement, p85- (NUS TN689.2 Pra)
3. ASTM E 930-92 – Standard test methods for determining the largest grain observed in a metallographic section (ALA grain size), p666-670 (NTU)
4. ASTM E 1181-87 – Standard test methods for characterizing duplex grain sizes, p725-738 (NTU)
5. ASTM E 1382-97 – Standard test methods for determining average grain size using semiautomatic and automatic image analysis, p855-878 (NTU)
6. BS DD 44:1975 – Methods for the determination of grain size of non-ferrous alloys, p3-10 (NTU, BS)
7. JIS H 0501 – Methods for estimating average grain size of wrought copper and copper alloys, p261-165 (NTU)
8. Metal Handbook (Metallography) – Grain size & particle distribution, p129-134 | {"url":"http://www.tppinfo.com/metallurgical_lab/grain_size.html","timestamp":"2014-04-19T22:06:10Z","content_type":null,"content_length":"13871","record_id":"<urn:uuid:aa314085-f808-4a1c-a15e-c15a9951bd1c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aluminum trigonometry
I ran into an interesting trigonometry problem a few weeks ago, when I was working on the cable pole installation. The problem was simple enough: extend the pole such that the trash trucks wouldn’t
snag it, thus ripping it down. But the real question was: how tall must I extend it to get adequate clearance? Indeed.
Let’s take a look at the problem^ {1} a little closer. As I tend to be a visual person, I have to start with a drawing:
This is a crude diagram of my problem. The trapezoidal shape represents my aluminum pole and cable line and its geometric relationship to the alley and road. To help with all these visuals, I’ll show
you the alley again (click for a bigger view):
As with all math problems, one should start with the knowns. I analyzed the pole and its surrounds by taking some measurements. Here’s what I know:
1. pole height = 8.5ft
2. distance from the pole base to the alley road = 6ft
3. width of the alley road = 13ft
4. at the current pole height, the height of the cable at the start of the alley = 11ft
The goal in any mathematical problem is to find as many equations describing the system as variables. This will allow you to solve the problem. So, let’s solve it!^ {2}
The next step would be to define our variables:
The blue colored vertical line on the far left of my diagram above is the original height of the pole (8.5 ft). The red line above it, which I will call x, is the unknown extension length.
The angle that the blue cable makes with the horizontal is θ. Note that θ changes when the pole is raised, because the far end of the cable stays anchored at a fixed height across the alley.
Now we can start to define some trigonometric identities. I'll be using the tangent based on our above knowns. First, we can state the old pole height as:
8.5 = 11 - 6·tan(θ_old)
While we know that:
tan(θ_old) = rise / run
We have our first variable solution:
tan(θ_old) = (11 - 8.5) / 6 = 5/12 ≈ 0.417
Now, we also know that the cable's height at the far edge of the alley (6 + 13 = 19 ft from the pole base) doesn't change when the pole is raised:
8.5 + 19·tan(θ_old) = (8.5 + x) + 19·tan(θ)
Let's clean it up:
Equation #1: 19·tan(θ) = 19·(5/12) - x ≈ 7.92 - x
We can express the cable height at the start of the alley (the clearance, c) as:
Equation #2: c = (8.5 + x) + 6·tan(θ)
Substituting Equation #2 into #1 and we have:
Equation #3: 19·(c - 8.5 - x)/6 = 7.92 - x
Now the desired clearance will be c. Let's set that to be necessarily 14′.^{3} So now we have another relation:
Equation #4: c = 14
There we have it, two equations left, two unknowns. Let's substitute Equation #4 into #3 and we have:
19·(14 - 8.5 - x)/6 = 7.92 - x, which gives x ≈ 4.4 ft.
So I bought a 4 foot extension pole and got about 14 feet of cable clearance. A little hard math meets the real world.
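For what it's worth, the same two-equation system solves in a few lines of Python with sympy (a quick sketch; here t stands in for tan θ, and 8.5 + 19·(2.5/6) is the fixed anchor height worked out above):

```python
from sympy import symbols, Eq, solve

x, t = symbols('x t')                 # x = extension length, t = tan(theta)

anchor = 8.5 + 19 * (2.5 / 6)         # cable height at the far edge stays fixed
eq1 = Eq(8.5 + x + 19 * t, anchor)    # raising the pole flattens the cable
eq2 = Eq(8.5 + x + 6 * t, 14)         # require 14 ft of clearance at the alley edge

print(solve((eq1, eq2), (x, t)))      # x ≈ 4.38 ft
```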
1. It occurred to me only after the project was initially complete that I might not have enough clearance for passing trucks in the alley. That’s what I get when I don’t put in enough time on the
design side and speed ahead into production! [↩]
2. And I must thank the wonderful PHP Math Publisher plugin! [↩]
3. I called the local trash company to find out how big these trucks are [↩]
Tags: trigonometry | {"url":"http://www.electrolund.com/2008/11/aluminum-trigonometry/","timestamp":"2014-04-20T16:50:14Z","content_type":null,"content_length":"49333","record_id":"<urn:uuid:9b23a43e-4fc8-4157-9191-bda0a1264208>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Concordville Trigonometry Tutor
Find a Concordville Trigonometry Tutor
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-euclidean geometry. I taught Algebra 2 with a national tutoring chain for five years. I have taught Algebra 2 as a private tutor since 2001.
12 Subjects: including trigonometry, calculus, writing, geometry
...I hold a PhD in Algorithms, Combinatorics and Optimization. I have 14 years' experience as a practicing actuary. I am a Fellow of the Society of Actuaries, having completed the actuarial exam
18 Subjects: including trigonometry, calculus, statistics, geometry
...Science related math concepts in pre-algebra that I can clarify for students include scientific notation and use of significant figures. My initial SAT Math training comes from working at a
local tutoring center. There I interacted with 10-12 students 2-3 times per week over the course of several months to prepare them for the Math section of the SAT.
9 Subjects: including trigonometry, chemistry, algebra 2, geometry
...My background is in engineering and business, so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain
information. After all, math IS fun!In the past 5 years, I have taught differential equations at a local university.
13 Subjects: including trigonometry, calculus, algebra 1, geometry
I love math and teaching. There is an awesome reward when watching a struggling student as he begins to understand what he needs to do and how everything fits together. I previously taught Algebra
I, II, III, Geometry, Trigonometry, Precalculus, Calculus, Intro to Statistics, and SAT review in a public school.
12 Subjects: including trigonometry, calculus, statistics, geometry | {"url":"http://www.purplemath.com/concordville_pa_trigonometry_tutors.php","timestamp":"2014-04-19T05:09:55Z","content_type":null,"content_length":"24431","record_id":"<urn:uuid:9413eba5-bfe0-408b-a012-03d4d9e9f0a1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find the value of x1^6 +x2^6 of this quadratic equation without solving it
1. The problem statement, all variables and given/known data
Solve for [itex]x_1^6+x_2^6[/itex] for the following quadratic equation where [itex]x_1[/itex] and [itex]x_2[/itex] are the two real roots and [itex]x_1 > x_2[/itex], without solving the equation.
[itex]5x^2-2\sqrt{19}\,x+3=0[/itex]
2. Relevant equations
3. The attempt at a solution
I tried factoring it and I got [itex](-5x+\sqrt{19})^2-4=0[/itex]
What can I do afterwards that does not constitute as solving the equation? Thanks.
Hello chloe1995. Welcome to PF !
Suppose that [itex]x_1[/itex] and [itex]x_2[/itex] are the solutions to the quadratic equation, [itex]\displaystyle \ \ ax^2+bx+c=0\ .[/itex]
Then [itex]\displaystyle \ \ x_1 + x_2 = -\frac{b}{a}\ \ [/itex] and [itex]\displaystyle \ \ x_1\cdot x_2=\frac{c}{a}\ .\ [/itex] | {"url":"http://www.physicsforums.com/showthread.php?p=4239999","timestamp":"2014-04-16T07:35:18Z","content_type":null,"content_length":"34048","record_id":"<urn:uuid:932e51e2-1a95-4b7a-bd44-cd1bc65c2241>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
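For reference, the target quantity can be computed from these two symmetric functions alone via Newton's identities, without ever finding the roots. A quick Python sketch, using the coefficients implied by the completed square above ((-5x+√19)² - 4 = 0, i.e. 25x² - 10√19·x + 15 = 0):

```python
import math

a, c = 25.0, 15.0
b = -10.0 * math.sqrt(19)

e1 = -b / a                      # x1 + x2
e2 = c / a                       # x1 * x2

# Newton's identities: p_k = e1*p_{k-1} - e2*p_{k-2}
p = [2.0, e1]                    # p_0 = 2 (two roots), p_1 = e1
for k in range(2, 7):
    p.append(e1 * p[-1] - e2 * p[-2])

print(p[6])                      # x1^6 + x2^6 ≈ 4.24
```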
Jessica Austin: Computer science, robotics, and Agile software development
This fall, I’m going back to school to study Robotics as a graduate student.
It’s been almost five years since I graduated from undergrad, so to prepare myself I created a list of study materials for review. I hope others might find this list of recommendations helpful.
There were two main areas I wanted to cover: mathematics review, and introduction to robotics concepts. The latter section might be useful for someone interested in robotics but not sure which areas
they want to pursue.
Please comment if you have any questions!
General Math
Man, I wish I had read this book BEFORE undergrad. In this book, Velleman does three things:
• describes basic concepts in Logic
• gives common proof strategies, with plenty of examples
• dives into more set theory, defining functions, etc
He does all this assuming the reader is NOT a mathematician–in fact, he does an excellent job of explaining a mathematician’s thought process when trying to prove something.
I highly recommend this book if you feel uncomfortable reading and/or writing proofs, since it will make the following math books much more enjoyable to read!
Barron’s College Review Series: Calculus
This book was my warm-up. It is very simple, and is focused more on computation than rigorous proofs. I think I got through it in a weekend, while completing most of the exercises. It does NOT
include multivariate calculus.
Khan Academy lectures, while time-consuming, are a great reference if there is a specific concept that you’re struggling with. That said, I don’t recommend watching the whole series, but rather
searching for a specific topic (say, “gradient”) when you want more information.
Probability and Statistics
Khan Academy: Probability and Statistics (combined with Combinatorial Probabilities cheat sheet)
I have to say: I always had problems getting combinatorics straight in my head, and watching these videos + completing the exercises really helped.
Introduction to Bayesian Statistics by Bolstad
This book is AMAZING. Bayesian statistics is extremely important to modern robotics, and this book provides an excellent introduction. Highly recommended!
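To give a taste of the book's approach, here's a toy Bayesian update in Python (my own example, not from the book): a uniform prior on a coin's bias, updated on ten observed flips.

```python
from scipy.stats import beta

a, b = 1, 1                        # Beta(1, 1) prior: uniform on the bias p
heads, tails = 7, 3                # observed data
posterior = beta(a + heads, b + tails)

print(posterior.mean())            # posterior mean of p, ~0.67
print(posterior.interval(0.95))    # 95% credible interval for p
```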
Note that if you’re already comfortable with traditional probability, you can skip the Khan Academy altogether and skip straight to the Bolstad book.
Differential Equations
Elementary Differential Equations by Boyce and DiPrima
All-around excellent book. Probably my favorite, most-referenced textbook from undergrad.
Khan Academy: Differential Equations
Again, don’t watch the all the lectures, but use them as a reference when you want a simple, thoroughly-explained overview of a specific topic.
Linear Algebra
Linear Algebra by Hefferon (also available in print)
If you had to pick a single math topic to study before entering robotics, linear algebra would be it. This book is particularly good because it starts with solving systems of equations, defining
spaces, and creating functions and maps between spaces–and only after this foundation is laid does it introduce matrices as a convenient form for dealing with these concepts.
Again, don’t watch the all the lectures, but use them as a reference when you want a simple, thoroughly-explained overview of a specific topic.
I’ve been programming since high school, so I didn’t really need much review in this area. However, The Nature of Code is an amazing book, it’s free!, and it includes online exercises in the
Processing language, so I have to recommend it.
Also note that the Udacity CS-373 course includes programming exercises in Python.
If you complete the following courses, you’ll get a high-level understanding of some of the most important concepts in robotics.
Udacity CS-373, Artificial Intelligence for Robotics
Topics include: Localization, Particle Filters, Kalman Filters, Search (including A* Search), PID control, and SLAM (simultaneous localization and mapping). If you understand these concepts, you can
write software for a mobile robot! Even better, each section has multiple programming exercises in Python, so you really get practice with the topic.
If you want to dig deeper into some of the above topics, I recommend Sebastian’s book, Probabilistic Robotics
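To give a flavor of one of these topics, here's a toy 1-D Kalman filter in Python (my own sketch in the style of the course's exercises, not code from the course):

```python
def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Estimate a (nearly) constant value from noisy measurements.
    q = process noise variance, r = measurement noise variance."""
    x, p = x0, p0
    for z in measurements:
        p += q                 # predict: uncertainty grows between steps
        k = p / (p + r)        # Kalman gain: how much to trust the measurement
        x += k * (z - x)       # update the estimate toward the measurement
        p *= (1 - k)           # and shrink the uncertainty
        yield x

print(list(kalman_1d([5.1, 4.8, 5.3, 4.9, 5.0])))
```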
Udacity CS-271, Introduction to Artificial Intelligence
If you’re interested in Machine Learning, this is a great course. It’s not as slick as CS-373, but still worthwhile.
ChiBots SRS RoboMagellan 2012: Nomad Overview
This summer, my friend Bill Mania and I entered our robot in the ChiBots SRS RoboMagellan contest. To steal the description directly from the website:
Robo-Magellan is a robotics competition emphasizing autonomous navigation and obstacle avoidance over varied, outdoor terrain. Robots have three opportunities to navigate from a starting point to
an ending point and are scored on time required to complete the course with opportunities to lower the score based on contacting intermediate points.
Basically, we had to develop a robot that could navigate around a campus-like setting, find GPS waypoints marked by orange traffic cones, and do it faster than any of the other robots entered.
To give you an idea of what this looked like for us, here’s a picture of us testing in Bill’s backyard:
For our platform, we used a modified version of the CoroWare CoroBot, with additional sensors like ultrasonic rangefinders, a 6-DOF IMU, and wheel encoders.
Our software platform was ROS — rospy specifically — and we made liberal use of various components in the navigation stack. We were even able to attend the very first ROSCon in St. Paul, MN, which
was a blast and greatly expanded our knowledge of the software and what it was capable of.
Over the next few weeks, I’ll be writing more detailed posts about the robot and specific challenges we faced, including:
• Hardware and sensor overview
• Using robot_pose_ekf for sensor fusion of IMU + wheel encoders to allow us to navigate using dead reckoning
• Localization in ROS using a very, very sparse map
• Our attempts to use the move_base stack with hobby-grade sensors, and why we ended up writing our own strategy node
• Using OpenCV + ROS to find an orange traffic cone, and using this feedback to “capture” the waypoint
In the meantime, enjoy this video of the above scene, from the robot’s point of view!
Navigating a known map using a Generalized Voronoi Graph: an example
voronoi-bot is a robot that navigates by creating a Generalized Voronoi Graph (GVG) and then traveling along this graph to reach the goal. It requires a full map of the environment in order to
I completed this project during a class for Joel Burdick while an undergrad at Caltech. I’ve since added the code to github and started cleaning up the files so that they’re easier to understand and
reuse (refactoring, adding tests, etc). This is still in progress, but the code is functional in the meantime.
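For a quick feel of the underlying diagram, scipy can compute a plane Voronoi diagram of point obstacles in a few lines (the GVG construction in the papers below is more involved, handling extended obstacles and incremental, sensor-based building):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

rng = np.random.default_rng(0)
obstacles = rng.uniform(0, 10, size=(20, 2))   # point obstacles in a 10x10 world

vor = Voronoi(obstacles)   # edges are maximally distant from the nearest obstacles,
voronoi_plot_2d(vor)       # which is exactly why a robot can travel safely along them
plt.show()
```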
To read more about using GVG for navigation, I recommend the following:
Sensor Based Planning, Part II: Incremental Construction of the Generalized Voronoi Graph
Howie Choset, Joel Burdick
Mobile Robot Navigation: Issues in Implementation the Generalized Voronoi Graph in the Plane
Howie Choset, Ilhan Konukseven, and Joel Burdick
Path Planning for Mobile Robot Navigation using Voronoi Diagram and Fast Marching
Robotics Lab, Carlos III University
Agile Retrospectives Workshop
Earlier this year at work I led a short workshop on Agile Retrospectives. We already make use of the retrospective format very frequently within the technology department, but other departments are
not as familiar, so I put together this workshop to show them how amazingly useful retrospectives can be.
The full presentation is here, and I’ve also replicated a version of it throughout the following post.
Agile Retrospectives Workshop
What is a retrospective?
A retrospective is a team activity, where team members meet to understand their current process, discuss how it can be improved, and generate action items that can be acted on before the next
What are they good for?
• improving your process
• learning from past mistakes
• celebrating accomplishments
• getting your team on the same page
• improving your work environment
• making good teams great
Who can run them?
• retros do not have to be run by managers!
• in fact, it’s usually better if they are not
A retrospective is not…
a time to blame people
Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available,
and the situation at hand
– The Retrospective “Prime Directive”
a forum for fixing everything
• retro often, and make small, incremental changes each time
Outline of a Retrospective
• Set the stage
□ Make sure people feel comfortable
□ Check the safety level
• Gather Data
• Generate insights
• Plan of action
• Close
□ Want to leave on a positive note!
□ Review accomplishments and action items
Outline of a Retrospective: Example
Set the stage
• Team gathers in conference room
• Retro moderator gauges the “safety level”
□ team has been through lots of retros, comfortable giving feedback
□ things have been pretty “normal” since last retro
□ => high level of safety
• Moderator introduces the format (see picture below)
□ Divide the board into four sections: Things we’re happy about, things we’re not happy about, Ideas, Appreciations
□ Write notes on stickies and put them on the board, followed by team discussion
Generate insights
• Team members write down points on stickies and put them up on the board
• Moderator takes a few min to “aggregate” the issues and mentally sort through them
Plan of action
• Team discusses the stickies
• Comes up with action items, if needed
• (don’t have to have an action item for every point–sometimes the discussion is enough)
• Moderator reviews action items
Tips for Moderators
the format of a retro is very fluid, and facilitating is the art of choosing the correct format for the situation
that said, you can greatly improve your retrospectives with a little planning and empathy
Planning a Retrospective
• Take some time before each retro to figure out the best format for the participants
□ What is the expected safety level?
□ How experienced are they in giving feedback?
□ What sort of issues do you expect to hear about?
□ Do they need to dig deep into an obvious issue, or do they need to brainstorm and mix things up?
□ Is an outside facilitator more appropriate?
• Talk to some of the participants beforehand if you need to
Empathy is important!
• The best facilitators are able to monitor the emotions of the participants, and adjust the format appropriately
• You want to create an atmosphere where people can open up and get to the root cause of their issues
Teaching empathy is difficult, so if you are interested in learning more, I highly recommend reading An Anatomy of a Retrospective
Exercise: create your own retro
• We’ll split up into groups, and each group will get a scenario
• Consider the scenario, and choose which format would be appropriate (we’ll go over some examples in a moment), or create your own
Each group should answer the following questions for their scenario:
• What is the expected safety level?
• How experienced is the team in giving feedback?
• What sort of issues do you expect to hear about?
• Do they need to dig deep into an obvious issue, or do they need to brainstorm and mix things up?
• Who would be the most appropriate facilitator?
• How much time is needed?
Scenario 1
You work on a small, cross-functional team (3 developers, 1 QA, 1 BA) in the technology department. Your team has weekly sprints and bi-weekly retrospectives. The team is fairly mature, everyone is
familiar with one another and this process has been going on for about six months at this point.
You recently started developing software for a new platform: Android. You’ve done 3 releases now, so things are pretty stable, but you want to see if you can do things better next time.
Scenario 2
You work in sales, and your team has spent the last six months working on replacing a legacy phone system with a shiny, new one. There were some bumps along the way, but the new system is now out the
door and working fine.
This project involved a wide range of people from your department, technology, building management, etc. Some people worked on the project for its full length, others only worked on the project for a
few weeks, but they’re all here now.
Some people in the room have done retrospectives, but not all. In fact, lots of people are skeptical that this will even be a good use of their time.
Scenario 3
You work in the Customer Service department. Your team has retrospectives regularly, but it seems like lately the same problems keep coming up every time. You’ve had action items in the past, but it
doesn’t seem like that is working, since the problems still exist.
People are getting frustrated and are stuck when trying to think of fresh solutions. Every idea you come up with seems to either a) not work in practice, or b) require help from the Technology
department, which is too busy with other projects.
Scenario 4
You are a member of the Senior Management Team. It’s the beginning of a new quarter, which is always a great time to step back and take a look at the big picture.
Most of you have done a retrospective before. There are a lot of strong-willed people in the room. Everyone is very busy, and doesn’t have any more than the hour that they’ve set aside for this
Further Reading: A List of Retrospective Formats Of Which I Am Quite Fond
Timeline retrospective
An excellent format for a team retro at the end of a project
Appreciation Retrospective
Pair with the Timeline Retro above, for the end of a particularly difficult project
Complexity retro/root cause analysis
Another retro that is good for a small team (stream team) after the end of a project
Good ol’ Starfish, a team standby
Learning Matrix
Actions Centered
These are very similar to pluses/deltas format, but with a little extra to mix things up and get people thinking creatively again
Top 5
Gather stickies just like +/deltas, but then only discuss the top five. Sometimes good to split up into five teams to discuss if it is a larger group. Good to use if it seems like retrospectives are
too broad and don’t go deep into any particular topic
Circles and Soup
Allows the team to recognize which factors are within their control, so they can be constructive when making future plans
Good to use when team is feeling frustrated with issues/politics outside their control, or to preface a future-planning session
Sails and Anchors
Sometimes you need to step back and look at the big picture–what is pushing us forward? what is holding us back?
Force Field Analysis
Pick a goal, figure out what is pushing you towards that/holding you back. Sails and anchors format can also be used for the same purpose
Values-driven retro
http://retrospectivewiki.org/index.php?title=Pillars_Of_Agile_Spiderweb_Retrospective (same thing, different format)
Useful if you want to ensure all team members are on the same page about values
Helps the team think of creative solutions to problems
And finally…
Retrospective Surgery
A retrospective for your retrospectives =)
Interview on RosieSays
My friend Emily has a fantastic blog over at Rosie Says, and she's doing a series, So What Do You Do Exactly?, where she interviews people she knows but doesn't understand exactly what it is they do every day (the beer post is my favorite so far).
Anyway, she interviewed me a little while ago, to learn what it’s like to be a kickass lady software developer. Check it out!
Chicago GTUG Presentation: Building Robots with the Sparkfun IOIO
Last night I presented at the Chicago GTUG. It was held at 1871 in Merchandise Mart, and wow is that a great space! It was a real pleasure to talk there.
Here’s a link to the presentation: https://docs.google.com/presentation/d/1id7sUVDHFXhKzujg3dPWivC3kM5o3r7NIrWkq3IB_Ws/edit
Links to references from the presentation:
Fritzing Part for a generic Dual Motor Controller
I needed a part for a generic motor driver when I was working in Fritzing the other day, and I couldn’t find one so I decided to create one.
This motor driver consists of the basics needed for any dual motor controller:
• VIN
• GND
• Motor 1 IN (+)
• Motor 1 IN (-)
• Motor 2 IN (+)
• Motor 2 IN (-)
• VCC
• M1A
• M1B
• M2A
• M2B
For example the DFRobot DRI0002 2A Dual Motor Controller or the Robokits RKI-1004 5A Dual Motor Driver.
iohannes: a robot based off the Sparkfun IOIO
I will post something more thorough next week, but I wanted to get some pictures and a video up for the robot I’ve been working on.
The robot was inspired by the Sparkfun IOIO: a great little board that allows you to merge the world of Android phones with the world of hobby electronics. The result? A relatively cheap robotics
platform with a huge range of possibilities.
Here’s the breakdown:
Lynxmotion Tri-Track chassis with two 12V DC motors — $220.95 (I actually got this used for $175)
Robokits RKI-1004 Dual Motor Driver (up to 5A) — $16 (thanks, Robot City Workshop!)
12.0V 2200mAh NiMh battery pack — $24
SparkFun IOIO — $49.95
HC-SR04 Ultrasonic distance sensor — $13.95
HTC Evo — “free”
9.6V 1800mA NiMh battery pack — $18
Total cost: $342.85
Currently the robot has two modes: a manual control mode and an autonomous, obstacle-avoidance mode. In manual mode you simply tell the motors to go forward, backwards, left or right. In
obstacle-avoidance mode the robot will move forward until an obstacle is detected, at which point it will execute an “evasive maneuver” to clear the obstacle, and continue as before.
Code is here: https://github.com/jessicaaustin/robotics-projects
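The autonomous mode boils down to a simple sense-act loop. Here is the logic as a Python sketch (the real app is Java on Android; every robot.* method below is a hypothetical stand-in):

```python
SAFE_DISTANCE_CM = 40   # hypothetical threshold; tune for your chassis and speed

def autonomous_loop(robot):
    """Drive forward until an obstacle appears, then back off and pivot."""
    while robot.is_autonomous():
        if robot.read_ultrasonic_cm() > SAFE_DISTANCE_CM:
            robot.drive(left=1.0, right=1.0)     # path clear: full ahead
        else:
            robot.drive(left=-1.0, right=-1.0)   # evasive maneuver: back up,
            robot.sleep(0.5)
            robot.drive(left=1.0, right=-1.0)    # then pivot to a new heading
            robot.sleep(0.7)
```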
Next steps: get rosjava installed and integrated with the application. This will allow for remote control of the robot, plus remote computation like image processing of camera data.
Example code for Python OpenCV tutorials
Lately I’ve been getting into OpenCV. There are plenty of great tutorials out there, but I hate copy/pasting snippets of code from a blog post when I want to try something out.
So I started this repo on github: opencv-tutorial.
All the code runs! and has plenty of comments. I hope you find it useful as you go through these tutorials yourself. I also think the files are good starter templates for more complex scripts.
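As a taste of the format, here's a minimal script in the same spirit (my own snippet; the HSV range is a rough guess you'd tune per lighting) that masks out orange pixels, the first step toward finding a traffic cone:

```python
import cv2
import numpy as np

img = cv2.imread('frame.png')                 # any test image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # hue is easier to threshold than RGB

lower = np.array([5, 120, 120])               # rough "cone orange" in OpenCV's HSV
upper = np.array([20, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

cv2.imshow('orange pixels', mask)
cv2.waitKey(0)
```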
Moving from Stage 3 to Stage 4: laser and ranger
I recently moved to a new laptop, and in the process of installing player/stage, I moved from Stage 3.2.2 to Stage 4.0.1.
When I tried running a project of mine, however, I got the following error:
jaustin@navi:~/dev/player-bots/voronoibot$ robot-player cfg/voronoi.cfg
Registering driver
Player v.3.0.2
* Part of the Player/Stage/Gazebo Project [http://playerstage.sourceforge.net].
* Copyright (C) 2000 - 2009 Brian Gerkey, Richard Vaughan, Andrew Howard,
* Nate Koenig, and contributors. Released under the GNU General Public License.
* Player comes with ABSOLUTELY NO WARRANTY. This is free software, and you
* are welcome to redistribute it under certain conditions; see COPYING
* for details.
invoking player_driver_init()...
Stage driver plugin init
** Stage plugin v4.0.1 **
* Part of the Player Project [http://playerstage.sourceforge.net]
* Copyright 2000-2009 Richard Vaughan, Brian Gerkey and contributors.
* Released under the GNU General Public License v2.
Stage plugin: 6665.simulation.0 is a Stage world
stall icon /usr/share/stage/assets/stall.png
[Loading cfg/lab.world][Include pioneer.inc][Include map.inc][Include sick.inc]f: cfg/lab.world
err: Model type laser not found in model typetable (/build/buildd/stage-4.0.1/libstage/world.cc CreateModel)
err: Unknown model type laser in world file. (/build/buildd/stage-4.0.1/libstage/world.cc CreateModel)
The model type laser is defined in a couple of configs I have, like so:
name "stage"
provides [ "position2d:0" "laser:0" "speech:0" "graphics2d:0" "graphics3d:0" ]
model "r0"
define sicklaser laser
# laser-specific properties
# factory settings for LMS200
range_max 8.0
fov 180.0
samples 361
#samples 90 # still useful but much faster to compute
# generic model properties
color "blue"
size [ 0.156 0.155 0.19 ] # dimensions from LMS200 data sheet
At first I thought I had screwed up my stage installation, and was at the point where I was going to try to install from source. But then! I saw this in the README.txt file that is bundled with Stage
4 source:
Version 4.0.0
Major new release since worldfile syntax and API have both changed due
to a new unified ranger model. Laser model is now deprecated in favour
of the ranger. This follows a similar change in Player 3, where laser
interface support was deprecated but still in place due to lack of
Stage support. This release fixes that.
So since “laser” has been replaced by “ranger”, I replaced those words in my config files as well:
name "stage"
provides [ "position2d:0" "ranger:0" "speech:0" "graphics2d:0" "graphics3d:0" ]
model "r0"
define sicklaser ranger
# laser-specific properties
# factory settings for LMS200
range_max 8.0
fov 180.0
samples 361
#samples 90 # still useful but much faster to compute
# generic model properties
color "blue"
size [ 0.156 0.155 0.19 ] # dimensions from LMS200 data sheet
And it worked! | {"url":"http://blog.aus10.org/","timestamp":"2014-04-20T16:39:08Z","content_type":null,"content_length":"66540","record_id":"<urn:uuid:912c7a49-4f8c-4096-a8df-20252bb9498b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00318-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pre-Algebra Chapter 8 Vocab
NAME: ________________________
3 Multiple Choice Questions
1. a line segment that passes through the center of a circle and goes all the way across the circle
2. the set of all points in a plane that are the same distance from a given point
3. the distance around a polygon
2 True/False Questions
1. circumference → the distance around a circle
2. Area → the number of square units needed to cover a given surface. | {"url":"http://quizlet.com/4865971/test","timestamp":"2014-04-20T23:56:37Z","content_type":null,"content_length":"32075","record_id":"<urn:uuid:cf2efbaa-5060-4e37-8cf3-97693a7e6699>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burton, WA Science Tutor
Find a Burton, WA Science Tutor
...I feel communication is the strongest skill required for good tutoring. I have been helping kids for the past 3 years and have developed wonderful communication to help children in a variety
of settings. I recently was a volunteer tutor at the Kent and Covington libraries where I tutored children K-12th grade in many subjects.
25 Subjects: including biology, physics, trigonometry, microbiology
...I have degrees in Chemistry and Biology. I am a highly motivated teacher, with a deep desire to pass on my passion for science. I can take a complicated subject and break it down into simpler terms for students having trouble understanding.
16 Subjects: including physics, astronomy, biochemistry, botany
...I have worked as the Lead Math Tutor at the Math Resource Center of Highline Community College for one year, taking on quite a load of responsibility while maintaining a high GPA in the field of Physics. Currently I am transferring to the University of Washington Seattle campus on a full-ride ...
10 Subjects: including physics, philosophy, calculus, algebra 1
...I also have many years of experience with various computer systems and applications. I am currently employed as a Computer Support Technician at a moving company, as well as a writing tutor. I
have been raised by a family of teachers and the tradition continues.
30 Subjects: including physics, philosophy, physical science, reading
...My job involves collecting data in the field, conducting lab and statistical analyses, writing reports, and presenting results to various agencies and managers so they may make informed
decisions. I will help you make sense of potentially confusing terms and concepts by applying real world examples and using language that is easy to understand. It doesn't need to be
3 Subjects: including biology, ecology, botany
Question on differentiation/integration...
November 2nd 2010, 11:35 AM
Question on differentiation/integration...
Here is the question...
A 100 litre tank is initially full of a mixture of 10% alcohol and 90% water. Simultaneously, a pump drains the tank at 4 litres/second, while a mixture of 80% alcohol and 20% water is poured in
at rate 3 litres/second. Thus the tank will be empty after 100 seconds. Assume that the two liquids mix thoroughly, and let y litres be the amount of alcohol in the tank after t seconds.
dy/dt = 2.4 - ( 4y / (100 - t) )
Find y as a function of t. Hence deduce that the maximum amount of alcohol in the tank occurs after about 34 seconds, and is about 39.5 litres.
* I have identified the integrating factor as μ(t) = (100 - t)^(-4), coming from the coefficient 4/(100 - t)
Thank you so so much if you can help!!
November 2nd 2010, 01:31 PM
So y(0)= .1(100)= 10 litres.
Simultaneously, a pump drains the tank at 4 litres/second, while a mixture of 80% alcohol and 20% water is poured in at rate 3 litres/second. Thus the tank will be empty after 100 seconds. Assume
that the two liquids mix thoroughly, and let y litres be the amount of alcohol in the tank after t seconds.
dy/dt = 2.4 - ( 4y / (100 - t) )
Find y as a function of t. Hence deduce that the maximum amount of alcohol in the tank occurs after about 34 seconds, and is about 39.5 litres.
* I have identified the integrating factor as μ(t) = (100 - t)^(-4), coming from the coefficient 4/(100 - t)
Thank you so so much if you can help!!
Good! Since you have an integrating factor, you can solve for y(t). (Don't forget to use y(0)= 10 to determine the constant of integration.) Once you have done that, differentiate y with respect
to t and set equal to 0 just as you would with any function to find where there is a maximum or minimum.
November 2nd 2010, 01:41 PM
I've used the integrating factor but am still really struggling! I have...
y/(100 - t)^4 = ∫ 2.4/(100 - t)^4 dt
What should I end up with now??
Thank you for your help so far! | {"url":"http://mathhelpforum.com/calculus/161849-question-differentiation-integration-print.html","timestamp":"2014-04-18T04:59:02Z","content_type":null,"content_length":"6233","record_id":"<urn:uuid:8edd72d4-a0f5-413a-bb9e-ab79ba5e8e58>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
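Carrying the integration through from that point (a sketch, since the thread stops here):

$$\frac{y}{(100-t)^4} = \int \frac{2.4}{(100-t)^4}\,dt = \frac{0.8}{(100-t)^3} + C,$$

so $y = 0.8(100-t) + C(100-t)^4$. The initial condition $y(0) = 10$ gives $10 = 80 + C \cdot 100^4$, i.e. $C = -7 \times 10^{-7}$, hence

$$y(t) = 0.8(100-t) - 7\times 10^{-7}\,(100-t)^4.$$

Setting $dy/dt = 0$ gives $(100-t)^3 = 0.8/(2.8\times 10^{-6})$, so $t \approx 34.1$ s and $y \approx 39.5$ litres, matching the stated answer.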
Penrose transform
From Encyclopedia of Mathematics
A construction from complex integral geometry, its definition very much resembling that of the Radon transform in real integral geometry. It was introduced by R. Penrose in the context of twistor
theory [a4] but many mathematicians have introduced transforms which may now be viewed in the same framework.
In its most general formulation, one starts with a correspondence between two spaces complex manifold but it can also be a CR-manifold or, indeed, a manifold with an involutive or formally integrable
structure so that it makes sense to say that a smooth submanifold of
In the classical case, Z is complex projective 3-space (twistor space), X is the Grassmannian of 2-planes in C^4 (compactified, complexified Minkowski space), and Y is the corresponding flag manifold. There is also a real form of this correspondence based on the fibration CP^3 → S^4; see [a3] for the Penrose transform in this real setting. There are several examples from representation theory, the prototype being due to W. Schmid [a5] (here the spaces are homogeneous for a semi-simple Lie group). In complex geometry there is also the Andreotti–Norguet transform [a1], with X the analytic space of compact cycles of an algebraic variety Z.
In all of these transforms, one starts with a cohomology class on Z: a Dolbeault class if Z is a complex manifold (cf. also Differential form), a de Rham class if the structure is trivial (cf. de Rham cohomology), and so on. Often, the coefficients are twisted by a holomorphic or CR-bundle on Z. Restricting the class to a cycle gives an element of a finite-dimensional vector space, namely the corresponding cohomology of the cycle. Supposing that the dimension of this vector space is constant as one varies the cycle, this gives a vector bundle on X, and the restrictions assemble into a section of it: this section is the transform of the original class.
In the classical case, or its real form, suppose that one considers a Dolbeault cohomology class on the part Z_U of twistor space lying over an open subset U of space-time; restricting it to the cycles produces a section of a bundle on U.
In any case, there are preferred local trivializations of the bundles involved and one can ask whether the resulting function is general. It will be holomorphic in the complex case and smooth in the
real case, but there are further conditions. Unlike the analogous Radon transform, these conditions apply locally, so that, for example, there results an isomorphism
H^1(Z_U, O(-2)) ≅ { φ : U → C : Δφ = 0 }
for any open subset U ⊆ S^4, where Δ is the conformally invariant Laplace operator (cf. Laplace operator). Similarly, the Penrose transform interprets classes in H^1(Z_U, O(-3)) as solutions of the massless Dirac equation on U.
In several cases, these transforms have appeared much earlier as integral transforms of holomorphic functions. For example, on this level, the Penrose description of harmonic functions is due to H.
Bateman in 1904. A proper understanding, however, arises only when these holomorphic functions are viewed as Čech cocycles representing a cohomology class. See [a2] for further discussion.
There is a "non-linear" version of the Penrose transform, due to R.S. Ward [a6] and known as the Ward transform. In the classical case, it identifies certain holomorphic vector bundles on Yang–Mills
field) on
[a1] A. Andreotti, F. Norguet, "La convexité holomorphe dans l'espace analytique des cycles d'une variété algébrique" Ann. Sci. Norm. Sup. Pisa , 21 (1967) pp. 31–82
[a2] M.G. Eastwood, "Introduction to Penrose transform" , The Penrose Transform and Analytic Cohomology in Representation Theory , Contemp. Math. , 154 , Amer. Math. Soc. (1993) pp. 71–75
[a3] N.J. Hitchin, "Linear field equations on self-dual spaces" Proc. R. Soc. Lond. , A370 (1980) pp. 173–191
[a4] R. Penrose, "On the twistor description of massless fields" , Complex manifold techniques in theoretical physics , Res. Notes Math. , 32 , Pitman (1979) pp. 55–91
[a5] W. Schmid, "Homogeneous complex manifolds and representations of semisimple Lie groups" , Representation Theory and Harmonic Analysis on Semisimple Lie Groups , Math. Surveys Monogr. , 31 ,
Amer. Math. Soc. (1989) pp. 223–286 (PhD Univ. Calif. Berkeley, 1967)
[a6] R.S. Ward, "On self-dual gauge fields" Phys. Lett. , A61 (1977) pp. 81–82
Manhattan Beach Algebra 1 Tutor
Find a Manhattan Beach Algebra 1 Tutor
...Sometimes, all it takes is to redirect the student to practice better time-management and study habits and then practice their subject diligently. Other students need greater attention paid to
their particular learning style for them to improve in their studies. I know that many students are vi...
22 Subjects: including algebra 1, English, reading, writing
...I am also very friendly and believe in every students' abilities to learn and achieve. I am a hard worker and will make sure I give my best to tutoring my students. I received my Master's in
Education last year and am currently a chemistry teacher in Van Nuys.
35 Subjects: including algebra 1, reading, English, chemistry
...In 1988, our gross was $1.4K. We were a non-profit, ASB-owned student company leasing space on the Stanford University campus. Our revenues included an annual bulk subscription from Stanford worth
54 Subjects: including algebra 1, English, writing, geometry
...Some of my projects have included making jackets, dresses, pants, purses, quilts, padded cellphone covers, and even small tents. My favorite part of sewing, and my specialty, is tailoring and
modifying clothes. I work on everything from hemming, replacing zippers, and repairing knitted items, to completely changing the look of clothes.
69 Subjects: including algebra 1, reading, chemistry, Spanish
...In today's educational system teachers are bombarded with oversized classrooms and often are unable to provide extra attention for the few children who need it. That is where I come in. My
strengths as a tutor are assessing the gaps in a student's understanding of the material and constructing creative ways to bridge those gaps.
11 Subjects: including algebra 1, chemistry, biology, algebra 2 | {"url":"http://www.purplemath.com/Manhattan_Beach_algebra_1_tutors.php","timestamp":"2014-04-19T20:16:13Z","content_type":null,"content_length":"24314","record_id":"<urn:uuid:eed1eb3e-afc3-40c5-b4b7-88c69ca90d2a>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
The performance of the adaptive optics (AO) is evaluated by simulating point spread functions (PSFs) using ESO's numerical AO simulation tool OCTOPUS. In future, we also plan to use L. Jolissaint's
analytical PSF simulation tool PAOLA.
For each simulated PSF we also provide the paramaters of an analytic fit to its radial intensity profile. This was obtained using eltpsffit, a custom-built tool to fit the numerically simulated PSFs
with a combination of various analytical functions. The purpose of this tool is to obtain speckle-free, more manageable representations of these (very large) PSF images.
Below we provide a brief description of the simulations. In addition, here is some general introductory and supplementary information on AO (more), types of AO, AO simulations and atmospheric
turbulence which may be helpful.
Finally, please note that the MAORY consortium has also made their simulated MCAO PSFs available online.
ESO PSF simulations
Here we give a basic description of ESO's numerical AO simulation tool OCTOPUS. Additional details are provided by Le Louarn et al. (2005) and Le Louarn et al. (2004).
The atmosphere is represented by a small number (≈ 10) of infinitely thin turbulent layers which act as phase screens. Each screen is set up randomly and independently from one another according to a
von Karman power spectrum of the refractive index fluctuations, i.e. a power-law spectrum with index -11/3 curtailed at finite inner and outer scales. The normalisation of each layer's power spectrum
is given by the atmosphere's Cn2 profile, which describes the distribution of the turbulence in the atmosphere as a function of height above the telescope.
The light from the (natural or laser) guide stars (NGS or LGS) is propagated to the wavefront sensors (WFS) using the geometrical optics approximation, i.e. tracking only phase fluctuations but not
amplitude fluctuations (scintillation) of the light. The finite height of any LGS is taken into account by propagating their light in a conic fashion, rather than cylindrically as for real
astronomical sources (like NGS). All WFS are assumed to be of the Shack-Hartmann type, i.e. an array of lenslets is used to produce multiple images of the guide star, each in its own sub-aperture on
the WFS detector. The images on the WFS are formed by calculating the squared modulus of the Fourier transform of the incoming complex amplitude at the location of each lenslet. These images are then
degraded by adding photon noise according to the guide star brightness and WFS integration time, as well as read-out noise. Subsequently, the centroids of the guide star images in all of the
sub-apertures are measured, and all measurements from all WFS are assembled into a single vector. (Note that the calculation of the centroids ignores LGS spot elongation because new algorithms are
currently being studied which will deal with this effect optimally.)
The position measurement vector is then multplied with the AO system's pre-computed command matrix. The command matrix is the inverse of the interaction matrix which describes how the guide star
positions in each of the WFS sub-apertures respond to a unit 'push' (or 'pull') from each of the deformable mirror's (DM) actuators. The result of multiplying the positional measurement vector with
the command matrix is therefore a vector describing the positions of the DM's actuators that are required to compensate for the measured wavefront distortions. The DM is modeled as a special phase
screen. Its shape (given by the actuators' positions and the assumed influence function) is subtracted from the phase of a wavefront coming in from the desired direction in the field of view (and
propageted cylindrically). The resulting wavefront is then Fourier transformed to create a quasi-instantaneous or short exposure PSF, which is recorded.
To account for the temporal evolution of the atmosphere the above process is then repeated many times. Each iteration starts by shifting the phase screens representing the atmosphere by an amount
determined by each layer's wind speed and the duration of one iteration in real time. Then the whole process of guide star wavefront propagation, wavefront sensing, DM shape calculation and short
exposure PSF generation is repeated. Actually, the DM shape that is subtracted from the wavefront in any given iteration is not the shape computed in the same iteration, but rather the shape computed
several iterations earlier. This delay accounts for the time required to integrate on the guide star (= 1 iteration), read out the WFS detectors, compute the actuator positions, and actuate the DM.
Each iteration results in a short exposure PSF. All of these are summed together to generate the final, 'long' exposure PSF. However, since the frame rate of an AO system is typically 500 – 1000 Hz
it takes thousands of iterations to cover just a few seconds of real time. Herein lies the main limitation of these simulations: since they are computationally so expensive it is impossible to
generate truly long exposure PSFs (covering many minutes of real time) and one is limited to a few seconds of exposure time. The result is that residual speckles are still visible in the final PSFs
which, of course, would not be present in real astronomical observations covering many minutes. This is obviously an undesirable feature when using these PSFs to simulate astronomical images. One way
to overcome this limitation is to fit the simulated PSFs with a suitably chosen analytical function, or a combination of several different functions, in order to obtain a speckle-free representation.
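To make the speckle-averaging point concrete, here is a self-contained Python toy (nothing like OCTOPUS in fidelity: no AO loop, no guide stars, just crude power-law phase screens over a circular pupil) that accumulates instantaneous PSFs into a long exposure:

```python
import numpy as np

N, n_frames = 128, 200
yy, xx = np.indices((N, N)) - N // 2
pupil = (np.hypot(xx, yy) < N // 4).astype(float)   # circular aperture

kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
k = np.hypot(kx, ky)
k[0, 0] = 1.0
filt = k ** (-11.0 / 6.0)      # sqrt of a -11/3 (Kolmogorov-like) power spectrum

long_exp = np.zeros((N, N))
for _ in range(n_frames):
    noise = np.fft.fft2(np.random.randn(N, N))
    phase = np.real(np.fft.ifft2(noise * filt))     # one random phase screen
    phase *= 5.0 / phase.std()                      # a few radians rms
    field = pupil * np.exp(1j * phase)
    long_exp += np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

long_exp /= n_frames   # individual frames are speckled; the average smooths out
```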
Note that the above simulation process describes a Monte Carlo simulation: two realisations of the same PSF will differ from one another because of the random nature of the simulated atmosphere and
because of the noise associated with the guide star images.
ESO simulation parameters
Obviously, there are a large number of input parameters that are required for the simulations. Due to the computational cost of these simulations it is impossible to explore this parameter space
comprehensively. Hence, only a relatively small number of input parameters (such as, e.g., the wavelength of observation) were varied to create different PSFs (see below). All other parameters were
held constant for all simulations. These parameters and their values are listed below.
Turbulence power spectrum: von Karman
Number of turbulent layers: 10
Height [m] Fractional Cn^2 Windspeed [m/s]
0 0.335 12.1
600 0.223 8.6
1200 0.112 18.6
2500 0.090 12.4
5000 0.080 8.0
9000 0.052 33.7
11500 0.045 23.2
12800 0.034 22.2
14500 0.019 8.0
18500 0.011 10.0
LGS positions [arcmin from field centre, i=0…4]
x = 3 cos( i×72°)
y = 3 sin( i×72°)
0.75 cos( i×72°), 0
0.75 sin( i×72°), 0
Number of sub-apertures per WFS
Number of CCD pixel per sub-aperture
Guide star flux and detector noise: none (infinite flux, no read-out noise)
Number of actuators per DM
Tilt with respect to layers
ESO PSF fitting
In order to obtain speckle-free, more manageable representations of the PSF images, the radial profiles of all PSFs in the database have been fit with combinations of analytical functions using the
custom-built tool eltpsffit. We only performed 1D fits of the PSF profiles, not 2D fits of the whole images, because generally full 2D fitting did not result in a significantly improved
representation of the PSF.
High-Strehl PSFs were typically fit with four to five components: an (obstructed) Airy disk to represent the diffraction limited core and additional Lorentzian or Moffat function components to
represent the halo. Low-Strehl PSFs usually required only two to three Lorentzian or Moffat components. Our (somewhat arbitrary) aim was to obtain fits that lie within 5% of the data at all radii
(not just in the centre). This was generally achieved except for the mid-IR LTAO PSFs which are much more difficult to fit.
Note that these fits are not perfect representations of the PSFs. Different choices for the fitting weights, number or type of the fit components, etc., may well lead to more accurate descriptions,
especially when considering only a restricted range of radii. Hence, depending on your application, you may wish to use eltpsffit to produce your own fits.
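eltpsffit itself is a custom tool, but the idea is easy to mimic. Here is a rough Python sketch using astropy (the component choice and starting values are illustrative, and 'psf.fits' stands for any image from the database below):

```python
import numpy as np
from astropy.io import fits
from astropy.modeling import models, fitting

psf = fits.getdata('psf.fits')                  # any simulated PSF image
cy, cx = np.array(psf.shape) // 2
yy, xx = np.indices(psf.shape)
r = np.hypot(xx - cx, yy - cy).astype(int)

# azimuthally averaged radial profile
profile = np.bincount(r.ravel(), weights=psf.ravel()) / np.bincount(r.ravel())
radii = np.arange(profile.size, dtype=float)

# two-component fit (low-Strehl style): Moffat core plus Lorentzian halo
model = (models.Moffat1D(profile.max(), 0.0, 5.0, 2.5)
         + models.Lorentz1D(0.01 * profile.max(), 0.0, 200.0))
best = fitting.LevMarLSQFitter()(model, radii, profile, maxiter=1000)
print(best)
```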
ESO PSF database
All PSFs are stored as fits images. The PSF image database is organised in a directory tree structure, where the levels correspond to those input parameters that were varied to create different PSFs:
• type of AO
• zenith distance
• wavelength
• size of the field of view
• position within field of view
At a given level of the tree you will find one directory for each value of the level's corresponding parameter for which simulations were performed. The actual PSF images are contained in the lowest
level directories. To find the PSF that corresponds to a particular combination of parameter values simply work your way down the directory tree.
Note that due to the computational cost of these simulations it is not possible to homogeneously populate even this relatively small 5-dimensional parameter space (let alone the full parameter space,
see above). PSFs were generated only for those combinations of parameter values that have so far been deemed interesting or necessary. Hence some parts of parameter space are populated more densely
than others, while other parts are entirely empty. Those combinations of parameter values that have so far not been simulated do not have corresponding branches in the database directory tree.
An existing branch usually only has one PSF image in its lowest level directory. However, for a few combinations of parameter values the simulations have been repeated, resulting in more than one
realisation of the same PSF. In these cases the lowest level directory contains more than one PSF image. As explained above, all PSFs have been fitted with analytical functions in order to obtain a
speckle-free representation. The files containing the results of the fits are also stored in the lowest level directories. These files have the same root filename as the corresponding PSF image but
different extensions. Their contents are explained here.
Note that the FITS header of each PSF image contains a record of the above parameter values that were used to simulate this PSF. The header also contains the usual basic image information such as the
image's pixel size in mas (keywords CDELT1 and CDELT2).
The AO type 'NOAO' refers to the seeing-limited case (read: no AO). Zenith distance is given in degrees. The wavelength of observation is given in m in the image headers, but is listed by the
corresponding filter name in the database directory tree. The parameter 'size of the field of view' only applies to GLAO PSFs and is given in arcmin. This level is absent in the LTAO and NOAO parts
of the directory tree. The parameter 'position within field of view' consists of an x and a y coordinate given in arcsec for LTAO and NOAO and in units of the size of the field of view for GLAO.
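As an illustration, selecting a PSF and reading its pixel scale might look like the following (the directory and file names here are hypothetical; consult the actual tree for the naming in use):

from astropy.io import fits

ao, zd, filt, pos = "LTAO", "30", "K", "x0.0_y0.0"   # desired parameter values
path = f"psf_db/{ao}/{zd}/{filt}/{pos}/psf.fits"     # walk down the tree

hdr = fits.getheader(path)
pix_mas = hdr["CDELT1"]   # pixel size in mas (see keywords above)
print(path, pix_mas)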
Below is the database directory tree.
ESO PSF properties
These plots show how the PSF depends on the various parameters above.
PAOLA PSF simulations
PAOLA is an analytical PSF modelling tool developed by L. Jolissaint. The reasons to explore analytical PSF simulations in addition to ESO's numerical simulations include: (i) they are free of speckle
noise; (ii) the code is much faster, so one can explore a larger parameter space; and (iii) they allow one to test the sensitivity of any DRM conclusions to the details of the PSF.
The fundamentals of analytical PSF modelling are described by Jolissaint et al. (2006).
So far, this work has not been completed... | {"url":"http://www.eso.org/sci/facilities/eelt/science/drm/tech_data/ao/","timestamp":"2014-04-18T01:14:46Z","content_type":null,"content_length":"81726","record_id":"<urn:uuid:e331ec5d-daa5-4fa9-a4cc-b5e7fe0dca6a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Faster suffix sorting
Results 1 - 10 of 35
- ACM COMPUTING SURVEYS , 2007
"... Full-text indexes provide fast substring search over large text collections. A serious problem of these indexes has traditionally been their space consumption. A recent trend is to develop
indexes that exploit the compressibility of the text, so that their size is a function of the compressed text l ..."
Cited by 173 (78 self)
Add to MetaCart
Full-text indexes provide fast substring search over large text collections. A serious problem of these indexes has traditionally been their space consumption. A recent trend is to develop indexes
that exploit the compressibility of the text, so that their size is a function of the compressed text length. This concept has evolved into self-indexes, which in addition contain enough information
to reproduce any text portion, so they replace the text. The exciting possibility of an index that takes space close to that of the compressed text, replaces it, and in addition provides fast search
over it, has triggered a wealth of activity and produced surprising results in a very short time, and radically changed the status of this area in less than five years. The most successful indexes
nowadays are able to obtain almost optimal space and search time simultaneously. In this paper we present the main concepts underlying self-indexes. We explain the relationship between text entropy
and regularities that show up in index structures and permit compressing them. Then we cover the most relevant self-indexes up to date, focusing on the essential aspects on how they exploit the text
compressibility and how they solve efficiently various search problems. We aim at giving the theoretical background to understand and follow the developments in this area.
, 2003
"... Abstract. Suffix trees and suffix arrays are widely used and largely interchangeable index structures on strings and sequences. Practitioners prefer suffix arrays due to their simplicity and
space efficiency while theoreticians use suffix trees due to linear-time construction algorithms and more exp ..."
Cited by 149 (6 self)
Add to MetaCart
Abstract. Suffix trees and suffix arrays are widely used and largely interchangeable index structures on strings and sequences. Practitioners prefer suffix arrays due to their simplicity and space
efficiency while theoreticians use suffix trees due to linear-time construction algorithms and more explicit structure. We narrow this gap between theory and practice with a simple linear-time
construction algorithm for suffix arrays. The simplicity is demonstrated with a C++ implementation of 50 effective lines of code. The algorithm is called DC3, which stems from the central underlying
concept of difference cover. This view leads to a generalized algorithm, DC, that allows a space-efficient implementation and, moreover, supports the choice of a space–time tradeoff. For any v ∈ [1,
√ n], it runs in O(vn) time using O(n / √ v) space in addition to the input string and the suffix array. We also present variants of the algorithm for several parallel and hierarchical memory models
of computation. The algorithms for BSP and EREW-PRAM models are asymptotically faster than all previous suffix tree or array construction algorithms.
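The 50-line DC3 implementation is in the paper itself; for readers who just want the object being built to be concrete, a much simpler prefix-doubling construction (O(n log² n), not DC3) can be sketched in Python:

def suffix_array(s):
    # Sort suffixes by their first k characters, doubling k each round.
    n = len(s)
    sa, rank = list(range(n)), [ord(c) for c in s]
    k = 1
    while n > 1:
        key = lambda i: (rank[i], rank[i + k] if i + k < n else -1)
        sa.sort(key=key)
        new = [0] * n
        for a, b in zip(sa, sa[1:]):
            new[b] = new[a] + (key(a) < key(b))
        rank, k = new, 2 * k
        if rank[sa[-1]] == n - 1:   # all ranks distinct: fully sorted
            break
    return sa

print(suffix_array("banana"))   # [5, 3, 1, 0, 4, 2]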
- Journal of Discrete Algorithms , 2003
"... Abstract. We present a linear time algorithm to sort all the suffixes of a string over a large alphabet of integers. The sorted order of suffixes of a string is also called suffix array, a data
structure introduced by Manber and Myers that has numerous applications in pattern matching, string proces ..."
Cited by 73 (1 self)
Add to MetaCart
Abstract. We present a linear time algorithm to sort all the suffixes of a string over a large alphabet of integers. The sorted order of suffixes of a string is also called suffix array, a data
structure introduced by Manber and Myers that has numerous applications in pattern matching, string processing, and computational biology. Though the suffix tree of a string can be constructed in
linear time and the sorted order of suffixes derived from it, a direct algorithm for suffix sorting is of great interest due to the space requirements of suffix trees. Our result improves upon the
best known direct algorithm for suffix sorting, which takes O(n log n) time. We also show how to construct suffix trees in linear time from our suffix sorting result. Apart from being simple and
applicable for alphabets not necessarily of fixed size, this method of constructing suffix trees is more space efficient.
"... In this paper we consider the problem of computing the suffix array of a text T [1, n]. This problem consists in sorting the suffixes of T in lexicographic order. The suffix array [16] (or pat
array [9]) is a simple, easy to code, and elegant data structure used for several fundamental string matchi ..."
Cited by 59 (4 self)
Add to MetaCart
In this paper we consider the problem of computing the suffix array of a text T [1, n]. This problem consists in sorting the suffixes of T in lexicographic order. The suffix array [16] (or pat array
[9]) is a simple, easy to code, and elegant data structure used for several fundamental string matching problems involving both linguistic texts and biological data [4, 11]. Recently, the interest in
this data structure has been revitalized by its use as a building block for three novel applications: (1) the Burrows-Wheeler compression algorithm [3], which is a provably [17] and practically [20]
effective compression tool; (2) the construction of succinct [10, 19] and compressed [7, 8] indexes; the latter can store both the input text and its full-text index using roughly the same space used
by traditional compressors for the text alone; and (3) algorithms for clustering and ranking the answers to user queries in web-search engines [22]. In all these applications the construction of the
suffix array is the computational bottleneck both in time and space. This motivated our interest in designing yet another suffix array construction algorithm which is fast and "lightweight" in the
sense that it uses small space...
- In Proc. Workshop on Algorithms in Bioinformatics, in Lecture Notes in Computer Science , 2002
"... Abstract. In large scale applications as computational genome analysis, the space requirement of the suffix tree is a severe drawback. In this paper, we present a uniform framework that enables
us to systematically replace every string processing algorithm that is based on a bottomup traversal of a ..."
Cited by 43 (5 self)
Add to MetaCart
Abstract. In large scale applications as computational genome analysis, the space requirement of the suffix tree is a severe drawback. In this paper, we present a uniform framework that enables us to
systematically replace every string processing algorithm that is based on a bottomup traversal of a suffix tree by a corresponding algorithm based on an enhanced suffix array (a suffix array enhanced
with the lcp-table). In this framework, we will show how maximal, supermaximal, and tandem repeats, as well as maximal unique matches can be efficiently computed. Because enhanced suffix arrays
require much less space than suffix trees, very large genomes can now be indexed and analyzed, a task which was not feasible before. Experimental results demonstrate that our programs require not
only less space but also much less time than other programs developed for the same tasks.
- ACM Computing Surveys , 2007
"... In 1990, Manber and Myers proposed suffix arrays as a space-saving alternative to suffix trees and described the first algorithms for suffix array construction and use. Since that time, and
especially in the last few years, suffix array construction algorithms have proliferated in bewildering abunda ..."
Cited by 39 (10 self)
Add to MetaCart
In 1990, Manber and Myers proposed suffix arrays as a space-saving alternative to suffix trees and described the first algorithms for suffix array construction and use. Since that time, and
especially in the last few years, suffix array construction algorithms have proliferated in bewildering abundance. This survey paper attempts to provide simple high-level descriptions of these
numerous algorithms that highlight both their distinctive features and their commonalities, while avoiding as much as possible the complexities of implementation details. New hybrid algorithms are
also described. We provide comparisons of the algorithms ’ worst-case time complexity and use of additional space, together with results of recent experimental test runs on many of their
- In Proceedings of the Ninth International Symposium on String Processing and Information Retrieval. Springer-Verlag, Lecture Notes in Computer Science , 2002
"... Using the suffix tree of a string S, decision queries of the type "Is P a substring of S?" can be answered in O(|P|) time and enumeration queries of the type "Where are all z occurrences of P in
S?" can be answered in O(|P|+z) time, totally independent of the size of S. However, in large scale appli ..."
Cited by 38 (1 self)
Add to MetaCart
Using the suffix tree of a string S, decision queries of the type "Is P a substring of S?" can be answered in O(|P|) time and enumeration queries of the type "Where are all z occurrences of P in S?"
can be answered in O(|P|+z) time, totally independent of the size of S. However, in large scale applications such as genome analysis, the space requirements of the suffix tree are a severe drawback.
The suffix array is a more space economical index structure. Using it and an additional table, Manber and Myers (1993) showed that decision queries and enumeration queries can be answered in
O(|P|+log |S|) and O(|P|+log |S|+z) time, respectively, but no optimal time algorithms are known. In this paper, we show how to achieve the optimal O(|P|) and O(|P|+z) time bounds for the suffix
array. Our approach is not confined to exact pattern matching. In fact, it can be used to efficiently solve all problems that are usually solved by a top-down traversal of the suffix tree. Experiments show that
our method is not only of theoretical interest but also of practical relevance.
- In ESEC/FSE , 2005
"... Cloning in software systems is known to create problems during software maintenance. Several techniques have been proposed to detect the same or similar code fragments in software, so-called
simple clones. While the knowledge of simple clones is useful, detecting design-level similarities in softwar ..."
Cited by 34 (7 self)
Add to MetaCart
Cloning in software systems is known to create problems during software maintenance. Several techniques have been proposed to detect the same or similar code fragments in software, so-called simple
clones. While the knowledge of simple clones is useful, detecting design-level similarities in software could ease maintenance even further, and also help us identify reuse opportunities. We observed
that recurring patterns of simple clones – so-called structural clones- often indicate the presence of interesting design-level similarities. An example would be patterns of collaborating classes or
components. Finding structural clones that signify potentially useful design information requires efficient techniques to analyze the bulk of simple clone data and making non-trivial inferences based
on the abstracted information. In this paper, we describe a practical solution to the problem of
, 2004
"... Abstract. In this paper we consider the linear time algorithm of Kasai et al. [6] for the computation of the Longest Common Prefix (LCP) array given the text and the suffix array. We show that
this algorithm can be implemented without any auxiliary array in addition to the ones required for the inpu ..."
Cited by 31 (3 self)
Add to MetaCart
Abstract. In this paper we consider the linear time algorithm of Kasai et al. [6] for the computation of the Longest Common Prefix (LCP) array given the text and the suffix array. We show that this
algorithm can be implemented without any auxiliary array in addition to the ones required for the input (the text and the suffix array) and the output (the LCP array). Thus, for a text of length n,
we reduce the space occupancy of this algorithm from 13n bytes to 9n bytes. We also consider the problem of computing the LCP array by “overwriting” the suffix array. For this problem we propose an
algorithm whose space occupancy can be bounded in terms of the empirical entropy of the input text. Experiments show that for linguistic texts our algorithm uses roughly 7n bytes. Our algorithm makes
use of the Burrows-Wheeler Transform even if it does not represent any data in compressed form. To our knowledge this is the first application of the Burrows-Wheeler Transform outside the domain of
data compression. The source code for the algorithms described in this paper has been included in the lightweight suffix sorting package [13] which is freely available under the GNU GPL.
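For reference, the Kasai et al. [6] procedure whose memory footprint the paper reduces is short enough to quote in its textbook form (this is the plain version, not the space-optimized variant described above):

def lcp_array(s, sa):
    # lcp[i] = length of the common prefix of suffixes sa[i] and sa[i+1].
    n = len(s)
    rank = [0] * n
    for i, suf in enumerate(sa):
        rank[suf] = i
    lcp = [0] * max(n - 1, 0)
    h = 0
    for i in range(n):          # visit suffixes in text order
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i] - 1] = h
            if h:
                h -= 1          # the next LCP can shrink by at most one
        else:
            h = 0
    return lcp

print(lcp_array("banana", [5, 3, 1, 0, 4, 2]))   # [1, 3, 0, 0, 2]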
- 14th Annual Symposium on Combinatorial Pattern Matching , 2003
"... We describe an algorithm that, for any v 2 [2; n], constructs the suffix array of a string of length n in O(vn + n log n) time using O(v + n= p v) space in addition to the input (the string) and
the output (the suffix array). By setting v = log n, we obtain an O(n log n) time algorithm using O n= p ..."
Cited by 25 (5 self)
Add to MetaCart
We describe an algorithm that, for any v ∈ [2, n], constructs the suffix array of a string of length n in O(vn + n log n) time using O(v + n/√v) space in addition to the input (the string) and the
output (the suffix array). By setting v = log n, we obtain an O(n log n) time algorithm using O(n/√log n) space. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=771377","timestamp":"2014-04-24T20:58:57Z","content_type":null,"content_length":"40135","record_id":"<urn:uuid:2f9e4084-8e6a-46f5-975c-01626fe45f3d>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
A research company has been hired by a realty company to do an analysis of heating cost of homes in the region. The realty company wanted to be able to predict the heating cost of a typical single-family home. The realty company was constantly being asked questions regarding heating costs by potential home buyers. It was believed that these variables would impact heating costs (Y'): mean daily outdoor temperature (X1), the number of inches of insulation (X2), and the age in years of the furnace ...
I have seen the formula but I could not see the statement. Maybe you need to make it complete.
Asked 11/7/2011 6:39:54 PM
0 Answers/Comments
There are no new answers. | {"url":"http://www.weegy.com/home.aspx?ConversationId=CC6FCD05","timestamp":"2014-04-16T19:43:40Z","content_type":null,"content_length":"38019","record_id":"<urn:uuid:48e9c64d-0e10-4838-b6dd-7226e071bbf5>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Averaging coefficients across datasets
Re: st: Averaging coefficients across datasets
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Averaging coefficients across datasets
Date Fri, 13 Feb 2009 11:37:43 +0000 (GMT)
--- On Thu, 12/2/09, Carlos Rodriguez wrote:
> I have 10 imputed datasets and need to run regressions on each of them
> and then average the coefficients on the explanatory variables across
> them. Unformatunately for my purposes, neither Zelig nor miest or Clarif
> would work. I'm doing it by hand: running each of my many regressions
> on each of my 10 datasets and then calculating the average for each of
> my many explanatory variables. This is obviously highly inefficient, so
> I thought I'd ask you if there's a command that would do it in Stata
> automatically or some more efficient way to go about it?
It looks like you are trying to do multiple imputation by hand. Some time ago I sent an example to the Statalist on how to do that when all imputed files are stacked into one file, but you can easily adapt that example to your own problem. The trick I used is to store the coefficients in matrices, and use Mata to do all the computations. The formulas I used can be found here: http://www.stat.psu.edu/~jls/mifaq.html#howto .
Hope this helps,
*------------------------- begin example -----------------------------
sysuse nlsw88, clear
replace wage = . if uniform() < invlogit(5 - .5*grade)
ice wage grade age union, clear m(5)
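* Fit the same model in each of the m = 5 imputed datasets, keeping the
* coefficient vectors (b) and the diagonals of the variance matrices (V).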
reg wage grade age union if _mj == 1
matrix b = e(b)'
matrix v = e(V)
matrix V = vecdiag(v)'
reg wage grade age union if _mj == 2
matrix b = b, e(b)'
matrix v = e(V)
matrix V = V, vecdiag(v)'
reg wage grade age union if _mj == 3
matrix b = b, e(b)'
matrix v = e(V)
matrix V = V, vecdiag(v)'
reg wage grade age union if _mj == 4
matrix b = b, e(b)'
matrix v = e(V)
matrix V = V, vecdiag(v)'
reg wage grade age union if _mj == 5
matrix b = b, e(b)'
matrix v = e(V)
matrix V = V, vecdiag(v)'
mata
b = st_matrix("b")'
V = st_matrix("V")'
Qbar = mean(b)'
Ubar = mean(V)'
B = diagonal(variance(b))
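// Rubin's rules: total variance T = within-imputation variance (Ubar)
// plus (1 + 1/m) times the between-imputation variance B; m = 5 gives 1.2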
T = Ubar :+ 1.2:*B
se = sqrt(T)
df= 4:* (1 :+ (5:*Ubar):/(6:*B)) :* (1 :+ (5:*Ubar):/(6:*B))
t = Qbar:/se
p = 2*ttail(df, abs(t))
ci = Qbar :- invttail(df,0.025):*se, Qbar :+ invttail(df,0.025):*se
result = Qbar, se, t, df, p, ci
st_matrix("result", result)
matrix rownames result = grade age union _cons
matrix colnames result = coef std_err t df p lb ub
matrix list result
*--------------------------- end example --------------------------
(For more on how to use examples I sent to the Statalist, see
http://home.fsw.vu.nl/m.buis/stata/exampleFAQ.html )
Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands
visiting address:
Buitenveldertselaan 3 (Metropolitan), room N515
+31 20 5986715
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-02/msg00520.html","timestamp":"2014-04-18T05:35:19Z","content_type":null,"content_length":"8498","record_id":"<urn:uuid:db8bc229-8fb8-47ab-a2c0-c950e9a36652>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
| {"url":"http://openstudy.com/users/piscez.in/answered","timestamp":"2014-04-20T14:06:48Z","content_type":null,"content_length":"97835","record_id":"<urn:uuid:71acc478-74ed-4495-925c-a79939b253fd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Should Mathematics Be Pursued for Its Own Sake, Not for Its Social Utility Summary
This section contains 5,002 words
(approx. 17 pages at 300 words per page)
Should Mathematics Be Pursued for Its Own Sake, Not for Its Social Utility?
Viewpoint: Yes, mathematics should be pursued for its own sake, as doing so will eventually present practical benefits to society.
Viewpoint: No, mathematics should first be pursued by everyone for its societal utility, with the mathematically gifted being encouraged to continue the pursuit of math for math's sake at advanced
One of the great problems in mathematics is not a "math problem" at all; rather, it is a question of the means by which the study of mathematics should be pursued. This may be characterized as a
debate between "pure" mathematics, or mathematics for its own sake, and applied mathematics, the use of mathematics for a specific purpose, as in business or engineering. This is a concern...
| {"url":"http://www.bookrags.com/research/should-mathematics-be-pursued-for-i-sind-03/","timestamp":"2014-04-18T09:26:25Z","content_type":null,"content_length":"33951","record_id":"<urn:uuid:b8c9f432-2b84-4a03-81d9-b5eb508ca3ae>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
The Development of Categorical Logic
- COMMUNICATIONS IN MATHEMATICAL PHYSICS , 2009
"... The aim of this paper is to relate algebraic quantum mechanics to topos theory, so as to construct new foundations for quantum logic and quantum spaces. Motivated by Bohr’s idea that the
empirical content of quantum physics is accessible only through classical physics, we show how a noncommutative C ..."
Cited by 9 (1 self)
Add to MetaCart
The aim of this paper is to relate algebraic quantum mechanics to topos theory, so as to construct new foundations for quantum logic and quantum spaces. Motivated by Bohr’s idea that the empirical
content of quantum physics is accessible only through classical physics, we show how a noncommutative C*-algebra of observables A induces a topos T (A) in which the amalgamation of all of its
commutative subalgebras comprises a single commutative C*-algebra A. According to the constructive Gelfand duality theorem of Banaschewski and Mulvey, the latter has an internal spectrum �(A) in T
(A), which in our approach plays the role of the quantum phase space of the system. Thus we associate a locale (which is the topos-theoretical notion of a space and which intrinsically carries the
intuitionistic logical structure of a Heyting algebra) to a C*-algebra (which is the noncommutative notion of a space). In this setting, states on A become probability measures (more precisely,
valuations) on Σ, and self-adjoint elements of A define continuous functions (more precisely, locale maps) from Σ to Scott’s interval domain. Noting that open subsets of Σ(A) correspond to
propositions about the system, the pairing map that assigns a (generalized) truth value to a state and a proposition assumes an extremely simple categorical form. Formulated in this way, the quantum
theory defined by A is essentially turned into a classical theory, internal to the topos T (A). These results were inspired by the topos-theoretic approach to quantum physics proposed by Butterfield
and Isham, as recently generalized by Döring and Isham.
"... In this paper it is shown that the ordered structure of the Dedekind real numbers is effectively homogeneous in any topos with natural numbers object. This result is obtained within the
framework of local set theory. ..."
Add to MetaCart
In this paper it is shown that the ordered structure of the Dedekind real numbers is effectively homogeneous in any topos with natural numbers object. This result is obtained within the framework of
local set theory.
"... Abstract. This paper identifies constructions needed for modeling product structure, shows which ones can be represented in OWL2 and suggests extensions for those that do not have OWL2
representations. A simplified mobile robot specification is formalized as a Knowledge Base (KB) in an extended logi ..."
Add to MetaCart
Abstract. This paper identifies constructions needed for modeling product structure, shows which ones can be represented in OWL2 and suggests extensions for those that do not have OWL2
representations. A simplified mobile robot specification is formalized as a Knowledge Base (KB) in an extended logic. A KB is constructed from a signature of types (classes), typed properties, and
typed variables and operators. Modeling product structure requires part decompositions, connections between parts, data valued properties, typed operations with variables, and constraints between
property values. Data valued properties represent observable properties assumed or measured regarding a product. Operations with variables are used to define constraint properties such as fact that
the total product weight is the some of weights of product components. Constructors, which take arguments from the signature and have properties as values, are used to specify families of properties
such as part properties. These constructions are illustrated for the mobile robot. Operators and variables are represented as type, property, and operations within type theory. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=7077728","timestamp":"2014-04-19T01:59:14Z","content_type":null,"content_length":"17897","record_id":"<urn:uuid:98c18273-a272-44dc-b7ac-b095372a85b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stone Park Algebra 2 Tutor
Find a Stone Park Algebra 2 Tutor
...I work patiently with student according to their need to make the learning productive and fun. I have tutored Trigonometry to many high school students. I have worked with many students who
have difficulty memorizing unit circle in degrees, and radians with their values.
11 Subjects: including algebra 2, calculus, geometry, trigonometry
I'm a third year PhD student at University of Illinois at Chicago studying Probability and Statistics with an interest in Game Theory. I am also a TA and I enjoy the teaching aspect of my program
very much. I have been an active math tutor both professionally and for friends for the past few years and I love the experience of helping someone gain an understanding of mathematics.
22 Subjects: including algebra 2, calculus, geometry, statistics
...Most people who meet me can't believe that I'm actually an engineer because of my outgoing personality. I have been tutoring since I was in high school and I have a desire to help people
understand math and science much better. I'm not happy until you "get it" and you can "solve it'!!Thanks for your consideration.
12 Subjects: including algebra 2, physics, trigonometry, precalculus
...My methods have proven to be very successful. I have a Masters degree in applied mathematics and most coursework for a doctorate. This includes linear algebra, modern algebra, mathematical
physics, topology, complex and real variable analysis and functional analysis in addition to calculus and differential equations.
18 Subjects: including algebra 2, physics, calculus, geometry
...World History develops critical thinking and asks students to think through issues unique to each time and place. Art demands that we tap into our cultural side to understand how we as people
create and share ideas visually that are central to our experiences and beliefs. Archaeology takes a third approach of interpreting the role of objects in our lives.
10 Subjects: including algebra 2, calculus, geometry, algebra 1
Nearby Cities With algebra 2 Tutor
Bellwood, IL algebra 2 Tutors
Berkeley, IL algebra 2 Tutors
Broadview, IL algebra 2 Tutors
Forest Park, IL algebra 2 Tutors
Franklin Park, IL algebra 2 Tutors
Hillside, IL algebra 2 Tutors
Hines, IL algebra 2 Tutors
Maywood, IL algebra 2 Tutors
Melrose Park algebra 2 Tutors
North Riverside, IL algebra 2 Tutors
Oakbrook Terrace, IL algebra 2 Tutors
River Forest algebra 2 Tutors
River Grove algebra 2 Tutors
Riverside, IL algebra 2 Tutors
Westchester algebra 2 Tutors | {"url":"http://www.purplemath.com/Stone_Park_Algebra_2_tutors.php","timestamp":"2014-04-19T12:20:38Z","content_type":null,"content_length":"24307","record_id":"<urn:uuid:e05afc61-8e0d-4058-8b25-84d89bba689b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seventh Grade Math Worksheets
7th grade math introduces kids to many new concepts that build heavily on what was taught in the earlier grades. The math worksheets for Grade 7 available online are an effective way to get kids to
practice math and sharpen their math skills.
Make peace with proportion problems with this easy, fun proportion worksheet! With a given set of numbers, students have to ...
Knowing how to calculate proportions is an integral part of our daily lives! Its application in simple tasks like cooking, exercising, and driving ...
Just as its name suggests, ‘More Proportion’ is a simple, fun, proportion worksheet that offers plenty of practice to the kids.
Proportions are used in our daily lives to estimate prices of things and compare a figure with another. You can apply proportions in the simplest of daily tasks, ...
‘Proportional or Not’ is a beginner-level proportion worksheet that tests students’ preliminary understanding of the concept.
How important is it to learn proportion? Is it important at all? You will have the answer once you are introduced to the concept of proportion in your class.
After solving elementary proportion worksheets like ‘Plenty of Proportion’ and ‘More Proportions’ one after another, ...
This is a cracking tables worksheet, perfect for 7th graders, that needs to be filled with negative multiples. The grid has a few blank cells and a few already-filled cells...
Repeated multiplication of the same number by itself is the theory of exponents explained simply. Any number expressed by exponent 1...
Mixed equations are a great way to learn the complex steps leading to advanced math problems. 7th graders are introduced to this complex yet cool...
How easy is it for you to tell the answer to this one: 2x1x3x2x4x5x10? Knowing tables and counting them effortlessly gives one a grip over the cool subject of math.
Advanced math problems and equations are always made easy if there is enough expertise and skill in solving mixed equations.
7th grade math is not an easy job! It is the important stage where students learn to synthesize many concepts that they've learnt in earlier grades...
Reducing fractions is a great way to recollect earlier learnt concepts in math like fractions and divisions. The more 7th graders practice...
The theory of exponents can be explained very simply. If a number is multiplied by itself repeatedly, it is definitely a math of exponents – also called powers.
Interesting figures are expressed in percentages almost 99% of times. Some examples are 94% of life on Earth is aquatic, 70% of the planet is ocean, 50% of the...
“Characteristics of Solids” is a great worksheet that lets students explore and manipulate the basic solid shapes in their three dimensional forms before discussing their characteristics.
‘Cat and Rat’ is a cool geometry worksheet that shows kids how to design a maze of their own using only a grid and some tracing paper!
Drawing angles is easy-peasy when you have a protractor, but it is also possible to draw angles without one. Challenge yourself to try and construct the angles in this worksheet without a protractor.
"Angles of Polygons” is a simple yet comprehensive worksheet that familiarizes sixth graders with the formula used to determine the sum of the angles of a polygon. See more
Do your students find geometry formulae difficult to remember? Well that’s sure to change with this geometry worksheet, “Area of Parallelograms”. See more
This simple geometry worksheet will help your students understand how the formula for the area of a trapezoid is derived.See more
One of the earliest geometry formulae learnt is that for the area of a triangle. Help your young learner arrive at the formula herself with this simple geometry worksheet for kids. See more
“Area of Circles” is an incredibly simple geometry worksheet that will have your budding mathematician deriving geometry formulae in no time!See more
Find out with our ‘The Whole Truth’ worksheet! In this worksheet, kids need to read each of the sentences written and fill in the blanks. See more
Do your kids know all about the different angles? Find out with our ‘Name That Angle’ worksheet. See more
This simple ‘True or False?’ printable geometry worksheet will be able to test how much kids know about angles. See more
‘Label the Diagram’ is a printable geometry worksheet in which students need to label the figure drawn as per the instructions provided. See more
There are so many geometric terms! See if your kids know them all with our ‘Find Your Own Angles’ worksheet. See more
Free and Printable Math Worksheets for 7th Graders
Math for 7th graders can be intimidating. From learning new concepts like the Pythagorean Theorem to carrying out other advanced operations like solving proportions and equations, 7th graders have a
lot to grasp and understand.
There is a plethora of free math worksheets online that are easy to print and hand out. These worksheets include math problems and sums based on the broad curriculum standards of each grade. For
teachers and homeschooling parents who are on the lookout for ways to get 7th graders to practice math, these math worksheets are a useful resource. They are also a convenient way to monitor each
child’s progress and identify their problem areas.
Math Worksheets and Games for Grade 7
Make math fun for 7th graders. Engage them with fun and exciting online math games that give them the opportunity to solve math problems and have fun at the same time. The virtual world here at Math
Blaster is filled with a variety of cool math games for kids. The perfect mix of entertainment and education, these games encourage kids to solve tough math problems and gain more points to advance
in the game. Get your kids hooked on these educational games and watch their math grades go up! | {"url":"http://www.mathblaster.com/teachers/math-worksheets/7th-grade-math-worksheets","timestamp":"2014-04-20T08:14:37Z","content_type":null,"content_length":"119375","record_id":"<urn:uuid:f9488d78-0842-42f5-8197-dd4edf750a45>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Show that the vectors (1,0,1,0,1,0) and (0,1,1,1,1,-1) generate the same subspace in R^6 as the vectors (4,-5,-1,-5,-1,5) and (-3,2,-1,2,-1,-2). Would very much appreciate some help: how do I show that two vector pairs generate the same subspace? How do I calculate what subspace one pair of vectors generates?
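One standard check: two pairs span the same subspace exactly when stacking all four vectors does not increase the rank. A quick numerical version with numpy (the variable names are ours):

import numpy as np

u1 = [1, 0, 1, 0, 1, 0]
u2 = [0, 1, 1, 1, 1, -1]
v1 = [4, -5, -1, -5, -1, 5]
v2 = [-3, 2, -1, 2, -1, -2]

rank = np.linalg.matrix_rank
same_span = (rank(np.array([u1, u2])) ==
             rank(np.array([v1, v2])) ==
             rank(np.array([u1, u2, v1, v2])))
print(same_span)   # True: both pairs span the same 2-dimensional subspace

By hand, v1 = 4*u1 - 5*u2 and v2 = -3*u1 + 2*u2, and these two combinations are independent, which gives the same conclusion.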
that two vectorpairs generates the same subspace? How do I calculate what subspace one pair of vectors generates?
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/510957b5e4b0d9aa3c460f94","timestamp":"2014-04-16T19:59:55Z","content_type":null,"content_length":"25412","record_id":"<urn:uuid:aadb253c-772c-47cb-a49c-44b152664899>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
User user36162
bio website
visits member for 9 months
seen Feb 9 at 8:40
stats profile views 61
2 accepted Riesz potential inequality
2 asked Riesz potential inequality
9 accepted Integral and conformal mappings II
Nov Integral and conformal mappings II
8 comment Probably it is a correct construction.
Nov Integral and conformal mappings II
8 revised added 109 characters in body
Nov Integral and conformal mappings II
8 comment If $D_n$ is smooth (for example $C^2$), then $|f'(z)|\le C_n$, so why does the integral diverge? I am assuming that the $D_n$ are images of $n/(n+1) D$ under a $C^2$, $K$-q.c. diffeomorphic
mapping of the unit disk onto itself.
Nov Integral and conformal mappings II
8 revised added 7 characters in body
8 asked Integral and conformal mappings II
8 accepted Uniform convergence of conformal mappings
Nov Uniform convergence of conformal mappings
8 comment Yes you right, I understand the point. Thanks.
7 awarded Commentator
Nov Uniform convergence of conformal mappings
7 revised deleted 41 characters in body
Nov Uniform convergence of conformal mappings
7 revised added 46 characters in body
7 asked Uniform convergence of conformal mappings
2 accepted Holder class of analytic functions
Oct Holder class of analytic functions
1 comment Yes (n1) means nontangential!
Oct Holder class of analytic functions
1 comment @Koushik: It is related to little Bloch space.
Oct Holder class of analytic functions
1 comment No, when I said $|z|\to 1$ uniformly I had in mind that $z\to e^{it}$ for some $t$ and throughout the unit disk. Nontangentially means that $z$ also tends to $e^{it}$ but inside a fixed
Oct Holder class of analytic functions
1 revised deleted 2 characters in body
Oct asked Holder class of analytic functions | {"url":"http://mathoverflow.net/users/36162/user36162?tab=activity","timestamp":"2014-04-18T23:41:33Z","content_type":null,"content_length":"43940","record_id":"<urn:uuid:acbf8ba6-222a-449d-9d9f-9115db14777e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Circus Tower Problem
It's one of those days where I'm feeling like a complete idiot since I can't seem to figure this out. The problem:
A circus is designing a tower routine consisting of people standing atop one another’s shoulders. For practical and aesthetic reasons, each person must be both shorter and lighter than the person
below him or her. Given the heights and weights of each person in the circus, write a method to compute the largest possible number of people in such a tower EXAMPLE: Input (ht, wt): (65, 100) (70,
150) (56, 90) (75, 190) (60, 95) (68, 110) Output: The longest tower is length 6 and includes from top to bottom: (56, 90) (60,95) (65,100) (68,110) (70,150) (75,190)
My code:
def circus_tower(data)
  tower_height = 0
  sorted = data.sort_by { |x| x.first }
  for i in (0..sorted.size)
    if (sorted[i].last > sorted[i+1].last)
      sorted[i], sorted[i+1] = sorted[i+1], sorted[i]
      tower_height += 1
    end
  end
end
I'm passing in the data as an array of arrays. I keep getting this error: `circus_tower': undefined method `last' for nil:NilClass (NoMethodError). I've tried to go about comparing the second
element of each array in the sorted array a number of different ways (e.g., at(1), [-1]) and I keep getting a variation of the same error. Can anyone tell me what's going on?
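The error itself comes from the loop bounds: i runs from 0 to sorted.size inclusive, so sorted[i+1] (and finally sorted[i]) is nil at the end of the array, and nil has no method last. Beyond that, a single bubble-style pass will not find the largest tower in general; one way to do it (a sketch, not the only one) is to sort by height and then take the longest strictly increasing subsequence of weights:

def circus_tower(data)
  sorted = data.sort_by { |h, w| [h, w] }
  # lengths[i] = size of the largest tower with person i at the bottom
  lengths = Array.new(sorted.size, 1)
  sorted.each_index do |i|
    (0...i).each do |j|
      if sorted[j].first < sorted[i].first && sorted[j].last < sorted[i].last
        lengths[i] = [lengths[i], lengths[j] + 1].max
      end
    end
  end
  lengths.max || 0
end

For the example above, circus_tower([[65,100],[70,150],[56,90],[75,190],[60,95],[68,110]]) returns 6. | {"url":"http://www.dreamincode.net/forums/topic/332596-circus-tower-problem/page__pid__1924189__st__0","timestamp":"2014-04-17T01:36:56Z","content_type":null,"content_length":"92786","record_id":"<urn:uuid:484bdadc-9504-4f62-bdc7-a79939b253fd>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00618-ip-10-147-4-33.ec2.internal.warc.gz"}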
Pages created and maintained by:
Dolores Gende
Questions, comments and suggestions:
The Planning Guide that I use for teaching AP Physics B starts the last week of August. It takes about 28 weeks to cover the required topics, leaving 2 full weeks to conduct a review.
The Content Outline given below corresponds to the official AP Physics B Course Description.
A more detailed topic outline is found on page 17 of the pdf under the heading: "Learning Objectives for AP Physics"
The links for each individual topic contain a wide variety of Teaching Resources such as demos, labs, homework, notes, key ideas etc. | {"url":"http://apphysicsb.homestead.com/guidemain.html","timestamp":"2014-04-20T21:25:23Z","content_type":null,"content_length":"55930","record_id":"<urn:uuid:e488ead6-522d-4614-ab12-cdee907b3885>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Continuous Computation
Research in continuous computation (analog computation) focuses on the theoretical similarities and differences between it and discrete (digital) computation, especially with regard to computational
theories of intelligence in AI and cognitive science.
Publications (reverse chronological order)
1. My talk “Transcending Turing Computability,” handouts or slides (both in postscript form).
2. “Words Lie in Our Way,” by Bruce MacLennan, Minds and Machines, special issue on “What is Computation?” Vol. 4, No. 4 (November 1994), pp. 421-437.
3. “Image and Symbol: Continuous Computation and the Emergence of the Discrete,” [ps, pdf] by Bruce MacLennan, invited contribution for book, Artificial Intelligence and Neural Networks: Steps
Toward Principled Integration, edited by Vasant Honavar and Leonard Uhr, New York, NY: Academic Press, 1994, pp. 207-240. Also University of Tennessee, Knoxville, Department of Computer Science
Technical Report CS-93-199, December 18, 1992 (revised August 5, 1993), 33 pages. A proposed theoretical construct (the simulacrum) for connectionist models analogous to the calculus in symbolic
4. “Continuous Symbol Systems: The Logic of Connectionism,” [pdf (370 KB), ps (1.5 MB)] by Bruce MacLennan, Neural Networks for Knowledge Representation and Inference, edited by Daniel S. Levine and
Manuel Aparicio IV, Hillsdale, NJ: Lawrence Erlbaum, 1994, pp. 83-120. Also University of Tennessee, Knoxville, Computer Science Department technical report CS-91-145, September 1991, 47 pages.
This paper presents a preliminary formulation of continuous symbol systems and indicates how they may aid in understanding the development of connectionist theories. (N.B.: Some figures are
missing from the above electronic versions of this paper.)
5. “A Universal Field Computer That is Purely Linear,” [ps, pdf] by David H. Wolpert and Bruce J. MacLennan, University of Tennessee, Knoxville, Department of Computer Science Technical Report
CS-93-206, September 14, 1993, 28 pp.; by David H. Wolpert and Bruce J. MacLennan. Also Santa Fe Institute Technical Report 93-09-056. This paper proves a particular field computer (a spatial
continuum-limit neural net) governed by a purely linear integro-differential equation is computationally universal.
6. “Grounding Analog Computers” [html], by Bruce MacLennan, June 1993. (commentary on S. Harnad, “Grounding Symbols in the Analog World with Neural Nets”), by Bruce MacLennan, Think 2, June 1993,
pp. 48-51. Reprinted in Psycoloquy 12 (52), 2001. Also available as postscript and pdf.
7. “Characteristics of Connectionist Knowledge Representation,” [ps, pdf] by Bruce MacLennan, Information Sciences 70 (1993), pp. 119-143. Also University of Tennessee, Knoxville, Department of
Computer Science Technical Report CS-91-147, November 1991, 22 pages. We present a construct, called a simulacrum, which has a similar relation to connectionist knowledge representation as the
calculus does to symbolic knowledge representation.
8. “Continuous Spatial Automata” [pdf] by Bruce MacLennan, University of Tennessee, Knoxville, Department of Computer Science Technical Report CS-90-121, November 1990, 9 pages. Definition of
continuous spatial automata, in which the cells and their states form a continuum; continuous “Life” as an example. A sequence of example states is also available. (A discretized sketch of such an automaton follows this list.)
9. “Continuous Computation: Taking Massive Parallelism Seriously,” by Bruce MacLennan, June 1989.
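The sketch referenced in item 8, purely as a hypothetical discretization (this is a generic smooth Life-style rule, not the specific automaton defined in the report):

import numpy as np
from scipy.signal import convolve2d

def step(field, radius=4, threshold=0.35, gain=12.0):
    # States in [0, 1] on a grid stand in for the continuum of cells/states.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y <= radius * radius).astype(float)
    disk /= disk.sum()
    nbhd = convolve2d(field, disk, mode="same", boundary="wrap")
    # Smooth, sigmoidal analogue of a Life-style birth/survival threshold.
    return 1.0 / (1.0 + np.exp(-gain * (nbhd - threshold)))

field = np.random.default_rng(1).random((128, 128))
for _ in range(10):
    field = step(field)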
Comments to Bruce MacLennan / MacLennan@cs.utk.edu
This page is www.cs.utk.edu/~mclennan/contin-comp.html
Last updated: 2007-11-24. | {"url":"http://web.eecs.utk.edu/~mclennan/contin-comp.html","timestamp":"2014-04-18T18:11:20Z","content_type":null,"content_length":"6809","record_id":"<urn:uuid:26e1c339-158d-482e-9992-0ad83544dcc5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Philadelphia Calculus Tutor
...I studied civil engineering at Lafayette College and earned a Bachelor's of Science in the field. I worked as an intern at the Department of Transportation, conducted research after my
sophomore year with a professor at Lafayette, and I worked as an intern for a consulting firm after my junior year. I passed the fundamentals of engineering exam (FE) in the spring of my junior
21 Subjects: including calculus, reading, physics, geometry
...The only difference is the "words" of this language. Instead of normal words nature uses numbers and figures. If you learn how to put these "words" in order to make sense, then you can speak in
this language.
9 Subjects: including calculus, physics, geometry, algebra 1
...I have tutored all levels of Math from pre-algebra all the way up to Multivariable Calculus. I also have experience in tutoring chemistry, Organic Chemistry, physics and many other classes! I
have been a full time teacher for the past 4 years after receiving my Master's Degree in secondary Math education from Temple U.
18 Subjects: including calculus, chemistry, physics, geometry
As an experienced and patient tutor of physics and of math subjects through calculus and college first year linear algebra and differential equations, my great delight is in seeing the "light
bulb" of comprehension in my students' faces. These subjects are not as difficult as they often seem to my ...
13 Subjects: including calculus, physics, geometry, algebra 1
...I taught International Baccalaureate Mathematical Studies to juniors at Harriton High School for 3 months, as a student teacher. During this time I acquired a very in depth knowledge of Algebra
ll. Also, math builds upon itself, so being a math major in college gives me more than enough insight and understanding to tutor Algebra ll.
8 Subjects: including calculus, Spanish, geometry, algebra 1 | {"url":"http://www.purplemath.com/Philadelphia_calculus_tutors.php","timestamp":"2014-04-19T05:09:08Z","content_type":null,"content_length":"24372","record_id":"<urn:uuid:5376a1a7-250a-47bd-95af-8e6fb2d8c694>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
mecha-0.0.4: Mecha is a solid modeling language geared for machine design. Source code Contents Index
sphere :: Double -> Solid Source
A sphere with diameter centered at origin.
cone :: Double -> Double -> Double -> Solid Source
A cone with base at the origin, given base diameter, top diameter, and height.
box :: (Double, Double) -> (Double, Double) -> (Double, Double) -> Solid Source
A box with ranges of X, Y, and Z positions.
tube :: Double -> Double -> Double -> Solid Source
A hollow cylinder with base at the origin, given outer diameter, inner diameter, and height.
cube :: Double -> Solid Source
A cube with edge length centered at origin.
cylinder :: Double -> Double -> Solid Source
A cylinder with base at the origin, given diameter and height.
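A short usage sketch; it assumes only the constructors documented on this page (how Solids are combined or rendered is not shown here):

import Language.Mecha.Solid

-- Hypothetical parts built from the documented primitives.
axle :: Solid
axle = cylinder 8 40     -- diameter 8, height 40

bearing :: Solid
bearing = tube 20 12 6   -- outer diameter 20, inner diameter 12, height 6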
Produced by Haddock version 2.4.2 | {"url":"http://hackage.haskell.org/package/mecha-0.0.4/docs/Language-Mecha-Solid.html","timestamp":"2014-04-16T14:11:35Z","content_type":null,"content_length":"10331","record_id":"<urn:uuid:e0061a07-d7a3-4ef5-ae37-d84f35e239e2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oakwood, CA Statistics Tutor
Find an Oakwood, CA Statistics Tutor
...Thanks for taking the time to view my profile. I received my B.S in Psychology and am currently an MBA & M.S of Finance student at the University of Southern California (USC). I have more than
7 years of experience and am confident that I can help you achieve your goals! I am very patient and make sure that I personalize all my lessons based on you or your child's needs.
25 Subjects: including statistics, reading, English, SAT math
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math
competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including statistics, Spanish, French, chemistry
...Earned a Ph.D. degree in Physical Chemistry and a B.Sc. degree in chemical engineering. Professorship in Chemistry in the United States.
10 Subjects: including statistics, chemistry, calculus, algebra 1
...I am fully hands-on with descriptive techniques, filtering data, pre-analysis, as well as a host of inferential statistical techniques, including parametric tests (ANOVA, ANCOVA, MANOVA,
MANCOVA, and regression modeling) and nonparametric tests. I have a PhD degree in economics. I completed my di...
7 Subjects: including statistics, economics, SPSS, Microsoft Excel
...I've also taught Math Olympiad and contest math -- I was on the state championship math team in high school. I am a seasoned professional with test prep-- I've seen how several companies
teach, I've worked with most every published SAT prep book available, and I know what works best and how to get results. My students describe me as energetic and funny.
49 Subjects: including statistics, reading, writing, English
Related Oakwood, CA Tutors
Oakwood, CA Accounting Tutors
Oakwood, CA ACT Tutors
Oakwood, CA Algebra Tutors
Oakwood, CA Algebra 2 Tutors
Oakwood, CA Calculus Tutors
Oakwood, CA Geometry Tutors
Oakwood, CA Math Tutors
Oakwood, CA Prealgebra Tutors
Oakwood, CA Precalculus Tutors
Oakwood, CA SAT Tutors
Oakwood, CA SAT Math Tutors
Oakwood, CA Science Tutors
Oakwood, CA Statistics Tutors
Oakwood, CA Trigonometry Tutors
Nearby Cities With statistics Tutor
Bicentennial, CA statistics Tutors
Cimarron, CA statistics Tutors
Dockweiler, CA statistics Tutors
Farmer Market, CA statistics Tutors
Foy, CA statistics Tutors
Glassell, CA statistics Tutors
Lafayette Square, LA statistics Tutors
Miracle Mile, CA statistics Tutors
Pico Heights, CA statistics Tutors
Rancho Park, CA statistics Tutors
Rimpau, CA statistics Tutors
Sanford, CA statistics Tutors
Santa Western, CA statistics Tutors
Vermont, CA statistics Tutors
Wilcox, CA statistics Tutors | {"url":"http://www.purplemath.com/Oakwood_CA_statistics_tutors.php","timestamp":"2014-04-16T19:43:42Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:90c40c0f-d162-4a9e-bab9-67a75ac25149>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00035-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fast approximation algorithms for multicommodity flow problems
Results 1 - 10 of 141
- In Proceedings of the 39th Annual Symposium on Foundations of Computer Science , 1998
"... This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these
problems which yields faster and much simpler algorithms. Our approach also allows us to substitute short ..."
Cited by 268 (5 self)
Add to MetaCart
This paper considers the problem of designing fast, approximate, combinatorial algorithms for multicommodity flows and other fractional packing problems. We provide a different approach to these
problems which yields faster and much simpler algorithms. Our approach also allows us to substitute shortest path computations for min-cost flow computations in computing maximum concurrent flow and
min-cost multicommodity flow; this yields much faster algorithms when the number of commodities is large.
, 1995
"... This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms
for solving these problems are based on general linear programming techniques. The techniques developed ..."
Cited by 232 (14 self)
Add to MetaCart
This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms for
solving these problems are based on general linear programming techniques. The techniques developed in this paper greatly outperform the general methods in many applications, and are extensions of a
method previously applied to find approximate solutions to multicommodity flow problems. Our algorithm is a Lagrangean relaxation technique; an important aspect of our results is that we obtain a
theoretical analysis of the running time of a Lagrangean relaxation-based algorithm. We give several applications of our algorithms. The new approach yields several orders of magnitude of improvement
over the best previously known running times for algorithms for the scheduling of unrelated parallel machines in both the preemptive and the non-preemptive models, for the job shop problem, for th...
, 1993
"... We develop a framework that allows us to address the issues of admission control and routing in high-speed networks under the restriction that once a call is admitted and routed, it has to
proceed to completion and no reroutings are allowed. The "no rerouting" restriction appears in all the proposal ..."
Cited by 214 (43 self)
Add to MetaCart
We develop a framework that allows us to address the issues of admission control and routing in high-speed networks under the restriction that once a call is admitted and routed, it has to proceed to
completion and no reroutings are allowed. The "no rerouting" restriction appears in all the proposals for future high-speed networks and stems from current hardware limitations, in particular the
fact that the bandwidth-delay product of the newly developed optical communication links far exceeds the buffer capacity of the network. In case the goal is to maximize the throughput, our framework
yields an on-line O(lognT )- competitive strategy, where n is the number of nodes in the network and T is the maximum call duration. In other words, our strategy results in throughput that is within
O(log nT ) factor of the highest possible throughput achievable by an omniscient algorithm that knows all of the requests in advance. Moreover, we show that no on-line strategy can achieve a better
, 1996
"... The construction of disjoint paths in a network is a basic issue in combinatorial optimization: given a network, and specified pairs of nodes in it, we are interested in finding disjoint paths
between as many of these pairs as possible. This leads to a variety of classical NP-complete problems for w ..."
Cited by 140 (0 self)
Add to MetaCart
The construction of disjoint paths in a network is a basic issue in combinatorial optimization: given a network, and specified pairs of nodes in it, we are interested in finding disjoint paths
between as many of these pairs as possible. This leads to a variety of classical NP-complete problems for which very little is known from the point of view of approximation algorithms. It has
recently been brought into focus in work on problems such as VLSI layout and routing in high-speed networks; in these settings, the current lack of understanding of the disjoint paths problem is
often an obstacle to the design of practical heuristics.
- IN PROC. OF THE ELEVENTH INTERNATIONAL CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE , 1995
"... Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize
results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argu ..."
Cited by 131 (10 self)
Add to MetaCart
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize
results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal
practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs.
- SIAM J. Comput , 1998
"... Abstract. It is shown that the minimum cut ratio is within a factor of O(log k) of the maximum concurrent flow for k-commodity flow instances with arbitrary capacities and demands. This improves
upon the previously best-known bound of O(log 2 k) and is existentially tight, up to a constant factor. A ..."
Cited by 125 (7 self)
Add to MetaCart
Abstract. It is shown that the minimum cut ratio is within a factor of O(log k) of the maximum concurrent flow for k-commodity flow instances with arbitrary capacities and demands. This improves upon
the previously best-known bound of O(log 2 k) and is existentially tight, up to a constant factor. An algorithm for finding a cut with ratio within a factor of O(log k) of the maximum concurrent
flow, and thus of the optimal min-cut ratio, is presented.
- in Proc. 26 th ACM Symp. Theory of Computing , 1994
"... Communication in all-optical networks requires novel routing paradigms. The high bandwidth of the optic fiber is utilized through wavelengthdivision multiplexing: a single physical optical link
can carry several logical signals, provided that they are transmitted on different wavelengths. We study t ..."
Cited by 120 (0 self)
Add to MetaCart
Communication in all-optical networks requires novel routing paradigms. The high bandwidth of the optic fiber is utilized through wavelengthdivision multiplexing: a single physical optical link can
carry several logical signals, provided that they are transmitted on different wavelengths. We study the problem of routing a set of requests (each of which is a pair of nodes to be connected by a
path) on sparse networks using a limited number of wavelengths, ensuring that different paths using the same wavelength never use the same physical link. The constraints on the selection of paths and
wavelengths depend on the type of photonic switches used in the network. We present eflicient routing techniques for the two types of photonic switches that dominate current research in all-optical
networks. Our results es-
- SIGACT News , 2002
"... this article, we review some of the characteristic features of ad hoc networks, formulate problems and survey research work done in the area. We focus on two basic problem domains: topology
control, the problem of computing and maintaining a connected topology among the network nodes, and routing. T ..."
Cited by 115 (0 self)
Add to MetaCart
this article, we review some of the characteristic features of ad hoc networks, formulate problems and survey research work done in the area. We focus on two basic problem domains: topology control,
the problem of computing and maintaining a connected topology among the network nodes, and routing. This article is not intended to be a comprehensive survey on ad hoc networking. The choice of the
problems discussed in this article are somewhat biased by the research interests of the author
, 1999
"... We describe fully polynomial time approximation schemes for various multicommodity flow problems in graphs with m edges and n vertices. We present the first approximation scheme for maximum
multicommodity flow that is independent of the number of commodities k, and our algorithm improves upon the ru ..."
Cited by 96 (6 self)
Add to MetaCart
We describe fully polynomial time approximation schemes for various multicommodity flow problems in graphs with m edges and n vertices. We present the first approximation scheme for maximum
multicommodity flow that is independent of the number of commodities k, and our algorithm improves upon the runtime of previous algorithms by this factor of k, performing in O (ffl \Gamma2 m 2 )
time. For maximum concurrent flow, and minimum cost concurrent flow, we present algorithms that are faster than the current known algorithms when the graph is sparse or the number of commodities k is
large, i.e. k ? m=n. Our algorithms build on the framework proposed by Garg and Konemann [4]. They are simple, deterministic, and for the versions without costs, they are strongly polynomial. Our
maximum multicommodity flow algorithm extends to an approximation scheme for the maximum weighted multicommodity flow, which is faster than those implied by previous algorithms by a factor of k= log
W where W is ... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=74840","timestamp":"2014-04-19T23:01:17Z","content_type":null,"content_length":"36696","record_id":"<urn:uuid:0c6abc7f-d6dc-4a37-9e71-06e55a2a4a39>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Three Variable Hilbert Polynomials
up vote 0 down vote favorite
Let $V$ be a linear space, $V_1$, $V_2$ and $V_3$ be linear subspaces of $V$. Consider $\mathbb P(V/V_1)\times\mathbb P(V/V_2)\times\mathbb P(V/V_3)$ as a space parametrizing the triples of linear
subspaces $(W_1, W_2,W_3)$, with $V_i\subset W_i\subset V$ and $\dim(W_i)-\dim(V_i)=1$ for $i=1,2,3$, and also $V_1\cap V_2\cap V_3=0$.
Now consider the closed subspace P consisting of the points $(W_1,W_2,W_3)\in \mathbb P(V/V_1)\times\mathbb P(V/V_2)\times\mathbb P(V/V_3)$, where $\bigcap_IW_i\supsetneqq \bigcap_IV_i$, with $I$
arbitrary subset of $\{1,2,3\}$.
Then the question is: now that P is a closed subscheme of $\mathbb P(V/V_1)\times\mathbb P(V/V_2)\times\mathbb P(V/V_3)$, how to compute trivariate Hilbert Polynomial of P without explicitly writing
down the coordinates of P?
add comment
1 Answer
active oldest votes
Your question isn't correct. Such $W_1$ are parametrized by $\mathbb P (V\setminus V_1)$ not by $\mathbb P (V/V_1)$ as you wrote. And $\mathbb P (V\setminus V_1)$ is not a closed variety.
In any case, $W_1\cap W_2\cap W_3\ne 0$ means that there is vector $v\in W_1\cap W_2\cap W_3$. So, you can parametrize almost all such triples by vectors $v$. So, we found one component of
up vote 1 $P$ (it is $\mathbb P(V) $). The other ones are when $W_1\cap W_2\subset V_3$ and the same triples with permuting indices (and if $V_1\cap V_2\cap V_3 = 0$ then these triples are
down vote parametrized by $\mathbb P(V_3)$). Etc. It is easy to compute Hilbert polynomial for linear subspaces. But if $V_1\cap V_2\cap V_3\ne 0$ then situation is more complicated, and< I think it
is strange to expect a good answer.
The question is correct. Such $W_1$ are parametrized by $\mathbb P(V/V_1)$. You are right in the sense that if I do not require $\V_1\cap V_2\cap V_3=0$, the problem is really
complicated. So I add this condition after revising my problem! – BLI Nov 22 '11 at 5:47
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/79901/three-variable-hilbert-polynomials?sort=votes","timestamp":"2014-04-19T17:48:21Z","content_type":null,"content_length":"51278","record_id":"<urn:uuid:f4eae6cd-99d4-4a8f-92cd-cb332133f95b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generating Functions
Re: Generating Functions
Assuming you mean "What did you do to solve it?",I did nothing.I will try something though.I have Maxima now.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
Let me know when you get an answer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
What do you get for 2 dice?
And what for 5 of them?
Last edited by anonimnystefy (2012-05-05 02:26:49)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
7 / 12 for the two die.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
What is your GF?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Why would you use 6's? You are looking at the sums of the numbers.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
The question is a probability question is it not?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Well, I am not from Missouri but I still want to see it.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Sorry for deleting the post
What are you getting for 5? I get 295/432.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
Yes, that is correct.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Unfortunately,my code runs n*sqrt(n) complexity,and in Maxima it would seem that thar is a lot. The poor system cannot handle such complexities.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
What code?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
My Maxima code. I must say that I am sorry that it cointains parts of procedural programming.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
Haven't you rewritten it to look like mine?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Not that code. I wrote another to count coeffs with prime powers of x.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
You aren't mad at me fior using while and if,are you? *cuteface*
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
No, the generating function is. Can't you hear it?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
I can. It hurts me so much. But it was the only way I could do it.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
You did it! let me see your answer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
I didn't. Not yet.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
Hmmmm. You say it was the only way you could do it but yet it did not do it?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
Could can also express ability.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Generating Functions
So the lab needs this to launch that rocket. Are you going to say, in some axiomatic systems it is .5 and in some other axiomatic systems it is
Or are you going to say I can not do it because my computer is too slow?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Generating Functions
The latter.
Here is the answer for 200:
Last edited by anonimnystefy (2012-05-05 05:20:11)
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=217085","timestamp":"2014-04-18T08:08:35Z","content_type":null,"content_length":"38859","record_id":"<urn:uuid:bd6e9e5d-1afe-450b-b105-8a6065db1137>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
convert 64 kilos to pounds
You asked:
convert 64 kilos to pounds
141.095847798322 pounds
the mass 141.095847798322 pounds
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire. | {"url":"http://www.evi.com/q/convert_64_kilos_to_pounds","timestamp":"2014-04-19T22:52:33Z","content_type":null,"content_length":"52685","record_id":"<urn:uuid:26d6d964-9a6b-4317-a22f-240cb414aea3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Tutor] project euler prime factorization problem
Steven D'Aprano steve at pearwood.info
Mon Aug 30 01:34:59 CEST 2010
On Mon, 30 Aug 2010 05:31:00 am bob gailer wrote:
> > My current reasoning was something of this sort: Find all the
> > factors of a number, then reduce them to just the prime factors
> Very inefficient. IMHO the proper way is to generate a list of all
> the prime numbers up to the square root of 600851475143, then test
> each (starting with the largest and working down) till you discover a
> factor. That then is the answer.
Actually your approach is inefficient too, and it won't always work.
Consider what happens if you are asked for the prime factors of 101.
You would generate the primes up to 10:
2, 3, 5, 7
but none of those are factors.
In the case of 600851475143, you wastefully generate 62113 prime numbers
when you actually only need 224.
The right way is to start at 2, then 3, and so forth, working up rather
than down. Don't forget that there can be repeated factors.
Steven D'Aprano
More information about the Tutor mailing list | {"url":"https://mail.python.org/pipermail/tutor/2010-August/078181.html","timestamp":"2014-04-16T17:46:46Z","content_type":null,"content_length":"3614","record_id":"<urn:uuid:83a7ccfb-dad5-4db5-bc76-1ff2ca3aaa63>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a real-valued function
Which Real valued weird functions do you know about? Just for a middle school project...
Last edited by myway; November 9th 2010 at 01:11 PM.
Have you came across the Cantor function c(x) on the unit interval? What do you get if you subtract a small multiple of x from it, for example the function $c(x) - \frac12x$ ? | {"url":"http://mathhelpforum.com/calculus/162150-find-real-valued-function.html","timestamp":"2014-04-19T18:44:50Z","content_type":null,"content_length":"36261","record_id":"<urn:uuid:346bcccf-73e3-4c5d-bab9-af2be79405db>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are Our Teachers Prepared to Provide Instruction in Statistics at the K–12 Levels?
October 2000
Statistics is a rapidly growing area within the mathematics curriculum at the K–12 levels. The emergence of statistics noticeably unfolded with the NCTM's Curriculum and Evaluation Standards for
School Mathematics (Reston, Va.: NCTM, 1989) and continues in Principles and Standards for School Mathematics (Reston, Va.: NCTM, 2000). The foundation and implementation of Advanced Placement
statistics began in the 1990s. Numerous resources for incorporating statistics into the mathematics curriculum have been developed throughout the 1990s, with the Quantitative Literacy Series (White
Plains, N.Y.: Dale Seymour Publications, 1987) by J. Landwehr et al., leading the way in the late 1980s.
What has been obvious since the mid-1990s is that most mathematics teachers are not comfortable teaching statistics. Most believe that their backgrounds are inadequate to properly use materials in
teaching statistics. Many in-service teachers and some prospective teachers have either never taken a formal statistics course or they have taken a "cookbook course." The intended statistics
curriculum for the K–12 levels (including Advanced Placement statistics) is not the traditional formula-oriented statistics. It is intended to be taught at a conceptual level. Additionally, such
Advanced Placement statistics course topics as sampling and experimental design are not covered in most undergraduate teacher-preparation programs. Statistical literacy, or a working knowledge of
statistics for everyday use, is the current goal. This goal does not allow teachers simply to teach comfortable algorithms that are easy to learn. Today's students are encouraged to discover key
ideas and concepts with hands-on activities or by using simulation and to be able to communicate their ideas. This discovery is aided through computers or calculators—tools with which many in-service
and prospective teachers feel inadequate.
With the rapid growth of Advanced Placement statistics, the need for additional secondary teacher training has become essential. The number of Advanced Placement statistics examinations given has
grown from 7500 exams in 1997 to 35 000 exams in 2000. This increase implies that many teachers, new and old, are teaching statistics. Much training has been through workshops. However, only so much
can be covered in a one-day to one-week workshop. The tasks of deciding what to teach, understanding the material, and knowing how to teach it are overwhelming.
At the University of Georgia, a statistics course was developed by the Department of Statistics in conjunction with the College of Education to prepare both preservice and in-service teachers in
statistics. The course combines instruction in content and methods for delivering the content, as well as the use and demonstration of resources and materials for secondary teachers to incorporate in
their classrooms. Technology is emphasized with the use of TI-83 calculators and MINITAB on the computer. Instruction is also given on advanced statistical topics to help prospective statistics
teachers understand the whys of what they will teach. Currently, again as a joint effort, a new course is being developed for K-8 mathematics teachers.
Teacher-preparation institutions must ensure the statistics education of preservice teachers, in addition to reaching out to in-service teachers. Ideally, statisticians, teacher educators, and
mathematicians should be involved in this education. We cannot wait any longer. The time is now.
│Chris Franklin, chris@stat.uga.edu, is an instructor and honors professor in the Department of Statistics at the University of Georgia. She was a prime developer of the University of Georgia │
│statistics course that she describes. │
│ │ | {"url":"http://www.nctm.org/resources/content.aspx?id=1776","timestamp":"2014-04-17T19:22:49Z","content_type":null,"content_length":"47122","record_id":"<urn:uuid:5ac530a1-d5a5-4cd7-8936-c2b87ff45227>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shoreline, WA Calculus Tutor
Find a Shoreline, WA Calculus Tutor
...This is underscored by the numerous times students have come back to share good news with me: "That paper you helped me revise? I got an A on it!" "The geometry exam we looked over together? I
just aced my next one!" "The spelling and grammar you worked on with my son?
35 Subjects: including calculus, English, writing, reading
...My favorite types of math are Trigonometry, Geometry, Algebra 1 and 2. I truly like and understand math concepts and enjoy helping others understand the underlying principles. I would like to
meet with you and discuss potential tutoring opportunities.
26 Subjects: including calculus, chemistry, physics, geometry
...My name is Joslynn, and I am currently a student at community college. I plan to transfer to the University of Washington in a year and double major in Bioengineering and mechanical
engineering (I plan to go into bioprinting, so that's why there's the weird combination of majors). Also, I plan t...
12 Subjects: including calculus, chemistry, physics, geometry
...For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years before that I worked at the Math Center at Black Hills High School in Olympia.
I am certified level 1 by the College Reading and Learning Association, and have tutored subjects rangi...
13 Subjects: including calculus, physics, statistics, geometry
If you want someone who can teach your child/student with great real world experience in Math and Physics, then that's me. I've worked at NASA Johnson Space Center training Astronauts in Space
Shuttle Systems like Guidance, Propulsion and Flight Controls. I have a Bachelor's in Aerospace Engineeri...
12 Subjects: including calculus, physics, geometry, algebra 1
Related Shoreline, WA Tutors
Shoreline, WA Accounting Tutors
Shoreline, WA ACT Tutors
Shoreline, WA Algebra Tutors
Shoreline, WA Algebra 2 Tutors
Shoreline, WA Calculus Tutors
Shoreline, WA Geometry Tutors
Shoreline, WA Math Tutors
Shoreline, WA Prealgebra Tutors
Shoreline, WA Precalculus Tutors
Shoreline, WA SAT Tutors
Shoreline, WA SAT Math Tutors
Shoreline, WA Science Tutors
Shoreline, WA Statistics Tutors
Shoreline, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/shoreline_wa_calculus_tutors.php","timestamp":"2014-04-21T02:41:15Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:a7f3367f-c388-4571-ad7c-5cbc5a5dda7b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00005-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-dev] Scipy Tutorial (and updating it)
josef.pktd@gmai... josef.pktd@gmai...
Thu Dec 4 15:23:36 CST 2008
>> I like the new Sphinx docs a lot if I have to look up a specific topic,
>> but when I don't know which function might be useful for my case, then
>> the overview and speed of looking around is pretty tedious, compared to
>> the windows help files for matlab or R. (As an aside: I actually like
>> the style sheet of the documentation editor better than sphinx.)
> Can you be more specific: which elements of the page would you like to
> see changed?
This might be mostly a taste question, I don't like the font size and
line spacing in sphinx css. I find smaller fonts and smaller line
spacing much easier to follow and much faster to skim for relevant
with sphinx rendered
also I like the overview tables of functions and classes that you have
in the editor
As I mentioned I really like the windows help file format, with
search, tree and lists on the left navigation pane, together with the
"see also" it is very fast to check out different related functions.
In the stats front page, which is pretty long the toc tree on the left
does not have enough depth for navigation around.
A question to the new docs: what's the relationship between the front
rst page and the info.py of a sub package, the two files above, both
have currently roughly the same content. Should it stay that way or do
they have different purpose?
One problem for editing the doc string, I found:
stats.info.py is rendered here
put clicking on source, leads to __init__.py instead of info.py
I will have to sign up for editing, since immediate feedback on rst
errors, is much better than only editing the source without feedback,
for example http://docs.scipy.org/scipy/docs/scipy.stats.stats.kstest/
More information about the Scipy-dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2008-December/010485.html","timestamp":"2014-04-20T15:51:37Z","content_type":null,"content_length":"5182","record_id":"<urn:uuid:35ca99c5-8ba9-4bf2-ac44-44bf0e656bd2>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
My Book On P=NP Is Now Available
September 17, 2010
The P=NP Question and Gödel’s Lost Letter book is now available
Kurt Gödel is one of he main reasons I started this blog almost two years ago. We all owe him a great deal for so many things. For his completeness theorem, his incompleteness theorem, his notion of
recursive functions, for Gödel numbers, for his work proving the relative consistency of set theories, and much more.
Today I want to thank Gödel and all of you for making writing this blog so much fun. Sometimes the time and energy required to write the next post is large, but it is worth it. Thanks again. Here is
a photo mosaic of many of you that have been part of this enterprise—especially its first year, 2009.
If you cannot see the faces look at this larger version.
I started in 2009 not to write a blog. I thought there were plenty of excellent math and theory blogs, so I began instead to write a book on P=NP. I actually had a large part of the book written—not
all—when one day I thought—forget the book. I would just write a blog. I opened a WordPress account, started to write, figured out how to handle HTML, and created my first post. Of course no one read
it, but I wrote a second one, and then a third. I pressed on with the writing. Sounds a bit like the lines from one of the great scenes in Monty Python and The Holy Grail:
When I first came here, this was all swamp. Everyone said I was daft to build a castle on a swamp, but I built in all the same, just to show them. It sank into the swamp. So I built a second one.
That sank into the swamp. So I built a third. That burned down, fell over, then sank into the swamp. But the fourth one stayed up. And that’s what you’re going to get, Lad, the strongest castle
in all of England.
Eventually the blog stopped falling down or burning up, and found a set of readers—you. I thank you for your continued readership. Some of my posts have been fun, some serious, some technical, more
than a few wacky. I thank you for reading them, for making comments on them, and in general telling me directly or indirectly “thank you.”
$\displaystyle \S$
I want to make an announcement that my book entitled The P=NP Question and Gödel’s Lost Letter is now published. It is available here at Amazon.com, for example. I started writing a book that became
a blog. Now the blog has become a book.
The book was a great deal of work—more than I would have guessed. After all it “just” consisted of selected posts from the first year of this blog. But books are very different from blogs, and I had
to do quite a bit extra work to get the book even into the shape it is. Susan Lagerstrom-Fife is my editor at Springer, who helped me this last year to put the book together. I thank her.
Some additional material is included in the book, all essays have been edited and updated. So if you enjoy this blog you should enjoy the book. Some of you who have started reading this blog more
recently may especially enjoy the book, since it covers posts from only the first year.
Please take a look at the book, post a review at Amazon.com if inclined, and perhaps even buy a copy. I have no doubt about Susan’s first statement to me when we discussed writing the book:
You will not make any money on this book.
I have no illusions about that, but I felt it would be fun to get a book out. A book is still a more permanent object than a blog. Also, as I said, the book has some additional essays, and some
additional information.
Finally, the book has a secret code embedded in it that contains a fact about P=NP that I have never told anyone. If you look at every ${57^{th}}$ character of the prime numbered chapters, then ${\
Just kidding. But I hope you enjoy the book.
I plan further books based on this blog and on other topics. For example I am thinking about one on quantum algorithms without qubits and another on mathematical surprises and embarrassments.
I have been awed at the number of you reading this blog and am very grateful for your interest and support. The book tries to thank you all—you will see that I have tried hard to acknowledge all who
have commented on posts. Check the book’s index to see if you are mentioned.
Open Problems
Finally, I want to again thank all of you who have supported this effort for close to two years now. It started in February of 2009: we are well beyond 200 posts, and I plan on going on for some
time. I thank you all for making this effort fun.
Since I always like to ask open questions: should I put out other collections of these essays? What do you think? A final question: how about a collage T-shirt? I have thought about having them made
up and sold for cost—would be about ten dollars. Would anyone want one?
1. September 17, 2010 5:05 pm
Well, thank you, sir. You just made part of the 500 pages I printed two weeks ago useless, and I had spent quite some time on the formatting — it is quite a task to put so many goodies into less
than half a dozen thousand pages!
Funny coincidences apart, thank you so much for your blog, and my hugest congratulations for the book: this is precisely what I was searching for… two weeks ago :-)
2. September 17, 2010 5:16 pm
3. September 17, 2010 6:03 pm
A summary and explanation of my paper "A Polynomial Time Algorithm for Hamilton Cycle and Its Proof"
For the paper, see arXiv:1004.3702, Lizhi Du
1. The number of break points is less than n(n-1)/2 in an n-vertex graph. Each time the algorithm must get a new main break point (or no break point, which is limited), so the time complexity is polynomial.
2. If there are some Hamilton cycles (paths) in the graph, the algorithm surely can find a Hamilton cycle (path) before using up all main break points; also, if it has used up all main break points and still cannot get a Hamilton cycle (path), this means there is no Hamilton cycle (path) in the graph.
3. How to prove 2? If we could exhaustively compute all undirected graphs, with any number of vertices and any edges, in all possible step orders, and all of them satisfied 2, then that would prove 2. But the number of undirected graphs is infinite. What to do?
4. My idea is: use some 12-vertex graphs (sometimes maybe 11 or 10) to constitute any graph with any bigger number of vertices, so we only need to exhaustively compute all graphs with 12 vertices (maybe 11, 10). This means that at any step of the computation for any big graph, we can combine some 12-vertex graphs (maybe 11, 10) to form the bigger graph.
5. The three items mentioned in the paper can prove 4, especially when the maximum degree of the graph is three or four (NP-complete too). For graphs with higher vertex degree, I also have a way to prove correctness.
Each time, we need to consider more than 12 vertices (by combining some 12-vertex graphs) to compute, then split the result into several 12-vertex graphs by all possible deletions.
Note: a big graph can become many small graphs (with 12, 11, or 10 vertices) by all available deletions; the other direction is also true (the word "all" is important)!
6. So, we can prove the polynomial-time algorithm is correct by computing all 12-vertex graphs, each in all possible step orders (this requires combining and splitting).
Now, if someone wants to reject my paper, he must reject one of the above 6 items. Item 1 is only a fact, 2 and 3 are plainly logical, 4 is only an expression, so the only way for you is 5 or 6; but 5 is absolutely true, with no possibility of being wrong. So the only opportunity for you is 6. For 6, what I say is meaningless; you can do it yourself.
Dear Dick: this is the last time I will bother your web page, very sorry.
□ September 17, 2010 7:06 pm
Congratulations on a fine book, and a wonderful weblog … and most of all, on an outstanding job of teaching and community-building … all of which are tough-yet-vital jobs.
Please allow me to quote Edmund Blackadder—from the Blackadder episode Ink and Incapacity—in conveying to you, on behalf of all your readers, "our most sincere contrafibularities! We are
anaspeptic, frasmotic, even compunctuous with anticipation for your book!”
□ September 20, 2010 11:43 am
A concrete algorithm like this could easily (if it is correct) gain attention by translation into SAT and the solution of an RSA factoring challenge number in SAT form. I do not understand
why authors claiming constructive algorithms for P=NP do not attempt this, if they are so confident in their results.
4. September 17, 2010 6:09 pm
Love this blog and congratulations on the book!
I am perhaps slightly disappointed by the fact that the book is $80 for about 200 pages, compared to, say, the Arora-Barak text book which is $50 for 600 pages.
5. September 17, 2010 8:22 pm
Thank you for writing interesting posts. I discovered your blog about 6 months ago (along with a few others I enjoy reading), and reading interesting blog posts like yours motivated me to
continue my education at the graduate level in computer science, where I will be studying algorithms.
Your book seems to have a lot of great articles; I added it to my bookmark folder of books to buy. Looking forward to eventually reading it.
6. September 17, 2010 11:11 pm
Dear Prof. Lipton,
I would like to thank you for your great posts. My major is mathematics, but I believe that by reading your blog I learn a lot about theoretical computer science.
I hope you also publish a cheaper paperback edition, which would be good for a student budget.
7. September 17, 2010 11:30 pm
Congratulations on a great book. I’ve been reading it on Amazon’s “Look Inside the Book.”
I really love this blog. I only found it because of the Deolalikar festivities, but it gives me the same warm feeling I used to get as a child when I grabbed the latest Scientific American and
turned immediately to Martin Gardner’s “Mathematical Games” column.
8. September 18, 2010 12:08 am
Congratulations! I just ordered a copy and look forward to reading it.
Having written books both the traditional way, and via blogs, I have to say that the blog route is ultimately more efficient in many ways (though not every book could be written as an unfolding
blog). The near-real-time feedback is particularly valuable. Perhaps this method of writing will become more common in our discipline in the future…
□ September 18, 2010 10:28 am
But shouldn’t everyone be told in advance that the blog will/might be used to write a book?
Otherwise people who contribute helpful feedback might feel used.
☆ September 18, 2010 11:17 pm
Amir: why would you feel used at all?
First, I'm sure that only a small portion of the comments are truly useful, so that the ratio of work caused to payoff is not really in the book author's favor.
Second, the book project may never materialize, for a variety of reasons (including when the material turned out less interesting than one expected — obviously not a problem for this
blog!) so I see why one wouldn’t want to talk about it upfront.
And third, if you’re going to spend some time composing a thoughtful comment and articulating a tricky technical point on a blog, isn’t it more gratifying to think it may affect the book
in some way rather than languish in the far reaches of the comments section?
☆ September 19, 2010 9:06 am
I am sorry if that is a problem. That was never what I wanted.
☆ September 19, 2010 6:48 pm
I’ve only made a few comments on this blog, none of them contributing to CS theory in any way. So I don’t feel used, but others might.
□ September 18, 2010 11:29 pm
Professor Tao: I must confess to picking up one of the volumes from your blog purely for the novelty value. Still, I was struck straight away by how the many comments that your blog receives
shaped the presentation of the material (and this was made especially clear since commenters' contributions were very carefully documented in the book).
I don’t necessarily think that all books will be written this way in the future, but your books certainly show the immense improvements that the collaborative web model has to offer over
traditional publication. It’s like getting the 3rd or 4th edition of the book in first printing! Or even better, really.
So thanks to all the mathematicians and scientists who work so hard on their blogs to keep us abreast of what they’re thinking about. And now I look forward to receiving my copy of Gödel’s
Lost Letter.
9. September 18, 2010 2:10 am
Thanks for writing these wonderful posts. My answer to your open question is: definitely.
10. September 18, 2010 2:31 am
11. September 18, 2010 5:02 am
Congratulations! Very happy to see the wonderful discussions archived in this manner.
Having said that, I must admit that I am a bit disappointed with Springer’s proof-reading job with your book. A cursory look for 10 seconds and typos already:
“To make the it fun…”
“He use to walk…”
Will buy the book, anyway. Thanks!
□ September 18, 2010 10:46 am
There are some proofreading issues. The Maxwell Smart quote should be “Missed it by that much.”
☆ September 19, 2010 9:07 am
Yes, I will set up an errata page. It is easy to make errors.
12. September 18, 2010 6:37 am
Thank you for your blog. In case I can't get your book, I hope you won't wrap up your blog really soon now that the book is out.
□ September 18, 2010 10:00 pm
There are, on the one hand, all kinds of hi-tech things: blogs, XHTML, Dreamweaver, Amazon, Kindle, Wi-Fi, CT scans, etc.; and on the other hand there is being caught in the covers of the wells just around
the curvatures of a passing street. Who is to say that there are no Persian poetry masters such as Molana, one of whose verses is: the seven cities of love Attar [who happened to
be almost Molana's contemporary poet] traveled, while we are still caught in the curvature of a passing street. All that is needed is to visit the market and buy what you need, but attempting to get
around the curvature of submitting your paper to SIGACT NEWS, or publishing your poetry, or a finely tuned Springer book, must be a bit trickier.
13. September 18, 2010 7:01 am
I think it’s a wonderful idea to turn this excellent blog into a book — thank you so much for taking the time to do so! On the other hand I must admit that I find the book quite pricy. I don’t
mean to say that the content isn't worth the money; it's just that you obviously don't get very much of it (judging from the reaction of Lagerstrom-Fife).
14. September 18, 2010 11:15 am
Based on the publication date on Amazon, the manuscript surely must have been complete or very nearly so when Deolalikar’s “proof” emerged. Was there ever a time during that furor that you were
concerned that the book may have barely missed being able to discuss the proof in the context of the broader theme, or even mention it?
□ September 19, 2010 8:59 am
It was sent to Springer months ago. Their process takes much longer than you, or I, might have thought.
15. September 18, 2010 11:28 am
1) On Amazon it's 40 dollars cheaper if bought NEW rather than USED. I can see that—
I like to have books that are already broken in.
2) Not available on Kindle yet. Kindle WOULD make sense since you can easily have
your Kindle and not have your laptop or Wi-Fi.
3) I’ll be reviewing both your book and one of Terry Tao’s blog-books.
Where will the reviews appear? On the complexity blog, of course (as well as in
SIGACT NEWS).
4) I’ve already begun reading both of them. Both are AWESOME!
5) Congrads or Congrats on getting the book out!
□ September 19, 2010 9:08 am
Thank you very much.
16. September 18, 2010 3:34 pm
I am thinking about one on quantum algorithms without qubits and another on mathematical surprises and embarrassments.
Mathematical surprises and embarrassments? That sounds quite interesting.
17. September 18, 2010 7:17 pm
Congratulations! Thanks for both, blog and book!
I’m looking forward for the book on quantum computation.
18. September 19, 2010 8:44 am
Congratulations, Dick. My Amazon order went in right away. What a great contribution.
19. September 19, 2010 9:08 am
Congratulations to the CS community for receiving this gem of a book.
It wasn't surprising that the people who were interested in the core of theory would follow your posts with high enthusiasm, given your high reputation as a theoretician.
What surprised me was that after the Deolalikar episode the 'popularity' index skyrocketed.
My wife, who had no interest in theory CS, started taking immense interest in following your blog; of course not at the level where she’d understand everything, but definitely at the level where
a subset of the material would spark her interest to go and read up more.
From many of my friends, who do not actively do research, or do not even actively follow some of it, I started hearing about the 'Lipton blog' :)
I think Vinay Deolalikar deserves some thanks for that as well.
I am almost certain that if one did a frequency distribution of words in your blog (including comments), "Deolalikar" would rank very high with P and NP :)
□ September 19, 2010 9:20 am
But hasn’t Deolalikar’s career been adversely impacted by his P vs NP proof attempt and its subsequent popularization?
Sure, this whole episode has given a popularity boost to this blog in particular and CS theory in general, but it came at a significant cost to one individual.
☆ September 19, 2010 9:40 am
I feel that’s an orthogonal question.
As Scott Aaronson had put it, there were countless severed heads lining his path already. So it was a really bold attempt, and one should get credit even for a failed attempt of
this order, trying to bind ideas from FMT, complexity, and statistical mechanics.
☆ September 19, 2010 2:50 pm
Why do you say that? I see no evidence his career has suffered. I don’t see why it should.
☆ September 19, 2010 5:09 pm
Why would his career be affected? Wasn’t it a genuine attempt?
☆ September 19, 2010 6:53 pm
I think the problem is that he still claims to have a proof after experts have concluded that the approach is unworkable.
It’s unlikely that he would have found fixes to his proof given the difficulty of the problem and in such a short time.
So unless he really does have a correct proof at some point, I think that this could impact negatively on his career.
☆ September 19, 2010 11:45 pm
Everybody has a right to 'save face'. Also, history has taught us that going against a barrage of criticism can be worthwhile, as even 'experts' can be proven wrong. So, I'm quite
anxious to see his next move.
□ September 20, 2010 10:15 am
There is a saying in philosophy that “Philosophical problems are never solved, they are dissolved.” The same can happen in every branch of the STEM enterprise … think about the problems of
vacuum-tube reliability and urban horse-manure disposal, for example. These problems were never solved … but they were dissolved.
In the context of TCS, even failed attempts to prove P≠NP can exert the same salutary effect … when the attempt introduces new proof technologies that are sufficiently strong that the
problems solved by the new technologies come to be viewed as more natural/interesting than the P≠NP problem as originally conceived.
This point-of-view has been (for me) illuminated particularly by Oded Goldreich’s memorial essay “On Promise Problems: in Memory of Shimon Even (1935–2004)”, which greatly helped me towards
an (in retrospect obvious) appreciation that multiple avenues lead to—and therefore multiple natural definitions bear upon—the P≠NP problem.
More broadly, Dick’s weblog (and now, a book too!) has helped everyone appreciate that P≠NP isn’t just a deep problem, it also is a wide problem. For which, well-deserved thanks and
appreciation are arriving from all quarters.
20. September 19, 2010 12:51 pm
Congratulations on the book!
And I, for one, would buy such a t-shirt. ;)
21. September 19, 2010 5:36 pm
I think that your open question
should I put out other collections of these essays?
has a very easy answer. The success of your blog posts and the responses to the publication of your book clearly indicate that you should, provided you feel that you have the time to keep writing
such informative and thoughtful posts. I am truly amazed by the quality of the posts on your blog, which offers a great service to the TCS community. Keep it up!
In case they may be of use, here are two small typos that I spotted while having a look at the preview of the book's initial pages on Amazon.com.
Line 1 of 1.1. He use ==> He used
Line 3 of 1.2 the the ==> the
I will ask our library to buy a copy of the book straight away.
22. September 20, 2010 10:44 am
“You will not make any money on this book. ”
But someone seems to be making money out of this book if it costs almost $80.
If you are not making any money out of it, why not put a pdf file
on your website that anyone can download, instead of having each
interested reader pay $80 to Springer?
Nice book!
□ September 20, 2010 11:59 pm
If you are not making any money out of it, why not put a pdf file
on your website that anyone can download, instead of having each
interested reader pay $80 to Springer?
You really don’t understand human nature, do you?
Dick Lipton is in academia; he doesn't get his kicks from money.
He gets much more than $80 from each book: he gets status (as if he didn't have enough…)
So he doesn't mind spending other people's money for this purpose, and many are happy to oblige and even add a little topping on the cake by commenting on this post.
OK, I will oblige too, "Terrific book, Professor Lipton", but I won't buy it! :-)
23. September 20, 2010 11:26 am
Apologies in advance; I don't know about the history of this theorem. I notice that in the prologue to your book you discuss Steve Cook, but not Levin, and refer to his theorem as "Cook's Theorem."
Elsewhere I’ve heard it referred to as the “Cook-Levin Theorem”. Is this controversial? What is the status of Levin’s contribution?
□ September 20, 2010 12:54 pm
It is called both. I used Cook only; the history is a bit complex, and both are fine names for the great result.
24. September 20, 2010 11:30 am
Congratulations! I am just an interested amateur, but yours is my favorite math/theory blog.
25. September 20, 2010 5:05 pm
Congratulations! I can’t wait to get a copy.
26. September 20, 2010 8:24 pm
Congratulations! Probably polishing blog posts into a book is a lot of work. There is something appealing in the tentative and non-committing blog format. But there is also something very
appealing in the definite and committing book format…
One remark (or rather a question) about the NP=P episode, apropos the nice talk about it mentioned in the previous post. (And I cannot avoid the temptation of mentioning Ken's beautiful and
mysterious algorithm for LP http://www.almaden.ibm.com/u/kclarkson/lp/p.pdf I always wanted to understand it much better.)
This was an opportunity for me (as for other non-experts) to realize the gap in my computational complexity education. One gap that I was always aware of (it is often mentioned here, and was
also mentioned in some comments by Niel and Tim and others) is between non-uniform and uniform computational complexity, where circuit complexity represents the non-uniform version and Turing
machines are where things are uniform. (But what uniformity really says was always mysterious.) My question is this: Is there a way to describe, say just using circuits, some perhaps even stronger
conditions of uniformity? So if you have a sequence of polynomial-size circuits for 3-SAT, this is non-uniform, but maybe you also demand that there is a sequence of circuits efficiently describing
these circuits, and a sequence of circuits describing these other circuits, and so on…
So my questions are: 1) Is there a description of uniformity just in terms of circuits, and 2) Is it possible to come up with an even stronger form of uniformity to give an even more restricted
notion of what effective algorithms in P are?
□ September 21, 2010 7:26 am
I would like to echo Gil's question … and say more broadly that any discussion at all of uniformity would be very welcome, and very interesting. Because if these points are challenging for
mathematicians like Gil, imagine how challenging they are for engineers like me!
Especially, the challenge of validating a Turing machine that claims to accept a language in P is natural to engineers as a validation problem, and to mathematicians as a decision problem. Is
it a decidable problem? If not, can we restrict P so that it *is* a decidable problem? If so, is the validation problem itself in P ? If we decide to ignore these issues, so that (in effect)
an oracle decides membership in P, then does that mean the problem P≠NP is a promise problem? And if so, what is the computational power of that oracle that provides that promise?
The dovetailed intricacies of these (to engineers, very natural) questions illustrate a wonderful lesson of this past summer … the P≠NP problem is not only deep, it is wide.
□ September 21, 2010 3:21 pm
One notion that doesn’t quite work comes from how you prove that circuits can do what Turing machines can do. If you follow that proof, then you find that you can represent each step of a
Turing machine by a circuit that acts on the current values along the tape, the position of the machine, and the state of the machine (the latter two represented as binary sequences). But it
turns out to be much more convenient to get the tape to move at each step and the Turing machine to stay still. So I think a circuit that represented a Turing machine would have one part that
is the same from step to step and another part that sends all the variables to the right or left according to what one of the variables says, or something like that.
In any case, informally I think of a uniform family of circuits as being one that iterates a very simple function, whereas a non-uniform family can do what it likes. (But I’m not sure how
precise I can make that view of it.)
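For scale (standard facts, quoted from memory, so treat the bounds as approximate): the tableau version of this construction turns a $t(n)$-time Turing machine into circuits of size $O(t(n)^2)$, one identical layer of local gadgets per step acting on a row of at most $t(n)$ cells; with the oblivious-Turing-machine trick of Pippenger and Fischer, the size comes down to $O(t(n)\log t(n))$.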
☆ September 21, 2010 9:53 pm
1) Is there a description of uniformity just in terms of circuits, and 2) Is it possible to come up with an even stronger form of uniformity to give an even more restricted notion of what
effective algorithms in P are?
Gil, these are great questions. Unless I am missing something, it is non-trivial to give short, quick answers to them. Let me encourage you to ask them on
http://cstheory.stackexchange.com/ ;-)
P.S. Many congratulations, Dick!
□ September 24, 2010 2:55 pm
1) Is there a description of uniformity just in terms of circuits?
Yes (I think). Here are two, of different kinds:
a) If there is a P-time algorithm (in the problem input size) which generates the circuits in a circuit family (as a function of the input size), then we can consider the family to be
uniform. (I think this is the standard definition of the concept, but I am not an expert, so probably there’s some extra subtlety I don’t know about.)
This covers any circuit family that we could want to call uniform, but as a definition of the concept of P-time, we could complain that it’s circular.
b) If there is a 1-dimensional cellular automaton which evolves the problem input to the problem solution (for a decision problem, the solution would be a single bit in a specified cell
relative to the cells containing the input, which is a stable state of the CA), in polynomial time in input size, then this corresponds to a circuit which is periodic in 2D in a simple way
(one repeat unit per cell per time-unit), and whose state only matters in a quadratically large region relative to the solution time.
This is a very special kind of uniform circuit family, but sufficient to solve all problems in P, since a Turing machine can be easily encoded as a 1D CA. (This is similar to the encodings of
Turing machines as circuits mentioned in Gowers’ replies here — in fact, one of them is probably identical.)
☆ September 24, 2010 3:02 pm
(One way to encode a Turing machine as a 1D CA: in each cell, we represent the tape symbol at that point, the state the Turing machine head would have if it were here now (whose value
doesn't matter if it's not here), and one bit saying whether the head is here now. Clearly, each such state at time t only depends on its immediate neighborhood states at time t-1, which
is all we need for this to work as a CA.)
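To make the encoding concrete, here is a minimal Python sketch of one CA step (my own illustration, not anything from the thread). The cell layout follows the description above; the toy transition table "delta" is an assumed example machine:

def ca_step(cells, delta):
    # cells: list of (symbol, state, head_here) triples, one per tape cell
    # delta: maps (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1}
    new = [(s, q, False) for (s, q, _) in cells]
    for i, (sym, state, here) in enumerate(cells):
        if here:
            q2, s2, move = delta[(state, sym)]
            new[i] = (s2, state, False)     # write the new symbol; head departs
            j = i + move
            new[j] = (new[j][0], q2, True)  # head arrives in the next cell
    return new

# Toy machine: in state "A", flip the bit under the head and step right.
delta = {("A", 0): ("A", 1, +1), ("A", 1): ("A", 0, +1)}
tape = [(b, "A", i == 0) for i, b in enumerate([0, 1, 0, 1])]
for _ in range(3):
    tape = ca_step(tape, delta)
print([s for (s, _, _) in tape])  # -> [1, 0, 1, 1]

Each new cell value depends only on the old values of that cell and its two neighbours, which is exactly the locality a 1D CA requires.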
☆ September 24, 2010 3:22 pm
Correction: when I said my description (a) was “circular”, what I meant was that as a definition of the concept of P-time, it just reduces the definition on circuit families to the
definition on Turing machines, which might not be what you want (as I realized when editing this reply for http://cstheory.stackexchange.com).
□ September 24, 2010 4:16 pm
Thanks, John, Tim, and Ryan for the comments. I did ask the question on the CS questions site. One clarification: my question about an even more restricted class based on a stronger form of
uniformity was still about a complexity class which practically contains all polynomial algorithms known to us, and which is restricted or tame only, perhaps, in the complexity needed to
describe the algorithm.
27. September 23, 2010 11:59 am
Requesting Kindle version of the book please.
□ September 23, 2010 12:59 pm
Kindle is due out soon
☆ September 24, 2010 10:51 am
ty :)
Any idea how soon? I can check here periodically but I might forget thx.
28. September 29, 2010 3:59 pm
Prof. Lipton:
Do you have any particular recommendations for software platforms for publishing via a blog and then a book? E.g. getting WordPress and LaTeX to play well together, etc.?
thanks, -Neal
29. October 12, 2010 2:30 am
Your book is indefensibly expensive. More than $90 for a book is outrageous. It means that someone like me, working in an underfunded Western European country with inadequate libraries, won’t
have access to it except by downloading the pdf from a Belorussian web site.
MathGroup Archive: April 2004 [00560]
strange problems with Random
• To: mathgroup at smc.vnet.net
• Subject: [mg47812] strange problems with Random
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Tue, 27 Apr 2004 04:48:05 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com
Recently Mark Coleman wrote a message with the subject "Differences in
Random Numbers" in which he described his experience of getting
statistically incorrect answers when using
Random[UniformDistribution[0,1]] instead of simple Random[]. Jens Kuska
and Bill Rowe pointed out that the package ContinuousDistributions
contains the code
UniformDistribution/: Random[UniformDistribution[min_:0, max_:1]] :=
Random[Real, {min, max}]
and that seemed to be it. However, I have now encountered a problem
that makes me feel strongly that there is more to this problem than it
seemed to me at first.
I have been doing some Monte Carlo computations of values of
financial options for the Internet-based course on this subject I am
teaching for Warsaw University. In the case of the simplest options,
the so-called vanilla European options, an exact formula (due to Black
and Scholes) is known. Nevertheless, computations with Mathematica using
the function
Random[NormalDistribution[mu,sigma]] (for some fixed mu and sigma) have
been giving answers which are consistently higher than the
Black-Scholes value. On the other hand, when
Random[NormalDistribution[mu,sigma]] is replaced by the following code
using a well-known algorithm due to Marsaglia:
RandomNormal = Compile[{mu, sigma},
  Module[{va = 1., vb, rad = 2.0, den = 1.},
    (* rejection loop: draw (va, vb) uniformly in the square [-1,1]^2
       until the point lands strictly inside the unit disc *)
    While[rad >= 1.0,
      va = 2.0*Random[] - 1.0;
      vb = 2.0*Random[] - 1.0;
      rad = va*va + vb*vb];
    (* Marsaglia's polar transform of the accepted point *)
    den = Sqrt[-2.0*Log[rad]/rad];
    mu + sigma*va*den]];
the answers agree very closely with the Black-Scholes ones.
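For readers who want to reproduce the comparison, here is a minimal sketch in Python (my own reconstruction, not Kozlowski's course code); the parameter values S0, K, r, sigma, T are illustrative assumptions:

import math, random

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # illustrative parameters

def black_scholes_call(S0, K, r, sigma, T):
    # exact Black-Scholes price of a vanilla European call
    d1 = (math.log(S0/K) + (r + 0.5*sigma**2)*T) / (sigma*math.sqrt(T))
    d2 = d1 - sigma*math.sqrt(T)
    N = lambda x: 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))  # standard normal CDF
    return S0*N(d1) - K*math.exp(-r*T)*N(d2)

def marsaglia_normal():
    # Marsaglia's polar method (standard-normal version of the code above)
    while True:
        va = 2.0*random.random() - 1.0
        vb = 2.0*random.random() - 1.0
        rad = va*va + vb*vb
        if 0.0 < rad < 1.0:
            return va*math.sqrt(-2.0*math.log(rad)/rad)

def mc_call(n=200000):
    # Monte Carlo price: average discounted payoff over simulated terminal prices
    total = 0.0
    for _ in range(n):
        z = marsaglia_normal()
        ST = S0*math.exp((r - 0.5*sigma**2)*T + sigma*math.sqrt(T)*z)
        total += max(ST - K, 0.0)
    return math.exp(-r*T)*total/n

print(black_scholes_call(S0, K, r, sigma, T))  # about 10.45
print(mc_call())  # should agree to within Monte Carlo error (roughly 0.05 here)

A sound normal generator should make the two numbers agree up to sampling noise; a biased one shows up as exactly the kind of systematic gap described above.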
I am not sure if this problem is related to the one Mark mentioned but
it seems quite likely. One could in principle test it by replacing
Random[] by Random[UniformDistribution[0,1]] (after loading the
Statistics`ContinuousDistributions` package), but if one does that the
code will no longer compile. That makes the function too slow for tests
that are sufficiently precise, so I have not tested it.
In any case, I am sure that using Random[NormalDistribution[mu,sigma]]
consistently gives wrong answers. I looked at the
Statistics`NormalDistribution` package
but everything looks O.K. to me. I now tend to believe that the problem
lies with Random itself – that is, that Random called with some (or
perhaps only certain) parameters may not be behaving
correctly. Random[] called with no parameters does not seem to suffer
from any such problems.
Andrzej Kozlowski
Chiba, Japan | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Apr/msg00560.html","timestamp":"2014-04-18T19:23:03Z","content_type":null,"content_length":"36619","record_id":"<urn:uuid:33ff3a4a-fed7-42a8-bdd8-e8a3ba4ade89>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Complexity on Knowledge Problem
Add this one to your long reads queue, because it’s well worth it: Tom Vanderbilt writes in Nautilus about the traveling salesman problem and how algorithmic optimization helps us understand human
behavior more deeply. It’s a thorough and nuanced analysis of the various applications of algorithms to solve the traveling salesman problem — what’s the most efficient way (which you of course have
to define, either in time or money or gasoline etc.) to deliver some number n of packages to some number d of destinations, given your number t of trucks/drivers? This is a tough problem for several
reasons, and Vanderbilt’s discussion of those reasons is clear and interesting.
We can start with a simple transportation model with small numbers of packages, destinations, and trucks. But as those numbers increase, the problem becomes dramatically harder: the number of
candidate routes grows factorially with the number of stops. Then think about what happens when the locations of the destinations change every day, as is the case for UPS and FedEx deliveries. Then think about
what happens when you add in heterogeneity of the deliveries; Vanderbilt opens with a girl pointing out that her mother would never buy perishables and then leave them in the car all day, so the
nature of the item changes the constraints on the definition of the efficient route.
Her comment reflects a basic truth about the math that runs underneath the surface of nearly every modern transportation system, from bike-share rebalancing to airline crew scheduling to grocery
delivery services. Modeling a simplified version of a transportation problem presents one set of challenges (and they can be significant). But modeling the real world, with constraints like
melting ice cream and idiosyncratic human behavior, is often where the real challenge lies. As mathematicians, operations research specialists, and corporate executives set out to mathematize and
optimize the transportation networks that interconnect our modern world, they are re-discovering some of our most human quirks and capabilities.
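To see the raw combinatorics behind all this, here is a small illustrative Python sketch (mine, not Vanderbilt's): exact enumeration is factorial in the number of stops, which is why real routing systems lean on heuristics:

import itertools, math, random

stops = [(random.random(), random.random()) for _ in range(8)]  # 8 random stops

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # total length of the closed tour visiting the stops in this order
    return sum(dist(order[i], order[(i+1) % len(order)]) for i in range(len(order)))

# Exact optimum: fix the first stop and try all (n-1)! = 5040 orderings.
best = min(itertools.permutations(stops[1:]),
           key=lambda p: tour_length((stops[0],) + p))
print("optimal tour:", tour_length((stops[0],) + best))

# Nearest-neighbour heuristic: fast even for thousands of stops, but inexact.
route, rest = [stops[0]], stops[1:]
while rest:
    nxt = min(rest, key=lambda s: dist(route[-1], s))
    route.append(nxt)
    rest.remove(nxt)
print("nearest neighbour:", tour_length(route))

At 8 stops the exact search is instant; at 20 stops there are already about 10^17 orderings to check, and that is before adding trucks, time windows, or melting ice cream.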
One of the most intriguing aspects of implementing logistics that reflect good solutions to the TSP that Vanderbilt highlights is the humanity of the driver. Does the dispatcher know the driver, know
that s/he is reliable or not? That may affect how to define the route, and how many stops or changes to put on that particular person’s schedule. Is the driver prone to fatigue, and how does that
fatigue affect the driver’s decision-making? What are the heuristics or rules of thumb that different drivers use to make decisions in the face of uncertainty given the cognitive limitations of
humans? How different will the different heuristics of different drivers be, and how to they affect the logistics of the system?
What Vanderbilt finds is that good logistics systems take the organic, emergent system that incorporates those heuristics into account when devising the TSP algorithm. They leave a human component in
the logistics, but also use the human component to inform and change the algorithm. Another important element is data, because all such algorithms are going to work in conjunction with such data as
location. GIS mapping capabilities improve the data used to establish, test, and monitor TSP algorithms.
Economic theories as recipes
Michael Giberson
Today marks the 100th anniversary of the birth of Julia Child, and surprisingly, it was a blog post on economic theorizing that reminded me of the famous cookbook author this morning. I'm sure
it wasn't quite the metaphor Rajiv Sethi had in mind when he posted "On Prices, Narratives, and Market Efficiency," but the article suggested to me that just maybe economic theories are a bit like recipes.
Sethi’s article reacts to John Kay’s response to Robert Lucas’s article addressing the Queen of England’s question: Why had economists failed to predict the financial crisis? More directly, Sethi
explores the issue of how economists should change the way they work in order to better understand economic fluctuations. Sethi likes Kay’s formulation, of prices as the “product of a clash between
competing narratives,” and he ties it usefully to some existing theorizing by Michael Harrison and David Kreps on speculators with heterogeneous beliefs.
The idea [from Harrison and Kreps] that prices are “obtained through market aggregation of diverse investor assessments” is not too far from Kay’s more rhetorically powerful claim that they are
“the product of a clash between competing narratives”. What Harrison and Kreps do not consider is how diverse investor assessments change over time, since beliefs about transition probabilities
are exogenously given in their analysis. But Kay’s formulation suggests how progress on this front might be made.
Kay suggests that more than just deductive logic will be needed to understand the variety of conflicting narratives and how they change; Sethi, while agreeing, adds that deductive reasoning could be
pushed still further.
The economic theories as recipes metaphor suggests that economics is just a way to get from a bundle of raw elements, different kinds of data and interpretations, to a more useful product. Unsettling
in such a metaphor is that recipes are many and diverse, and unifying principles appear to be hard to find. There is no general theory of cooking that will tell us the one best way to combine a given
set of raw materials into the perfect finished product. It depends on what you want to produce, and what the tools are at hand.
Perhaps too there is no one best way to analyze a given set of observations into the perfect slice of economic understanding; perhaps there is no general theory of economics. This is not Sethi’s
argument, and it is not even my own settled view. Yet if the economy itself is always open to entrepreneurial insight, which is one way to say we are never at a point in the economy where improvement
is impossible, then economics itself is similarly always open to insight and improvement.
Doing what seems like it should work: Experiments, tests, and social progress
Michael Giberson
My title is a little grand, at least the “and social progress,” but maybe it will be justified in some later, more carefully worked out version of the ideas clashing about in my head. As this is a
blog, I’m sharing the more immediate, less carefully worked out version. ;-)
I’ve been reading Redirect: The Surprising New Science of Psychological Change. My wife brought it home from the library and then recommended it to me. (Thanks!) The book makes some surprisingly
strong claims for personal improvements from what the author calls “story editing,” a bundle of techniques that subtly (or sometimes not so subtly) get people to revise their self narratives. (More
from the Scientific American blog, more from Monitor on Psychology.)
Counterpart to that focus of the book is an emphasis on testing social and psychological interventions to discover what actually seems work. Author Timothy Wilson details numerous self-help and
social change projects, some of which capture millions or even billions of dollars in public support, which seem like they should work but when subjected to careful evaluation show no evidence of
success. In fact, some very expensive programs actually seem to worsen the problem that the program was designed to fix: programs to fight teenage smoking that lead to higher smoking rates, programs
to discourage teenage pregnancy that lead to higher pregnancy rates, efforts to discourage littering – or cheating – on campus that have the opposite effects. Wilson advocates a strong preference for
testing social interventions with randomized control experiments when possible (and ethical). When randomized control tests are not possible, then other attempts at measurement and replication are
important even though difficult to do well.
Whether or not “story editing” is key to successful personal and social change – Wilson makes a strong case, but he could be cherry picking his evidence and I’m sure he has his professional critics –
the emphasis on experimentation and testing interventions is an important one.
Lynne’s posts last week on experimentation in social contexts are related: Economic experimentation, economic growth, and regulation and Experimentation, Jim Manzi, and regulation/deregulation. I’m
most of the way through Russ Robert’s EconTalk interview with Jim Manzi that Lynne mentioned in second-listed link (recommended); Manzi makes related arguments in favor of well-designed experiments
where possible, and for trial and error experimentation where controlled experimentation is not possible.
In both Wilson’s book and the Manzi interview (and apparently in Manzi’s book Uncontrolled, which I haven’t read yet), the limits of multivariate analysis of naturally generated data – i.e., almost
all econometric analysis – are examined and found wanting. As Manzi explains, “omitted variable bias” is massive when examining data on human systems; the systems are simply too complex to produce
reliable, non-obvious predictions via multivariate analysis because you cannot control for all of the possible effects and interactions influencing the data. He suggests that while 90 percent of
studies relying on well-designed randomized control experiments are subsequently replicated, that figure drops to 20 percent or so for studies relying primarily on well-designed multivariate analysis.
In a post on the deterrence effect of the death penalty, Timothy Taylor provides an example of the difficulties of using multivariate analysis to examine social policy. Taylor draws on a recent
National Research Council study on the topic, which like a similar study published in 1978 has concluded “available studies provide no useful evidence on the deterrent effect of capital
punishment.” Taylor then explains several reasons why it has been hard to draw firm conclusions from the data. While he doesn’t use the term “omitted variable bias,” it is among the problems that the
NRC study finds hampering results in this area.
The views of both Wilson and Manzi, and the case study on the effects of the death penalty, all point to a certain humility concerning our claims to understand how the world works. But humility isn’t
the end of the story, it isn’t an argument to stop; it is an argument to trust our beliefs about the social world less conclusively and also to trust them selectively: trust knowledge derived from
replicated randomized-control experiments most, trust knowledge from replicated multivariate analysis much less, trust knowledge based on trial and error learning less as well.
These ideas will, once better worked out in my head, probably also mention Vernon Smith’s work on constructivist and ecological rationality. Of course, V. Smith is known to be a fan of experimental
approaches to understanding social phenomena as well.
The constructivist way forward: experimentation! testing! social progress!
New paper: Knowledge Problem
Lynne Kiesling
I have a new paper that may be of interest to KP readers, since the subject of the paper is the same as the name of this site: Knowledge Problem. I am honored to have been invited to contribute this
paper to the forthcoming Oxford Encyclopedia of Austrian Economics (Peter Boettke and Chris Coyne, eds.). Here’s the abstract:
Hayek’s (1945) elaboration of the difficulty of aggregating diffuse private knowledge is the best-known articulation of the knowledge problem, and is an example of the difficulty of coordinating
individual plans and choices in the ubiquitous and unavoidable presence of dispersed, private, subjective knowledge; prices communicate some of this private knowledge and thus serve as knowledge
surrogates. The knowledge problem has a deep provenance in economics and epistemology. Subsequent scholars have also developed the knowledge problem in various directions, and have applied it to
areas such as robust political economy. In fact, the knowledge problem is a deep epistemological challenge, one with which several scholars in the Austrian tradition have grappled. This essay
analyzes the development of the knowledge problem in its two main categories: the complexity knowledge problem (coordination in the face of diffuse private knowledge) and the contextual knowledge
problem (some knowledge relevant to such coordination does not exist outside of the market context). It also provides an overview of the development of the knowledge problem as a concept that has
both complexity and epistemic dimensions, the knowledge problem's relation to and differences from modern game theory and mechanism design, and its implications for institutional design and
robust political economy.
In this paper I analyze the development of the two categories of the knowledge problem — the complexity knowledge problem and the contextual knowledge problem — and explore both the history of the
development of these concepts and their application in robust political economy and new institutional economics. As is the hallmark of a good research project, I think on balance I learned more than
I created in the process of writing this paper.
One other thing I made sure to include was a discussion of how the knowledge problem and its development relates to game theory and mechanism design, through the work of Oskar Morgenstern (and then
through some of the work of Herb Simon and Vernon Smith, among others).
Tying together economics, institutional design, history of thought, and epistemology, I hope you find this paper informative and useful! I'll also make sure to update when the full volume is published.
Music, harmony, and social cooperation
Lynne Kiesling
I am a big fan of English renaissance choral music, particularly sacred polyphony from Tallis and Byrd (and stretching back to Taverner, but he’s not as distinctively polyphonic). One of the best
ensembles performing such music is Stile Antico, a group of 13 British singers who do an outstanding job with this music, and whose recordings I have recommended here before. Especially at this time
of year, their music really resonates and adds joy and beauty to life.
A couple of weeks ago we got to hear Stile Antico perform live in Milwaukee: Thomas Tallis’ Puer Natus Est mass interspersed with pieces from Byrd, White, and Taverner. The music was gorgeous, the
voices delightful, and the artists charming and gracious.
But what really struck me was their method of decentralized coordination. Typically when we think of musical performance beyond, say, a chamber quintet, coordination involves hierarchy in the form of
a conductor, to “keep everyone on the same page”. The larger the number of performers doing different things, the harder to coordinate, and therefore the greater need for a conductor … right?
Not so in this case. 13 singers, each with a particular part, bringing a distinctive element to the work. But in some ways the music is simultaneously so lush and yet so spare that if their timing is
off, the beauty of the result is diminished. 13 singers with no conductor, and they coordinate by taking their visual and verbal cues from each other in a dynamic and evolutionary manner. This is a
vivid example of decentralized coordination.
Of course the goal is harmony (in the general sense). If each individual acts and reacts to the actions of the other individuals in a way that produces a harmonious outcome, that’s beauty. And it’s
an emergent outcome; each has his or her own score and acts accordingly, adapting to the actions of the others in a way that creates emergent harmony.
The music metaphor illustrates achieving emergent order through decentralized coordination, and it’s a metaphor for social cooperation too. Adam Smith employs the harmony metaphor for social
cooperation in The Theory of Moral Sentiments, in which he invokes harmony as a desirable outcome of social interaction repeatedly (and refers to the music metaphor directly in the last reference).
Note the emphasis on harmony as distinct from uniformity — each individual brings personal, private, heterogeneous features to social interaction (whether musical or economic), and they are not the
same, not uniform. Each has an incentive, a desire to coordinate, to harmonize; in music it’s finding the complementary notes, in social systems it’s grounded in our innate desire for sympathy and
mutual sympathy, according to Smith. Each individual brings something different to the party/performance/market. The most beautiful and sublime outcomes emerge when each acts on its individual
traits with a view toward creating harmony and sympathy. And it does not necessarily require the top-down imposition of control or system-wide hierarchy, but can be achieved through decentralized coordination.
Of course there are limits to applying the music metaphor to institutional design and social cooperation, such as the scale/number of actors. But it reminds us of the possibility of cooperation and
harmony through decentralized coordination, without the need for imposed system-level control.
An example of what not to do in persuasion
Lynne Kiesling
Alex Tabarrok has an excellent post this morning at Marginal Revolution:
David Warsh and Paul Krugman try to write Hayek out of the history of macroeconomics. …
It is true that many of Hayek’s specific ideas about business cycles vanished from the mainstream discussion under the Keynesian juggernaut but what Krugman and Warsh miss is that Hayek’s vision
of how to think about macroeconomics came back with a vengeance in the 1970s. …
… Hayek was an important inspiration in the modern program to build macroeconomics on microfoundations. The major connecting figure here is Lucas who cites Hayek in some of his key pieces and who
long considered himself a kind of Austrian.
I offer this as a cautionary “what not to do” note to students in particular, but also to all of us. In the piece to which Alex is responding Krugman chooses his definition of “modern macroeconomics”
in a way that clearly maps into his preconceptions and reflects his confirmation bias. Such a rhetorical stratagem is unscientific and anti-intellectual. It’s also easy to critique (no disrespect
intended for Alex’s good, pointed critique) by simply looking at the literature and seeing that modern macro encompasses a breadth of ideas and approaches, many of which are substantially informed by
models and methodological approaches that Krugman chooses to reject.
Thus both on intellectual grounds and with a view toward crafting an argument that is persuasive to those who don’t already agree with you and share your worldview, don’t do this. Being more
ecumenical and treating the contributions of your intellectual opponents with respect will make your arguments more thorough, effective, and persuasive.
On a substantive note, I’d like to echo the recommendation that Jacob Levy made in the comments on Alex’s post; the conclusion of Warsh’s essay is a good one, and suggests that incorporating more of
a complexity approach into macro would enable us to build better models:
That said, it is pleasing to think that Hayek himself may yet turn out to have been a very great economist after all, far more significant than Myrdal or Robinson, when seen against the
background of a broader canvas. The proposition that markets are fundamentally evolutionary mechanisms runs through Hayek’s work. Caldwell, of Duke University, notes that, starting with the
Constitution of Liberty, “the twin ideas of evolution and spontaneous order” become prominent, especially the idea of cultural evolution, with its emphasis on rules, norms, and decentralization.
These are today lively concepts in laboratories and universities around the world. “It could have been that Hayek was running a different race, and the fact that he didn’t do well in the
Walrasian race was that he wasn’t running in it—he was running in the complexity race,” says David Colander, of Middlebury College. Hayek may yet enter history as a prophet of evolutionary
economics, a discipline dreamt of since the days of Thorstein Veblen and Alfred Marshall in the late nineteenth century but not yet forged, whose great days lie ahead.
UPDATE: See also Pete Boettke on this same theme, motivated by Alex’s post.
“Death of a currency”
Lynne Kiesling
One of the great topics of discussion with my in-laws over the holidays was the impending demise of the euro, and whether there was any hope for, or reason to, maintain the euro given the sovereign
fiscal challenges of the member countries. The disastrous German and Italian bond auctions, and Spain’s cancellation of its sovereign bond auction, seems to portend “eurogeddon”. One of the articles
that helped me interpret these events was this column from Jeremy Warner in the Telegraph:
No, what this is about is the markets starting to bet on what was previously a minority view – a complete collapse, or break-up, of the euro. Up until the past few days, it has remained just
about possible to go along with the idea that ultimately Germany would bow to pressure and do whatever might be required to save the single currency.
The prevailing view was that the German Chancellor didn’t really mean what she was saying, or was only saying it to placate German voters. When finally she came to peer over the precipice, she
would retreat from her hard line position and compromise. Self interest alone would force Germany to act.
But there comes a point in every crisis where the consensus suddenly shatters. That’s what has just occurred, and with good reason. In recent days, it has become plain as a pike staff that the
lady’s not for turning.
In addition to the striking parallel images of Merkel and Thatcher as women who are heads of state fighting (almost too late) for fiscal responsibility, Warner’s column does a good job of pointing to
the kind of market and policy movements we can expect in the next couple of weeks. Clearly many parties behaving responsibly have already laid out some contingency plans to mitigate the effects.
But I have a simple-minded question to ask, perhaps one that I should have asked two years ago: why are so many people so worried about contagion from sovereign default in the eurozone? Should they
be worried?
Typically, interconnected financial markets have negative feedback loops that lead to the dampening of propagation; price changes as investors move money around in response to changes in relative
risk are an example of such a negative feedback. But with so many policies designed to insulate, protect, bail out parties, policies that introduce asymmetries by insuring against losses, have these
negative feedback loops been distorted and replaced or outweighed by positive feedback loops that amplify effects? That’s how I’ve been thinking about the bailouts and subsidies and loan guarantees
in both the EU and the US – policies that distort the equilibrating negative feedback effects and introduce asymmetries that create destructive positive feedback effects, whereas without such
policies any disequilibrating events or shocks would have been smaller, dampened by the normal negative feedback effects in markets. So I would normally say that the forces of self-organization exist to
buffer and counter the forces of contagion, but the political rules in operation have stifled the forces of self-organization and exacerbated contagion.
One of those forces of self-organization and negative feedback is bankruptcy and default, both private and sovereign. I wonder if the EU will be able to activate the salutary re-equilibrating
benefits of bankruptcy and default while simultaneously being able to either stem contagion or have the political fortitude to carry on through the pain and cost that is larger than it might have
been otherwise.
Students for Liberty talk: economics and complexity
Lynne Kiesling
UPDATE: I’ve had a report that the link to the slides is not working, but I can’t get it not to work … so if you are having trouble please let me know in the comments and I’ll go bug-stomping.
UPDATED UPDATE: mischief managed, I think!
On Saturday I was honored to be the morning keynote speaker at the Students for Liberty Chicago Regional Conference. In only four years SFL has grown into a large and effective organization for
bringing together students who share an interest in exploring and promoting individual liberty, classical liberal ideas, and public policy that reflects those principles.
My talk, Beneficial Complexity (pdf of slides), took some basic economic concepts and looked at them through the lens of complexity science. My main objective was to encourage the attendees to see
ways to integrate some core classical liberal ideas into their own thinking, their own work, their persuasive discussions with others, and their advocacy activities.
Start with a story: a flower market selling calla lilies grown in Colombia. Lots of buyers and sellers, lots of parties involved in getting the flowers from places like Colombia to your neighborhood
flower shop. Mostly impersonal exchange, but not entirely devoid of personal relationships. And without any one person knowing how to do so in its entirety, the flowers get from Colombia to my flower
shop for me to buy. If you are familiar with Leonard Reed’s I, Pencil essay about the highly decentralized yet coordinated processes that combine to bring pencils to consumers, you recognize this
theme. One of the fundamental economic and epistemological concepts that I, Pencil illustrates is the knowledge problem — no one person knows how to make a pencil, or how to grow and sell calla
lilies globally. We associate this idea primarily with the work of F.A. Hayek, as stated in his famous essay “The Use of Knowledge in Society” in 1945 (which also inspires the title of this blog).
Knowledge is dispersed and, importantly, it is also private and often subjective. Your willingness to use some of your resources to buy a cup of coffee, or a pencil or a calla lily bouquet, is known
only to you and is context dependent, which means that much of the knowledge that goes into individual decisions is local and hard to centralize. Yet calla lilies show up in Chicago shops, as do
pencils, in ways that satisfy the preferences of consumers and profit producers along the supply chain. How does that happen? Markets and prices create networks of dispersed, local, private knowledge
that connects and aggregates that knowledge and sends valuable signals to economic actors about the relative benefits and costs of the decisions they take.
When we think about economic activity taking place in networks, and how exchange and prices connect networks and extend and deepen them, we are using the tools of complexity science to understand
economic behavior. Think back to the flower market and the pencil. As Eric Beinhocker writes in The Origin of Wealth, “The complexity of all this activity is mind-boggling. … all the jobs that must
get done, all the things that need to get coordinated, and the timing and sequence of everything. … there is no one in charge of the global to-do list. … Yet, extraordinarily, these sorts of things
happen every day in a bottom-up, self-organized way. … The economy is a marvel of complexity.” (2006, pp. 5-6)
Technically speaking, what is a complex system? It’s a system or arrangement of many component parts, and those parts interact. These interactions generate outcomes that you could not necessarily
have predicted in advance. In other words, it’s a non-deterministic system. Scholars in many different fields use this general idea to study a range of systems and phenomena, from molecular and
cellular interactions in physics, chemistry, or biology, to the organization of the brain in neuroscience, to species and environment interactions in ecosystems, to cascading network failures in
electric power systems, to networks of co-author collaborations in particular fields of research, and many other applications.
These applications share an interest in three features of a complex system: its structure (how are the parts connected, how do they interact?), its rules (physical or human-generated), and its
behavior. Now apply this idea of a complex system to our economic and social interactions. Go back to Beinhocker's description of the market economy as "a marvel of complexity" in which all sorts of
activities get coordinated in a “bottom-up, self-organized way”. Think of the economy as a complex system, in which individuals are the agents, the component parts, with dispersed private knowledge.
We are connected in many ways — social relations, economic exchange, organizations, and so on — and our interactions shape individual decisions, individual behavior, and ultimately overall system
behavior. The profound insights of writers like Ferguson, Smith, Hayek, and others is that individual agents have their own preferences and private knowledge, but we interact, and in so doing we
generate system-level behavior that is generally self-organized — emergent order. In this unplanned order no one person can predict the specific outcome we’ll achieve, but we still experience over
and over and over in human history that order generally does emerge.
But order doesn’t necessarily always emerge, and the order that emerges sometimes isn’t pretty. That observation leads to the elephant in the room with respect to emergent order in social system: the
rules. All exchange takes place within a framework of rules, an institutional context. Those institutions include formal laws enforcing property ownership, contracts, and punishment for theft and
fraud, as well as informal social norms and peer pressure that may, for example, affect how the bargaining and negotiation in the exchange take place. Rules shape how agents interact, and shape their
incentives … and as a result they can also affect system behavior. One important implication of studying economies as complex systems is that when we design institutions, we should model and test and
strive for rules that enable order to emerge, which means an emphasis on process rather than using rules to achieve some specific outcome. That’s one important intersection of classical liberal
principles with complexity science.
I’d like to raise a point that I didn’t in my remarks on Saturday, but builds on a discussion in a later session at the conference. One challenge that we often face as classical liberals is putting
“the human face” on our ideas, countering the perception that a libertarian society would be cold, calculating, and lacking in compassion or personal relationships. Nothing is further from the truth,
and the language of complexity science gives us a way to communicate that reality, because it frames economic/social interactions as relationships and connections. It emphasizes the mutuality of
exchange and the multiple dimensions of our relationships with others in our voluntary associations.
Resiliency comes from more risk of bank failure, not less
Lynne Kiesling
In the always-smart-and-interesting City AM paper from London, Anthony Evans makes an important argument that has been overlooked in financial regulation debates: risk of failure is what creates
system resilience, and regulation creates brittle monocultures. He writes in the context of last week’s Independent Commission on Banking (ICB) recommendations for creating regulatory divisions
between retail banking and investment banking and implementing other structural changes, with the objective of a more resilient financial system. Evans critiques the over-simplified concept of risk
that the report employs:
We can’t say that one thing is more risky than another – only that different activities expose people to different types of risk. Bodies like the ICB needs to shift from trying to – impossibly –
reduce risk to placing responsibility on those who are choosing between different risks.
For example, ordinary depositors should not be protected from risk – they need to confront it. It can seem counterintuitive, but the genuine threat of bank runs is probably the best disciplinary
device to prevent them from happening.
Evans’ argument stems from an assertion that he makes later in his column, that risk cannot be reduced but can only be transferred from one party to another. While I think that assertion is
debatable, the important insight from this part of his argument is that resiliency in complex market systems arises from agents having responsibility for losses associated with taking additional
risks, in addition to their receiving profits associated with taking additional risks. Breaking that connection among risk, profit, and loss is one of the core causes of the brittleness of the
financial system over the past two decades, and the transmission and magnification of those losses.
Evans makes a second important observation: when regulation imposes a higher degree of uniformity in a complex system, it reduces resilience of the overall system by creating separated monocultures:
By making arbitrary decisions about what must stay within fences and what doesn’t, or about the level of equity capital that banks will be required to keep, regulators make banking more
homogenous. Banks are already free to set up their own ring fences, and a competitive system would be one where they can experiment with different ones. …
All regulations create clusters of errors – by their nature they harmonise behaviour and therefore increase systemic dangers. Policy efforts need to focus on reducing barriers to exit, making it
easier for banks to fail, making the costs of failure more visible and ensuring they fall on those who make bad decisions – bankers, regulators, or even the public.
We see this paradox of control in all forms of economic regulation; in this case in financial regulation, but also in the electricity regulation that is the focus of my attention. Regulators believe
that by increasing control, by limiting the range of actions that agents can take in complex systems, they are reducing the risk of bad outcomes. But what they do not realize (or choose to ignore)
is, as Evans points out here, that by imposing more top-down centralized control on their actions and interactions, they reduce the incentives of the agents to develop their own forms of individual
control based on their local knowledge and their own experimentation. Thus regulation makes this complex system more rigid, more brittle, less resilient, and therefore regulation does not achieve its
stated goals.
Note here that I am using the tools and language of complexity science and complexity economics, but you can see in this discussion where moral hazard shows up, where you could talk about the
failures of corporate governance (as does Charlie Calomiris), etc. Framing the objective as a resilient system broadens the focus beyond top-down regulation to include the individual, decentralized
institutions that can keep dangers from becoming systemic. Thinking about regulation in terms of the locus of control and the consequences of the imposition of control in a complex system is more
likely to enable us to incorporate the costs of imposing control into the analysis, and to harness decentralized institutions to enable a more resilient system.
Worried about too much demand elasticity in electric power markets
Michael Giberson
Will electric power consumers facing smart-grid enabled real time prices have the potential to accidentally destabilize the power grid and cause a blackout? A paper presented at a recent IEEE
conference says it is a possibility. The surprising culprit? Too much price elasticity in the market demand function.
It is a surprising culprit because consumer demand for electricity is currently notoriously inelastic (that is to say, not responsive to changing prices) in the short run, in part due to the way
standard regulatory rate structures end up with consumers being presented with relatively unchanging prices reflecting a longer-term average cost of production. Prices don’t change much, so consumers
don’t watch prices much. But this price inelasticity of demand doesn’t mean the quantity of electricity consumers want to consume is unchanging – consumers want more or less electricity throughout
the day in response to ordinary household schedules and in response to outside temperatures and building heating and cooling demands. Consumer demand for power responds to a lot of things, but rarely
to changes in the price of power itself.
Because of the way the current grid is designed, the quantity of energy supplied and demanded must be balanced continuously. Therefore, the grid is typically operated to take the quantity of power
demanded as a given and make whatever adjustments in the quantity supplied to maintain system balance. (In brief, because prices can’t do much work coordinating supply and demand in the short-run,
all of the coordination must be done by adjusting quantities. Grid operators can typically control suppliers but not consumers, so quantity-based supply side adjustment does most of the work of
keeping the market balanced.)
The authors, three engineers at MIT, worry that if too many consumers facing real time prices pick similar high price points at which to cycle off appliances (or low prices as which to charge
electric vehicles), that the market demand function will acquire highly price elastic segments in which quantity demanded will suddenly drop off (or spike up) at rates faster than the supply side can
safely accommodate. Therefore, a blackout risk. To counter this possible risk, the authors suggest diversifying price signals sent to consumers, or employing hourly instead of 5-minute price signals,
or using rolling-average prices to consumers rather than the location-specific current marginal price. They admit their safeguards would hamper the efficiency of market results, the efficiency loss
being essentially the price paid to mitigate the possibility of a price-responsive demand shock to the system.
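The mechanism the authors worry about is easy to see in a toy simulation. The sketch below is purely illustrative (mine, not from the MIT paper), and every number in it is an assumption: many loads cycling off at an identical price point produce a demand cliff, while diversified set-points produce a smooth demand curve.

import random

N = 10000        # price-responsive loads (illustrative)
LOAD_KW = 2.0    # demand per load when running (illustrative)

def aggregate_demand(price, thresholds):
    # each load stays on only while the price is below its personal set-point
    return sum(LOAD_KW for t in thresholds if price < t) / 1000.0  # MW

identical = [50.0] * N                                        # all pick $50/MWh
diversified = [random.uniform(30.0, 70.0) for _ in range(N)]  # spread-out set-points

for p in (49.0, 50.0, 51.0):
    print(p, aggregate_demand(p, identical), aggregate_demand(p, diversified))
# With identical set-points, demand collapses from 20 MW to 0 MW as the price
# crosses $50; with diversified set-points it slides down only a little.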
In my view, the idea of having so many real-time price-aware consumers responding in the market remains so far-fetched that I'm not willing to worry that so many of them will coordinate their
home energy management systems on the same price points and unwittingly bring down the system.
And well before this possibility of too-much consumer responsiveness comes about, I suspect most RTOs will be paying suppliers for ramping capability and charging consumers for using it in ways that
will enable sufficient short-run system responsiveness. So I’m not ready to worry now about this problem, and don’t think that I’ll need to worry about it later, either.
(See MIT media relations summary here, HT to Scientific American via Economist’s View.) | {"url":"http://knowledgeproblem.com/category/complexity/","timestamp":"2014-04-17T15:37:34Z","content_type":null,"content_length":"146535","record_id":"<urn:uuid:91aebd1e-a282-4ce1-8d81-e4d15eeef575>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
Top 10 Unknowable Things
There are lots of things we don’t know; personally I’m a veritable cornucopia of ignorance. But there is a difference between things we don’t know and things that can’t be known. For example, no-one
knows when Shakespeare was born (although we do know when he was baptized). However, it’s not impossible that in the future we could find out – a long lost document might be found that mentions his
birth, so Shakespeare’s true date of birth is not unknowable, just unknown. This list contains 10 things that are unknowable in principle. Not only are they unknown now, they can never be known.
Most of these are mathematical; I’ve tried to make it as nontechnical as possible – apart from anything else, I’m no mathematician so I’ve tried to dumb it down enough so that I can understand it.
Unknowable Thing: What’s in a set of sets that don’t contain themselves?
We have to do a little mathematics for several of these items! This is the first on the list because, in a sense, the concept of the “unknowable” starts with this paradox discovered by Bertrand
Russell in 1901.
Let’s start with the idea of a set. A set is a collection of objects – for example, you could have the set of positive even numbers that contains 2, 4, 6, 8… or the set of prime numbers containing 2,
3, 5, 7, 11… so far so good.
Can sets contain other sets? Yes, no problem – you could have a set of sets that contain other sets – and that set would, obviously, contain itself. In fact, you can split sets into two types – those
that contain themselves and those that don’t.
So, consider the set (S, say) of all sets that don't contain themselves. Does S contain itself? If it does, then it shouldn't be in the set; but if it doesn't, then it should. So S is continually hopping
in and out of itself.
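In symbols (a standard way to state the paradox): let S = {x : x ∉ x}; then S ∈ S if and only if S ∉ S – a contradiction either way.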
This paradox caused quite a lot of consternation amongst mathematicians. Imagine someone proving that a number could be simultaneously even and odd; it's similarly worrisome. Ways around the
paradox have since been found – essentially by redefining set theory so that such a set cannot be constructed.
It has been said that the problem with people’s perception of the universe is that our brains are only used to dealing with small numbers, short distances and brief periods of time. Graham’s number
is big enough to make most people’s brains start to steam; it’s really big; to put it into context, let’s look at some, so-called, big numbers:
Most people have heard of a googol – for most purposes it’s a big number – 10^100 which is 1 followed by 100 zeroes.
There are much bigger numbers out there though; a googolplex is 1 followed by a googol zeroes and the mathematician Stanley Skewes has defined numbers much bigger than a googolplex.
To put these into context, the smallest of them (the googol) is still much bigger than the number of particles in the universe (around 10^87).
However, Graham's number knocks these "toddlers" out of the ground – it arose in Ronald Graham's (to me) incomprehensible work on multi-dimensional hypercubes, as an upper bound for the answer
to a problem in Ramsey theory. Suffice it to say that it is way bigger than Skewes' numbers and, in fact, the universe isn't big enough to store the printed version. Even if each digit were the size of an electron. Not
even close.
The truly wonderful thing about Graham’s number is that it’s possible to calculate the last few digits and we know it ends in a 7.
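In fact the last digits are easy to compute, because Graham's number is (ultimately) a colossal power tower of 3s, and the last d digits of such a tower stabilize once the tower is tall enough. A short Python sketch (mine, not from the article):

def phi(m):
    # Euler's totient by trial factorization (fine for the small moduli here)
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(height, m):
    # last digits (mod m) of the power tower 3^3^...^3 of the given height
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    e = tower_mod(height - 1, phi(m))
    # the true exponent is enormous, so add phi(m) before applying Euler's theorem
    return pow(3, e + phi(m), m)

d = 10
print(tower_mod(100, 10**d))  # 2464195387 - the last ten digits; note the final 7

The same digits appear at the end of any sufficiently tall tower of 3s, and hence at the end of Graham's number – even though almost all of its other digits can never be written down.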
Unknowable Thing: What’s the smallest positive integer not definable in under eleven words?
This is a problem in the philosophy of mathematics. Just to make things a little clearer – an integer is a whole number (1, 2, 3 etc), and for smaller integers, it’s easy to define them in words:
“The square of 2”= 4
“One more than 4” = 5
…and so on. Now as a thought experiment – consider how many eleven word sentences there are – obviously there are a lot; but there’s only a finite number of words (around 750,000 in English) so
there’s only a finite number of eleven word sentences – at some point, you’d run out and there would be an integer you couldn’t define. Except, “The smallest positive integer not definable in under
eleven words” only contains ten words, so you can define it in under eleven words.
This is called Berry’s paradox and in fact, it’s a kind of “sleight of hand” with language – we’re subtly moving from naming numbers to describing them, but still no-one can come up with that number!
Unknowable Thing: Will a computer program ever stop?
When I sat through Pure Mathematics classes at school, it was a common complaint that what we were learning was “useless.” Unfortunately, the teacher simply responded with “you’re learning this
because it’s on the syllabus.” The Turing Halting problem sounds like a grade-A useless, entirely academic, waste of time. Except that it led to the development of digital computers.
Alan Turing was an English mathematician and a child prodigy, particularly in mathematics. His work on computing machines was entirely theoretical at first; he was working on the idea of describing
mathematical statements entirely numerically so they could be processed by a theoretical computing machine. He thought up the concept of a general purpose computing machine (now called a Turing
Machine) as a thought experiment – he didn’t envision someone actually building one.
He reasoned that a computer program must either run forever or stop. He proved that it’s impossible to automatically determine which will happen – I know you might argue you could “run the program
and see what happens” – but supposing it only stops after 7 trillion years?
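The heart of Turing's proof fits in a few lines. The Python sketch below is illustrative (the wording is mine): any candidate halting-decider can be fed a program built to contradict it, and naive_halts is a deliberately wrong stand-in for whatever decider someone might propose:

def naive_halts(prog, arg):
    # a stand-in for a claimed halting decider; any fixed choice fails
    return True

def paradox(prog):
    if naive_halts(prog, prog):
        while True:       # loop forever exactly when the decider says "halts"
            pass
    # otherwise, halt immediately

# If naive_halts(paradox, paradox) returns True, then paradox(paradox) loops
# forever; if it returns False, paradox(paradox) halts. Either way the decider
# is wrong about this one input - so no correct decider can exist.
print(naive_halts(paradox, paradox))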
A little more about Turing: his line of reasoning is particularly remarkable because he did it in 1936 – years before the first digital computer was built. World War II started in 1939 but Turing had
been working on code-breaking at Bletchley Park for a year before that, trying to decipher the German Enigma code. It was clear that a "manual" approach was too slow, and Turing specified the first
decoding machine (called a Bombe); this led to Colossus – arguably the first programmable, digital computer, which could automatically run through many possible "keys." His work was so important to
decryption that much remained secret long after the war ended; some was only published this year – 60 years after it was written.
Unknowable Thing: There are numbers that can’t be computed.
This is another mind-bender proved by Alan Turing. For a start, there is more than one "infinity." For example, how many positive, whole numbers are there? Why, infinitely many – they never stop.
How many positive, even numbers are there? The same – if you double a positive, whole number you get a corresponding even number, so there must be the same number of each.
Okay, how many real numbers are there? Real numbers include all the fractions, irrational numbers (such as pi) and whole numbers (positive or negative). Well, there are a lot more of them than
there are whole numbers – Cantor's diagonal argument shows that the real numbers cannot be paired off one-for-one with the whole numbers, so the number of real numbers is a much bigger infinity than the number of whole numbers.
With this concept firmly in place, you can reason thus:
Suppose you start writing computer programs to generate real numbers, one for each real number.
You count each program; the first one is “1”, the second, “2” and so on – as you’re counting, you use the positive, whole numbers.
The problem is that although you’re happy to write an infinite number of programs, that infinity is way smaller than the infinite number of real numbers, so there must be many (in fact, most) real
numbers missing – that can’t be calculated.
Unknowable Thing: In mathematics, there are true things that can’t be proved true – and we don’t know what they are.
This brain-hurting theorem was developed by Kurt Gödel. The concept dates back to 1900, when David Hilbert proposed 23 "problems" in mathematics that he would like to see solved in the upcoming
century. One problem was to prove that mathematics was consistent – which would be jolly nice to know. However, in 1931, Gödel blew that out of the water with his incompleteness theorems – I won't go
through the theorem in detail here, partly because I don’t understand the full detail, but mainly because it took me three separate lectures before I even felt I was getting there, so if you’re
interested: Wikipedia is your friend!
In summary, the theorem shows that you can’t prove mathematics consistent using just mathematics (you’d have to use a “meta-language”). Furthermore, he also showed that there are true things in
mathematics that can’t be proved true.
When I learnt the theorem, it was suggested that the famous Fermat’s Last Theorem might be such a “true thing that can’t be proved true,” but that was spoiled as an example when Andrew Wiles proved
it true in 1995. However, here are a couple of things that might be true, but not provable:
“There’s no odd perfect number.”
A perfect number is a positive, whole number whose divisors (excluding the number itself) add up to the number. For example, 6 is a perfect number – 1 + 2 + 3 = 1 * 2 * 3 = 6.
28 is the next perfect number. Perfect numbers occur rarely and up to now only 41 perfect numbers have been found. No-one knows how many there are – but it’s between 41 and infinity!
So far, all the perfect numbers have been even but, again, no-one knows if there is an odd one yet to be found, but if there is one it’s a very big number; bigger than 10^1500 – (1 with 1500 zeroes
after it).
“Every even number is the sum of two primes.”
A prime number is divisible only by itself and 1, and it's a curious fact that, so far, every even number greater than 2 that's been tested is the sum of two of them – for example: 8 = 5 + 3 or 82 = 79 + 3. Again,
it's known to be true for a lot of numbers (up to around 10^17), and the bigger an even number is, the more pairs of primes there are that could add up to it, so it seems ever more likely the higher you get, but
who’s to say there isn’t a really big even number out there where it isn’t true?
Still in the world of provability, we come to Tarski's undefinability theorem; but first, to tantalize, here is something of Tarski's background.
He was the son of Jewish parents born in pre-war Poland, and he was very lucky. He was born Alfred Teitelbaum in 1901. There was widespread antisemitism in pre-war Poland and in 1923 Alfred and his
brother changed their surname to "Tarski" – a name they made up because it "sounded more Polish." They also changed their religion from Jewish to Roman Catholic – although Alfred was actually an atheist.
In the late 1930s, Tarski applied for several professorships in Poland but was turned down – luckily, as it turned out. In 1939 he was invited to address a conference in America which he probably
wouldn’t have attended if he’d recently taken up a professorship. Tarski caught the last ship to leave Poland before the German invasion the following month. He had no thought that he was “escaping”
from Poland – he left his children behind thinking he would be returning soon. His children survived the war and they were reunited in 1946, although most of his extended family were killed by the
German occupiers.
Back to the theorem: Tarski proved that arithmetical truth cannot be defined in arithmetic. He also extended this to any formal system; “truth” for that system cannot be defined within the system.
It is possible to define truth for one system in a stronger system; but of course, you can't define the stronger system's own truth within it – you'd have to move on to a still stronger system, and so on, indefinitely searching for the unreachable truth.
Unknowable Thing: Where is that particle, and how fast is it going?
We leave the brain-hurting world of mathematics, but alas we enter the even more cortex-boggling world of quantum physics. The uncertainty principle arose when studying sub-atomic particles and
changed how we view the universe. When I was at school, we were taught that an atom was like a mini solar system with a sun-like nucleus in the middle with electrons orbiting, and the electrons were
like tiny marbles.
That is so wrong – and one of the key discoveries along the way to showing that was Heisenberg’s uncertainty principle. Werner Heisenberg was a German theoretical physicist who worked closely with
the Danish physicist Niels Bohr in the 1920s. Heisenberg’s reasoning goes like this:
How do I find out where a particle is? I have to look at it, and to look at it I have to illuminate it. To illuminate it, I have to fire photons at it; when a photon hits the particle, the particle gets knocked about – so by trying to measure its position, I change its position.
Technically, the principle says that you can't know both the position and the momentum of a particle at the same time to arbitrary precision. This is similar to, but not the same as, the "observer" effect in experimentation, where some experiments' outcomes change depending on how they are observed. The uncertainty principle is on much firmer mathematical footing and, as I mentioned, changed the way the universe is
viewed (or how the universe of the very small is viewed). Electrons are now thought of as probability functions rather than particles; we can calculate where they are likely to be, but not where they
are – they could actually be anywhere.
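In symbols (the standard textbook form, not quoted from the list itself):

$\Delta x \, \Delta p \ge \hbar / 2$

where $\Delta x$ and $\Delta p$ are the spreads in position and momentum and $\hbar$ is the reduced Planck constant – squeeze one spread down and the other has to grow.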
The uncertainty principle was quite controversial when it was announced; Einstein famously said that "God does not play dice with the universe," and it was around this time that physics split into quantum mechanics, which studies the very small, and the macro physics that studies larger objects and forces. That split is still to be resolved.
Unknowable Thing: What is the chance that a random computer program eventually stops?
Chaitin's constant is an example of what seems normal and sensible to a mathematician, but crazy sounding to the rest of us. Chaitin's constant is the probability that a randomly chosen computer program will eventually halt. What's crazy about it (actually, one of a few things) is that there is a different constant for each way of encoding programs – for each programming language, in effect – so there is an infinite number of values for this "constant", which is usually shown as a Greek omega (Ω). The other slightly crazy thing about it is that it's not possible to determine what Ω is – it's an uncomputable number, which is a real shame: if we could compute Ω, then it's been shown that most unproven problems in mathematics could actually be proved (or disproved).
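For the curious, the usual definition (standard notation, added here for reference) is

$\Omega = \sum_{p \text{ halts}} 2^{-|p|}$

– a sum over every halting program $p$ of 2 to the minus the program's length in bits, taken for one fixed, prefix-free programming language; change the language and you change Ω.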
Unknowable Thing: Are there truths about the universe we can never know?
So far, we've described things we know to be unknowable (if that makes sense). However, the final item describes things that might be true but that can't be known. You might think I'd struggle to find an example, but consider the following:
We live in an expanding universe; when we look at other galaxies they are moving away from us and accelerating. Now, in a distant future (around 2 trillion years from now) all the other galaxies will be so far away that they won't be observable (technically, they'll be receding so fast that their light will be stretched to wavelengths longer than the universe is wide). So, if you were an astronomer in 2 trillion years, there would be no way of knowing that there were billions of other galaxies in the universe – and if anyone suggested it, you'd laugh derisively and say "show me the evidence; you have nothing."
So, bearing this in mind, come back to the present day – there might be true things about the universe that we can never know. Gulp!
Unknowable Thing: Are there any uninteresting people?
It’s fairly easy to argue that there are no uninteresting people:
Consider making a list of uninteresting people; one of those people will be the youngest – and being the youngest uninteresting person is, itself, interesting – so they should be removed from the
list. Now there is a new youngest uninteresting person, and they can also be removed from the list, and so on – until the list must be empty. So, if you meet anyone you think is uninteresting, you
must have got it wrong. | {"url":"http://listverse.com/2012/07/13/top-10-unknowable-things/","timestamp":"2014-04-16T05:31:52Z","content_type":null,"content_length":"79165","record_id":"<urn:uuid:bee017fd-a3a2-4746-a6a3-638119bdae7c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
Beltsville Science Tutor
...I look forward to working with you and helping you achieve your goals. I have been a tutor in math subjects at the college level at my university. I also have a minor in mathematics. I have a
Masters in Chemistry from the University of Maryland.
6 Subjects: including chemistry, algebra 2, organic chemistry, prealgebra
...I have taught high school AP science classes, including labs, and have worked as a technical trainer in information technology. As a graduate student, I discovered the mechanism by which
repetitive DNA sequences evolve. I have many years of experience in research and clinical laboratories, incl...
25 Subjects: including biology, physical science, genetics, ACT Science
...I took the Praxis tests to become an elementary teacher. There were three tests, and then I had to take the Elementary Education test along with them. I have bought a Praxis study book to help me get
a better idea what I will be tested on.
43 Subjects: including nutrition, biology, GRE, astronomy
...I find however, that some students - even some with good grades in math - just get lost in a "forest" of algebra and cannot see the physics ideas that are the essence of the material. I have
developed tutoring approaches to working through the math to the physics that have been successful. Quit...
13 Subjects: including physics, ACT Science, chemistry, calculus
...It is my hope to be able to help more students achieve high grades in their math courses. I have a bachelor and masters degree in engineering, and scored 740 (out of 800) on the GRE
quantitative. I took AP calculus in high school and got a score of 5 (maximum) on the AP exam.
34 Subjects: including electrical engineering, physics, calculus, statistics | {"url":"http://www.purplemath.com/beltsville_science_tutors.php","timestamp":"2014-04-16T10:24:49Z","content_type":null,"content_length":"23919","record_id":"<urn:uuid:7e956d37-8878-48c3-9846-4b9eef67b99b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Continuity of Functions
When we are given problems asking whether a function f is continuous on a given interval, a good strategy is to assume it isn't. Try to find values of x where f might be discontinuous. If there aren't any such values in the interval, then the function is continuous on that interval.
If we're asked about the continuity of one function on several different intervals, find all the problem spots first and worry about which intervals they're in later.
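A quick illustration (my example, not from the original page): for f(x) = 1/(x - 2), the only candidate problem spot is x = 2, so f is continuous on [3, 5], which avoids 2, but not on [0, 4], which contains it.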
Data Interpretation
The bar graph given below shows the sales of books (in thousand number) from six branches of a publishing company during two consecutive years 2000 and 2001.
Sales of Books (in thousand numbers) from Six Branches - B1, B2, B3, B4, B5 and B6 of a publishing Company in 2000 and 2001. | {"url":"http://www.indiabix.com/data-interpretation/bar-charts/","timestamp":"2014-04-18T23:33:49Z","content_type":null,"content_length":"50770","record_id":"<urn:uuid:e4382559-485a-4c3b-9ec1-f890e324bd7b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability GHC
Stability unstable
Maintainer stephen.tetley@gmail.com
Drawing with trace (a Writer-like monad collecting intermediate graphics) and drawing context (a Reader monad of attributes: font_face, fill_colour, etc.).
Collect primitives (writer-like monad)
data GenTraceDrawing st u a
Monad (GenTraceDrawing st u)
Functor (GenTraceDrawing st u)
Applicative (GenTraceDrawing st u)
DrawingCtxM (GenTraceDrawing st u)
UserStateM (GenTraceDrawing st u)
evalTraceDrawing :: DrawingContext -> TraceDrawing u a -> a
Run the drawing, ignoring the output it produces; return the answer from the monadic computation.
Note - this is useful for testing; generally one would want the opposite behaviour (return the drawing, ignore the answer).
mbPictureU :: Maybe Picture -> Picture
Unsafe promotion of (Maybe Picture) to Picture.
This is equivalent to:
fromMaybe (error "empty") $ pic
This function is solely a convenience, using it saves one import and a few characters.
If the supplied value is Nothing a run-time error is thrown.
draw :: Image u a -> GenTraceDrawing st u ()
Draw a Graphic taking the drawing style from the drawing context.
This function is the forgetful version of drawi. Commonly, it is used to draw Graphic objects which have no answer.
drawi :: Image u a -> GenTraceDrawing st u a
Draw an Image taking the drawing style from the drawing context.
The graphic representation of the Image is drawn in the Trace monad, and the result is returned.
drawl :: InterpretUnit u => Anchor u -> LocImage u a -> GenTraceDrawing st u ()
Draw a LocImage at the supplied Anchor taking the drawing style from the drawing context.
This function is the forgetful version of drawli. Commonly, it is used to draw LocGraphic objects which have no answer.
drawli :: InterpretUnit u => Anchor u -> LocImage u a -> GenTraceDrawing st u a
Draw a LocImage at the supplied Point taking the drawing style from the drawing context.
The graphic representation of the Image is drawn in the Trace monad, and the result is returned.
drawc :: InterpretUnit u => Anchor u -> Anchor u -> ConnectorImage u a -> GenTraceDrawing st u ()
Draw a ConnectorGraphic with the supplied Anchors taking the drawing style from the drawing context.
This function is the forgetful version of drawci. Commonly, it is used to draw ConnectorGraphic objects which have no answer.
drawci :: InterpretUnit u => Anchor u -> Anchor u -> ConnectorImage u a -> GenTraceDrawing st u a
Draw a ConnectorImage with the supplied Points taking the drawing style from the drawing context.
The graphic representation of the Image is drawn in the Trace monad, and the result is returned.
node :: (Fractional u, InterpretUnit u) => (Int, Int) -> LocImage u a -> GenTraceDrawing st u ()
Draw the object with the supplied grid coordinate. The actual position is scaled according to the snap_grid_factors in the drawing context.
This function is the forgetful version of nodei. Commonly, it is used to draw LocGraphic objects which have no answer.
nodei :: (Fractional u, InterpretUnit u) => (Int, Int) -> LocImage u a -> GenTraceDrawing st u a
Draw the object with the supplied grid coordinate. The actual position is scaled according to the snap_grid_factors in the drawing context.
drawrc :: (Real u, Floating u, InterpretUnit u, CenterAnchor a1, RadialAnchor a1, CenterAnchor a2, RadialAnchor a2, u ~ DUnit a1, u ~ DUnit a2) => a1 -> a2 -> ConnectorImage u a -> GenTraceDrawing st u ()
Draw a connector between two objects. The projection of the connector line is drawn on the line from center to center of the objects, the actual start and end points of the drawn line are the radial
points on the objects borders that cross the projected line.
This function is the forgetful version of drawrci. Commonly, it is used to draw LocGraphic objects which have no answer.
drawrci :: (Real u, Floating u, InterpretUnit u, CenterAnchor a1, RadialAnchor a1, CenterAnchor a2, RadialAnchor a2, u ~ DUnit a1, u ~ DUnit a2) => a1 -> a2 -> ConnectorImage u a -> GenTraceDrawing st u a
Draw a connector between two objects. The projection of the connector line is drawn on the line from center to center of the objects, the actual start and end points of the drawn line are the radial
points on the objects borders that cross the projected line. | {"url":"http://hackage.haskell.org/package/wumpus-basic-0.22.0/docs/Wumpus-Basic-Kernel-Drawing-TraceDrawing.html","timestamp":"2014-04-17T20:17:21Z","content_type":null,"content_length":"31463","record_id":"<urn:uuid:6e96f202-10ca-4550-b1ff-ff07c16f2220>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
What kind of Math did they teach you?
It’s been so long since I took math courses, the memory is hazy. During elementary school in the early 1960s, in El Dorado, Kansas, I remember teachers receiving training in something called “New
Math.” Far as I knew, 1-plus-1 still equaled 2.
Come high school, I took a couple of algebra courses, geometry, trigonometry and college algebra. The last two were electives, I think, about 35 years ago. Calculus had to wait until college.
My high school teacher totally ruled. He taught up through complex numbers and matrices in my junior year. My senior year he let me teach myself calculus and even a little set theory and
propositional logic…
He was so great.
When I was in high school in Ontario, Canada, all the materials required to solve your problems, plus some calculus (mainly how to do derivatives and integration), were part of the province’s
curriculum. The only problem was, the math courses were optional beyond grade 10/11. On the bright side, the grade 13 calculus course was almost always required for admission to university programs,
including many business-related programs such as accounting, economics, and commerce.
After the province scrapped grade 13 and totally revamped the high school curriculum a few years back, I’m not sure about the current state of affairs, but calculus is still taught at high school
level, I think.
Thanks for your comment, neop. That’s the kind of response I was hoping and looking for.
• Hi, I studied at the Lycee Francais in Mexico City, many years ago. When I was 15, I could solve all of the problems you posted, and some more. The math proficiency of the group of students I was
in was considered above average, and that involved lots of intellectual as well as emotional pressure . Hence, even if back then as a High School student I was able to solve a few math problems,
I was terrified. This was a french school, and in France, if you wish to succeed professionally, you must pass a selection based on math problems solving skills. This selection starts quite early
in life… I finally ended up hating the subject, and opted for a “non-math-degree” in my final High School year (Philosophy). Then I became a Psychologist. Now, at 45, my math skills are lousy…
You see, I do not think the only question here is “what kind of math did they teach you”, but also “how did they teach you”: According too which pedagogical standards? Our teacher might have been
a great mathematician, but he was an awfully bad teacher… I think this can be often the case while teaching science: You can have a great scientist in front of the classroom, but a terrible
teacher indeed… When I wanted to be a language teacher, I was asked to learn "how to teach languages" (Teacher's Diploma Course), INDEPENDENTLY of my own language skills. Is there such a
"teaching degree" for math teachers, focused on developing "math teaching skills"?
In my high school (private Mexican high school) we covered a wide variety of topics,but many of them are not on the official study plan. We covered all of the analytic geometry and precalculus and
calculus stuff that pretty much everyone has to go through. In particular we covered everything necessary to solve the questions in your “Refresh your High School Math” article. However we also
covered topics like set theory, combinatorics and probability, linear programming and even Newton-Raphson approximations and Taylor series.
Personally, I think teaching all of those topics was a very good idea since we got a glimpse at what Mathematics is all about. After high school most people tend to think that the job of a
Mathematician is to solve equations, factor polynomials, and find derivatives or integrals. In reality however this is far from true, and in fact, compared to engineers or physicists, Mathematicians
rarely do any of those things.
I went through a state high school in New Zealand in the early 1980s. Memory is a bit hazy but everything you mentioned in the Refresher was covered.
Here are the few distinct memories I have of maths back then:
1) In 3rd form / year 9 we were introduced to graphs and did some simple algebra (slopes of graphs, crossing points of axes, etc). We’d never been taught anything about graphs prior to that. Basic
algebra had been introduced in primary school (year 7 or 8, something like that). I remember that we already knew E = 1/2 mv^2 and E = mgh (from primary school science) before we got to high school.
2) In year 10 we were introduced to logs and calculations using log tables. Even back then log tables were old hat but it was thought useful to give us a brief introduction to such things. We also
used slide rules briefly (once again, old hat even at that time). An aside: Calculators were not allowed at school until 6th form / year 12. We did mental maths (simple calculations in our heads, no
writing allowed) for 5 or 10 minutes in each class.
3) Calculus was introduced in year 11, I think (possibly year 12 but I think it was earlier).
4) By year 12 we’d come across matrices. I remember doing translations and scaling involving matrices that year.
5) In year 13 maths was split into two subjects – pure and applied maths. Applied maths included statistics and probability, as well as some computer science. I think some of the basics were
introduced earlier – I think we first came across normal distributions in year 10. I remember doing regression. In pure maths we were doing double derivatives and integration. I vaguely remember the
chain rule for derivatives and something similar for integration (integration by parts?). I think they may have been introduced in year 12.
By the way, I was surprised you said that calculus isn’t high school material. What year groups do high schools in the US cover? In NZ they’re 3rd form to 7th form, years 9 – 13.
In addition to the subjects mentioned above there was plenty of other stuff, such as trigonometry and geometry, but I can’t remember exactly when we got into that.
Public high school in suburban Philadelphia (Class of ’98):
Freshman: “Geometry” and “Finite Math” – geometry is self-explanatory. Finite math taught algebra-based applications
in statistics, probability, matrices, trigonometry, and sequence and series
Sophomore: “Algebra II” – just rounding out the algebra taught in middle school
Junior: “Pre-calculus BC” – polynomial and rational functions, graphing techniques, conic sections, exponential and logarithmic functions, sequences and series, limits, and basic trigonometry
Senior: “Calculus BC” – Equivalent to two semesters of college calculus–that is, differentiation and integration of a single variable
If I recall correctly, this was about all the math my high school offered at the time. If I had wanted to take more math, I’d have had to go to Villanova University down the street for stats,
computer science, linear algebra, etc. I regret not. Nowadays, I believe they offer substantially more mathematics at my alma mater.
I go to school right now in Minnesota and it doesn’t seem as though we cover near as much as the others in these comments. If you’re one of the smart ones in the class (the top third or half) you
Algebra 1 8th grade
Geometry 9th
Algebra 2
AB Calculus
Everyone else is a class or two behind but in the same order. In fact, you could even start pre-algebra in 9th grade and only take Geometry by graduation while fulfilling the 3 required math courses.
I was lucky enough to be able to work independently and finish Algebra 2 and Pre-Calculus in one year and take Calculus my junior year. Our calculus class is nothing to brag about, though. On the
national AP exam, no one aside from foreign exchange students has gotten anything but a 1 in years (hopefully our class changed that this year).
I was able to figure out (almost) all of the questions on the quiz but I’m sure that over half the people leaving out school couldn’t. We also are never taught most of the terms in questions 8 and 9.
We learned how to find the vertex of parabolas in A2, but none of the others.
My high school in north central Texas had a sequence similar to Cade’s. Importantly, though, there were massive personnel problems, and in the end we were advised not to waste our money on the
Calculus AP Exam. Sadly, I can also echo Cade’s experience with regard to the Exam – a foreign exchange student was the only person who decided to take it that year, and she of course received a 5.
None of us would have fared as well.
Also, now – a full 10 years after Algebra II and Geometry – I’m learning that my sequence (the “advanced” one!) somehow skipped both geometric proofs and matrices. Which is a shame, since I would
have gotten interested in math far sooner, especially if proofs had been a major component of my high school program.
Antofagasta, Chile, class 2000: Matrices, analytic geometry, complex nubers, calculus (derivatives and a little of integration).
I must say that my teacher did so well that I wanted to keep studying math after high school, so I got a BS 4 years later
Ah! and basic trigonometry
I went through the pre-Harris Ontario curriculum.
Grade 9 and 10:
- Quadratic equations
- Factoring
- Plotting
Grade 11 and 12:
- Geometric proofs
- Trigonometry
- Logarithms
Grade 13 Calculus:
- Differentiation
- Integration
- Max/Min
Grade 13 Finite:
- Matrices
- Systems of linear equations
- Combinatorics
- Probability
Grade 13 Algebra & Geometry:
- Systems of linear equations
- Geometric proofs
- Hyperbolas and ellipses
I think that the new curriculum introduces things faster rather than slower. My girlfriend teaches grades 7 & 8 and they do things I didn’t get to touch until grade 9.
I went to school in western Pennsylvania in the late 60′s. We got algebra in the 9th and 10th grade, geometry in 11th grade and trigonometry in 12th grade (last year of high school). I did poorly in
arithmetic and algebra because I was too lazy to check my work. But I fell in love with the elegance of geometry and did well. That continued into trigonometry. Some of the better students were
offered the chance to take calculus in our 12th grade but I was not one of them. When I finally took calculus in college (1969), everyone else had already had it and I was the “dummy” of the class. I
never recovered after that. I still love math but have no confidence now.
I am currently in high school in Colorado as a senior(12th year), and compared to what the rest of you are saying, my school is doing quite poorly(or at least, it’s worse than it was in 1960. Heh).
For normal students, it is usually Algebra 1-2, Geometry, then Integrated Algebra 3-4(basically a re-hash of the last 2 classes…) and then 12th year is optional(The students usually don’t take any).
I was in the normal classes until last year, because they’ve been too easy and repetitive, so I am starting to take calculus on my own(and getting my own books, the school textbooks and classes are
at best mediocre, and at worst downright pitiful..).
I love math, and I’d like to thank you for your book recommendations. I’m going to buy some of those books myself(unless I can find them at the library, but the good ones usually aren’t there.
Usually they have ones that were used for school.. You know, the sad ones that make your eyes bleed.).
My school has omitted a large amount of information… In the refresher, I was only able to do 1,3 and 10, so I’m going to get a real precalculus book(How do schools manage to get such shitty textbooks
and teaching?! Agh!), and actually learn this.
(I’d write something about how much schools suck nowadays(at least in America), but that’s for other sites)
Thanks for the links and books, I’ve put your site on my favorites.
~John(Colorado, America. 17 years old in 12th grade(senior))
I attended a school in a suburb outside of Detroit Michigan, and am currently a middle school math teacher.
These are the following courses I took while in school:
9th Grade- Honors Algebra
10th Grade- Honors Geometry/ Honors Trigonometry
11th Grade- Honors Pre-Calculus
12th Grade- AP Calculus
I took your refresh your math skills quiz, and because I don’t use all of these skills now, there were a few questions I was unable to answer. However, if I looked over a book to refresh my skills, I
know I wouldn’t have any problem answering them.
In this article you mentioned that some people believe that these skills are middle school level, and others believed they were above high school. Being a middle school teacher, I know that many and
almost MOST of these concepts are above the level of all my students. (I teach grade level, and no advanced courses). I think that all of these concepts are suited for high school students.
Something that might be useful to know, every child in the state of Michigan must take Algebra I, Algebra II, Geometry, and another elective math course to graduate from high school!
I’ve enjoyed reading a lot of your blogs!
I went to a private middle and elementary school near Cleveland, Ohio before transferring to public school in 8th grade. My math education was as follows:
7: Algebra – didn’t really understand it or get much out of it, did badly in the class
8: Algebra – this time I did very well
9: Geometry (learned a few theorems, did some “proofs”) and Honors Algebra II (basic linear algebra and analytic geometry)
10: Honors FST (Functions, Statistics, Trigonometry)
11: Honors Precalculus (basic logic/set theory, proofs, basic differentiation/integration), AP Statistics
12: AP BC Calculus (more differentiation/integration, infinite series) – I also borrowed my teacher’s book for a few weeks at the end of the course to teach myself multivariable calculus
aaand in college so far (started in engineering, switched in my second year to applied math/physics):
13: Calculus AGAIN (had to do it, despite a 5 on the AP BC exam, but on the bright side, did more multivariable/vector calculus), set theory/combinatorics, number theory (self-taught)
14: Probability theory, differential equations, linear algebra, complex analysis, more combinatorics (discrete mathematical structures)
15 (anticipated): Partial differential equations and boundary value problems, abstract algebra, more linear algebra, more combinatorics (discrete mathematical models), more statistics
16 (anticipated): Numerical analysis
That just about wraps up my undergraduate maths education – not very much math my senior year, since I am taking a bunch of physics and general education required classes.
I was asked to compare the elementary and high school mathematics curricula of the Philippines and Singapore, and the result is the following paper: | {"url":"http://math-blog.com/2007/05/26/what-kind-of-math-did-they-teach-you/comment-page-1/","timestamp":"2014-04-20T20:55:07Z","content_type":null,"content_length":"68027","record_id":"<urn:uuid:42fe200e-110a-4805-99aa-5944d48042b2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rotation Transformation (With Worked Solutions & Videos)
Rotation Transformation
In this lesson, we will learn
• what is rotation
• how to draw the rotated image of an object given the center, the angle and the direction of rotation.
• how to find the angle of rotation given the object, its image and the center of rotation.
• how to rotate points and shapes on the coordinate plane about the origin.
Related Topics:
More Geometry Lessons
What is Rotation?
A rotation is a transformation in which the object is rotated about a fixed point. The direction of rotation can be clockwise or anticlockwise.
The fixed point in which the rotation takes place is called the centre of rotation. The amount of rotation made is called the angle of rotation.
For any rotation, we need to specify the center, the angle and the direction of rotation.
Drawing The Rotated Image
Given the center of rotation and the angle of rotation we can determine the rotated image of an object.
Determine the image of the straight line XY under an anticlockwise rotation of 90˚ about O.
Step 1: Join point X to O.
Step 2: Using a protractor, draw a line 90˚ anticlockwise from the line OX. Mark on the line the point X′ such that OX′ = OX.
Step 3: Repeat steps 1 and 2 for point Y. Join the points X′ and Y′ to form the line X′Y′.
The Angle Of Rotation
Given an object, its image and the centre of rotation, we can find the angle of rotation using the following steps.
Step 1 : Choose any point in the given figure and join the chosen point to the centre of rotation.
Step 2 : Find the image of the chosen point and join it to the centre of rotation.
Step 3 : Measure the angle between the two lines. The sign of the angle depends on the direction of rotation. Anti-clockwise rotation is positive and clockwise rotation is negative.
Example :
Figure A′B′C′ is the image of figure ABC. O is the centre of rotation. Find the angle of rotation.
Solution :
Step 1: Join A to O
Step 2: Join A′ to O.
Step 3: Measure the angle AOA′.
The angle of rotation is 62˚ anticlockwise or +62˚
This video covers the manual rotation of a polygon about a given point at a given angle. You will need a straightedge, a protractor, and a compass. We will perform rotations about a point inside the
figure, one outside the figure and one on the figure.
How to rotate a figure around a fixed point using a compass and protractor.
Rotate points on the coordinate plane
We will now look at how points and shapes are rotated on the coordinate plane. It will be helpful to note the patterns of the coordinates when the points are rotated about the origin at different angles.
Geometry Rotation
A rotation is an isometric transformation: the original figure and the image are congruent. The orientation of the image also stays the same, unlike reflections. To perform a geometry rotation, we
first need to know the point of rotation, the angle of rotation, and a direction (either clockwise or counterclockwise). A rotation is also the same as a composition of reflections over intersecting
The following videos show clockwise and anticlockwise rotation of 0˚, 90˚, 180˚ and 270˚ about the origin (0, 0). The patterns of the coordinates are also explored.
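To make those coordinate patterns concrete, here is a small Python sketch (mine, not part of the lesson) encoding the standard anticlockwise rules about the origin:

def rotate_about_origin(point, angle_ccw):
    # Rotate (x, y) anticlockwise about the origin by 90, 180 or 270 degrees.
    x, y = point
    rules = {
        90:  (-y,  x),   # (x, y) -> (-y, x)
        180: (-x, -y),   # (x, y) -> (-x, -y)
        270: ( y, -x),   # (x, y) -> (y, -x); same as 90 degrees clockwise
    }
    return rules[angle_ccw]

print(rotate_about_origin((3, 1), 90))   # (-1, 3)
print(rotate_about_origin((3, 1), 180))  # (-3, -1)
print(rotate_about_origin((3, 1), 270))  # (1, -3)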
The classical theory of invariants and object recognition using algebraic curve and surfaces
Çivi, Hakan and Christopher, Colin and Erçil, Aytül (2003) The classical theory of invariants and object recognition using algebraic curve and surfaces. Journal of mathematical imaging and vision, 19
(3). pp. 237-253. ISSN 0924-9907 (Print) 1573-7683 (Online)
Full text not available from this repository.
Official URL: http://dx.doi.org/10.1023/A:1026233121583
Combining implicit polynomials and algebraic invariants for representing and recognizing complicated objects proves to be a powerful technique. In this paper, we explore the findings of the classical
theory of invariants for the calculation of algebraic invariants of implicit curves and surfaces, a theory largely disregarded in the computer vision community by a shadow of skepticism. Here, the
symbolic method of the classical theory is described, and its results are extended and implemented as an algorithm for computing algebraic invariants of projective, affine, and Euclidean
transformations. A list of some affine invariants of 4th degree implicit polynomials generated by the proposed algorithm is presented along with the corresponding symbolic representations, and their
use in recognizing objects represented by implicit polynomials is illustrated through experiments. An affine invariant fitting algorithm is also proposed and the performance is studied.
Item Type: Article
Subjects: Q Science > QA Mathematics
ID Code: 391
Deposited By: Aytül Erçil
Deposited On: 17 Feb 2007 02:00
Last Modified: 25 May 2011 14:08
Relationship Set
A relationship set is a set of relationships, all of which
• have the same arity,
• connect to the same object classes, and
• express the same logical connection among objects.
In an ORM diagram, a relationship set is represented by a diamond with lines connecting associated object classes. There is a line for each connection to an object class.
For instance:
Figure 1. Binary relationship set
The number of connections to object classes in a relationship set is called the arity of the relationship set. If there are two connections, the relationship set is binary. If there are three, it is
ternary. For four object classes the relationship set is quaternary. Relationship sets with five or more are 5-ary, 6-ary, and so on. It is common to refer to relationship sets of arity three or more
as n-ary relationship sets.
For example:
Figure 2. Ternary relationship set
In the shorthand notation for binary relationship set names, we omit the diamond and the object class name, and add a directional arrow. In the shorthand notation, we thus represent the relationship
set in Figure 1 as shown in Figure 3. We provide no shorthand notation for n-ary relationship sets.
Figure 3. Relationship set with abbreviated name
The analysts can express several different types of constraints on relationship sets.
Special Relationship Sets
Certain types of relationship sets appear so often in ORM diagrams that we have chosen special symbols to represent them. These include:
Here is a short quiz so you can test your understanding of relationship sets.
Last updated 3 Nov 1994 by Mingkang Xu (xmk@osm7.cs.byu.edu) and Lei Cao (caol@bert.cs.byu.edu).
Physics Forums - View Single Post - Lagrangian for black hole
I have a Lagrangian like this:
And I am going to consider the equations of motion of the fields by varying the action related to this lagrangian with respect to
But I am still confused with these things:
From eq. (1) I can see that the Planck Mass is set to 1, but what does this mean?
What will happen if the scalar potential is zero,
What's the difference between scalar Ricci
After getting the equations of motion of the fields, I have to solve them (Einstein equation and scalar fields), but I have no clue why I picked this lagrangian, does anyone can explain me why? | {"url":"http://www.physicsforums.com/showpost.php?p=3380468&postcount=1","timestamp":"2014-04-21T15:02:04Z","content_type":null,"content_length":"9604","record_id":"<urn:uuid:878de5c1-302f-4bc8-be22-2ee28a087533>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
closed loop of classical string
Actually, there is a good treatment of this in Zwiebach's string theory book. Just check out the earlier chapters, and forget about trying to quantize the string.
Basically, you take the wave equation for a non-closed string, and give it periodic boundary conditions, that's all. | {"url":"http://www.physicsforums.com/showthread.php?t=421552","timestamp":"2014-04-18T10:47:16Z","content_type":null,"content_length":"24081","record_id":"<urn:uuid:416b3c60-3532-434b-af4e-769d3f3fc836>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Tutors
Spring, TX 77389
Young, effective tutor - improved performance in any subject
...I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA...
Offering 10+ subjects including algebra 1 and algebra 2 | {"url":"http://www.wyzant.com/Humble_algebra_tutors.aspx","timestamp":"2014-04-19T11:07:07Z","content_type":null,"content_length":"60510","record_id":"<urn:uuid:f3ad06d1-dad1-4ae0-956b-a0240e899623>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
One family of fermions SU(2)LXU(1)
Almost all the interactions in the SM, including those mediated by the Z, photon and gluons (the strong, electromagnetic and weak neutral-current interactions), don't mix between the different generations (families) of fermions. For these interactions, the Lagrangian is simply a sum of an identical Lagrangian for each generation (up to different masses, of course), with the same Feynman rules for each generation.
Only the interaction mediated by the W boson (the charged current) mixes between the generations, with coefficients given by the CKM matrix. This is where the fact that we have three generations and not one comes into play.
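Schematically, in standard notation (added for reference, not part of the original post), the charged-current piece that carries the CKM mixing is

$\mathcal{L}_{CC} = \frac{g}{\sqrt{2}} \, \bar{u}_{Li} \gamma^\mu (V_{CKM})_{ij} \, d_{Lj} \, W^+_\mu + \text{h.c.}$

where $i, j$ run over the three generations; every other SM vertex is diagonal in $i, j$.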
Effect of emulsifier quantity on emulsion droplet size?
(I hope this is the right forum..)
Given that all other factors are the same, what would be the effect of the amount of emulsifier on the size of droplets in the emulsion? I would expect that more emulsifier would mean more interfacial
surface area, which would in turn mean smaller droplets. Would this be a correct assumption?
Also, is it possible that some of the emulsifier will not enclose any of the dispersed phase (maybe forming some empty micelles or something..)? | {"url":"http://www.physicsforums.com/showthread.php?t=197280","timestamp":"2014-04-18T15:39:54Z","content_type":null,"content_length":"19505","record_id":"<urn:uuid:3d62cf7d-3236-4265-b4c1-39e7ffc21305>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Cup of Coffee' Brain Teaser
Cup of Coffee
Science brain teasers require understanding of the physical or biological world and the laws that govern it.
Puzzle ID: #4599
Category: Science
Submitted By: bobbrt
You are served a hot cup of coffee and room-temperature cream at a restaurant. You want to wait a few minutes before you drink the coffee, and you want it to be as hot as possible when you drink it.
Should you pour the cream in the coffee:
a) Immediately
b) Just before you drink it
c) It doesn't matter
( "don't add any cream" is not an option)
cathalmccabe I'm not sure about where you say "If the cream is added immediately, then the temperature will drop initially but will then drop at a slower rate since the coffee with cream is cooler
Jun 18, 2002 than the coffee alone and therefore the driving force for heat transfer is less."
There might be less driving force, but the coffee and cream is already at a much lower temperature. Would it not be true that when the coffee on its own gets to the same temperature
as the coffee and cream drops to initially, its rate of change would be less too, because it is at a lower temp? But during this time the coffee and cream has dropped even further.
I'm wondering what would happen depending on the volumes involved, for example 1 litre of coffee and 1 ml of cream, or how hot the coffee is, although the size of a cup of coffee and cream
added are obviously more specific!
I agree with your answer, just not fully with your reasoning! I think its a brilliant teaser by the way! Good work!
bobbrt I tried to come up with a better explanation for this, but I'm not having much luck. I'm afraid that any non-believers will just have to go on faith (unless someone else can explain it
Jun 18, 2002 better). By the way Cathal, thanks for the compliment but this teaser is far from being an original. You'll have to thank the good people at - oh, wait - I get what's going on here!
You're trying to get me to reveal my sources!! Well you can just wait until my pet pig "Muffin" flies to China under his own power!! HA HA HA...I'll NEVER TELL! NEVER!!!!!!!
Dazza I saw this problem on a kids science show about 20 years ago. From memory, I think the presenter said that there would be no difference, but I could be wrong. However, he did say that
Jun 18, 2002 if the cream was high in fat it would leave a thin film of insulating oil on the top of the coffee. Hence the coffee would be slightly warmer if you added the cream first.
WizardMagus Heat loss is an exponential decay graph. The hotter the object is, the faster it cools. As it cools more and more, it starts to cool less rapidly. When the cream is added immediately,
Jun 19, 2002 it brings the temperature down to a lower level, meaning the heat will be lost less rapidly. If you wait for the coffee to cool first, and then add the cream, the temperature change of
the coffee would be less because the two substances are already closer together in temperature scale. Therefore, it shouldn't really matter, because one way you lose heat in the
mixture, one way you lose heat into the air. The ammount of heat lost in each option should be equal, if you wait the same ammount of time with each trial.
Mogmatt16 who would really do all that thinking about when to put in their cream? it doesn't seem like much of a diffrence, but I may be wrong...
Jun 19, 2002
cathalmccabe It was the great thinkers of ancient times who developed simple games for the most complicated problems they came across as a method to solve them. John Nash's "Game Theory" is a
Jun 21, 2002 modern day example (film: A Beautiful Mind, based on him). It's little trivial things like this that can solve much more complicated issues, and it's a lot easier to experiment on a cup of
coffee for temperature than, say, a power plant, or whatever
dewtell Actually, I think the "driving force" aspect of the answer
Aug 08, 2002 is wrong. Suppose that the coffee loses half the temperature difference
between itself and the room every minute (a nice exponential decay curve),
and that it starts out with a 128-degree difference (coffee at 198 degrees, room at 70).
The temperature of the cream isn't going to be changing, so suppose that it reduces the
coffee temperature difference by 1/4th (there is 3 times as much coffee as cream).
Then if you let the coffee cool first, after 4 minutes the difference is 1/16th what it
started with, so it's an 8 degree difference (78 degrees). Adding the cream now reduces
the difference by 1/4, to 6 degrees. But if you added the cream first, you would have dropped
the temperature difference to 96 degrees difference. Letting this cool for 4 minutes likewise
drops the difference to 1/16th - to the same six degree difference. The temperature loss by cooling
is less, but the final temperature is the same.
However, the "added bonus" part probably still applies - the surface area through which the heat is
lost (primarily the top) remains the same, but the volume holding the heat has increased. So you
probably do get some effect due to this.
angel11 I got it right, but I didn't even know why that was the answer. You all are geniuses.
Oct 03, 2002
jimbo The answer supplied is quite correct. The puzzle as I heard originally came from a series called "Why is it so?" by Professor Julius Sumner Miller. (I think he is a physicist). It can
Nov 08, 2003 be modelled with differential equations but it is probably not necessary if you consider the answer given that heat loss is greatest when temperature difference is greatest. Therefore
the sooner the coffee is brought closer to room temperature, the slower will be the heat loss.
curtiss82 Dewtell, your math is totally wrong. I will show you the correct math. Start off with an overall energy balance and simplify.
Dec 25, 2003
In - Out + Source -Sink = Accumulation.
=> -Out = Accumulation
In this case, I will assume that the Biot number is less than .1 and thus I won't have to worry about any temperature gradients in the coffee. To also simplify the problem, I will
assume that heat will only be convected away from the cup at the liquid surface and I will assume that the surface area doesn't change when the cream is added.
Going back to the equation, after adding in Newton's Law of Cooling and the accumulation term, it becomes:
-A*h*(T-Tc) = V*ro*Cp*dT/dt
A = Area of heat transfer
h= heat transfer coefficient
T= Temperature of coffee
Tc = Air temperature
V= Volume of Coffee
ro= Density
Cp= Heat capacity
dT/dt= Rate of change of Temperature with respect to time
This differential equation can be separated and integrated resulting in an equation that gives the temperature versus time:
T = Tc + (Th-Tc)*exp((-A*h*t)/(V*ro*Cp))
Th= Initial temp. of coffee
I will assume that all the physical properties of coffee are the same as water and I will make a logical guess as to the size of the cup, the initial temperature, and the magnitude of
the heat transfer coefficient.
I will also assume ideal mixing so I can calculate the temperature of the coffee after mixing.
No matter what time I plug in I get that the coffee that had the cream added immediately was slightly warmer than the coffee that had the cream added right before drinking.
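A quick numerical check of curtiss82's closed-form result, with made-up but plausible values (a Python sketch assuming ideal mixing, constant surface area, and a 3:1 coffee-to-cream ratio):

import math

T_room = 20.0   # air and cream temperature (deg C) - assumed
T0 = 90.0       # initial coffee temperature - assumed
k = 0.05        # lumped rate A*h/(V*ro*Cp) for coffee alone, per minute
f = 0.75        # coffee fraction after mixing
t = 5.0         # minutes before drinking

def cool(temp, rate, minutes):
    # Newton's law of cooling: T = Tc + (T0 - Tc)*exp(-rate*t)
    return T_room + (temp - T_room) * math.exp(-rate * minutes)

def mix(temp):
    # Ideal mixing with room-temperature cream.
    return f * temp + (1 - f) * T_room

# Adding cream raises the volume by 1/f while the area stays the same,
# so the lumped rate for the mixture drops from k to f*k.
cream_first = cool(mix(T0), f * k, t)
cream_last = mix(cool(T0, k, t))
print(round(cream_first, 1), round(cream_last, 1))  # 63.5 60.9 - cream first wins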
joanie Hey am I the only one who knew the answer just from years of actually drinking coffee with cream?
Dec 26, 2003
vikingboy While no one may ever read this as I'm responding two years later than the last response, there is one fundamental issue that many of you are missing.
Sep 02, 2005 Stirring a liquid is one of the fastest ways to cool it down. By stirring at the start you lose a lot of heat.
Add the cream, but don't stir.
beegum Basically, since the coffee will need to use its energy on the cream anyway, add the cream immediately...
Dec 13, 2005
JasonD Here's an explanation that's not as precise as dewtell's, but it might be helpful.
Aug 12, 2008
- In terms of mass and chemical composition, the coffee + cream mixture is identical regardless of how it's done.
- The total energy of the system starts out the same in either scenario (obviously).
- The cream, at room temperature, neither gains nor loses thermal energy.
Therefore: We need only consider the net energy loss from whatever is in the coffee cup.
The hotter coffee (waiting to add the cream) loses energy at a faster rate, because it has a greater temperature difference compared to the surrounding air.
The imprecision in this scenario comes from ignoring the increased convection area of the coffee + cream system. I.e., increasing the volume raises the fluid level, thus increasing the
surface area of the cylinder formed by the liquid.
This is likely to be negligible because of the insulating effects of coffee containers -- most all of the energy is being lost from the disc-shaped top of the cylinder, and that disc
does not increase in size. | {"url":"http://www.braingle.com/brainteasers/teaser.php?id=4599&op=0&comm=1","timestamp":"2014-04-19T07:13:52Z","content_type":null,"content_length":"39168","record_id":"<urn:uuid:2963dc2a-f004-480c-a12c-0afe0cf89bb7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stone Mountain Math Tutor
...She is very intuitive to what I am having trouble grasping and what I need more help with. She is really my one weapon in my arsenal I couldn't do without. I wouldn't have passed calculus
without her!
22 Subjects: including algebra 2, reading, differential equations, ACT Math
...I am currently a high school science teacher who loves science and math. I have helped students prepare for both the ACT and the SAT. As a high school junior, I received a score of over 700 on
the math SAT.
15 Subjects: including algebra 1, algebra 2, biology, chemistry
...After working a few years, I then decided to pursue my Master's in Biology at Chatham University. My experience in tutoring has spanned from my high school years into my college years and into
my personal life. I have tutored people in most subjects from SAT (math and verbal prep) to children just learning their alphabets and spatial recognition.
29 Subjects: including algebra 2, geometry, Microsoft Word, physics
...Since returning to Georgia from Singapore last year, I have taken up work as a tutor, freelance editor and Ph.D. consultant. You can see that I have been involved in teaching and mentoring
since I graduated from UGA in 2005, developing my own curriculum, mentoring and tutoring students, grading ...
37 Subjects: including algebra 2, probability, English, statistics
I have concentrated on preparing students for the ACT for the last year. I have worked with them on time management as well as test taking strategies. Student scores have improved upon their
21 Subjects: including calculus, precalculus, elementary (k-6th), ACT Math | {"url":"http://www.purplemath.com/Stone_Mountain_Math_tutors.php","timestamp":"2014-04-19T23:40:06Z","content_type":null,"content_length":"23768","record_id":"<urn:uuid:375e575e-833e-4751-8013-becb4b315e67>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with math homework
January 22nd 2009, 01:17 PM #1
Jan 2009
Hello everyone, this is my first post here. I have been trying to figure this out for a while now, and i have no idea what i am doing.
A farmer has $3600 to spend on fencing for three adjoining rectangular pastures, all with the same dimensions. A local contracting company tells the farmer that they can build the fence for $6.25
/m. What is the largest total area that the farmer can have fenced for that price?
I could really use some help on this problem, please help!
January 22nd 2009, 02:00 PM #2

Hello everyone, this is my first post here. I have been trying to figure this out for a while now, and i have no idea what i am doing.
A farmer has $3600 to spend on fencing for three adjoining rectangular pastures, all with the same dimensions. A local contracting company tells the farmer that they can build the fence for $6.25
/m. What is the largest total area that the farmer can have fenced for that price?
I could really use some help on this problem, please help!
make a sketch, and label the dimensions ...
x x x
* * * *
* * * * y
* * * *
cost of fence ...
$3600 = (6x + 4y)($6.25)
area of enclosures ...
A = 3xy
using the cost equation, solve for one variable in terms of another and substitute into the area equation in order to get area in terms of a single variable, then find the maximum area using the
method(s) taught to you in class.
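Carrying the hint through (worked steps added for reference; they assume single-variable calculus):

$3600 = (6x + 4y)(6.25)$ gives $6x + 4y = 576$, so $y = 144 - 1.5x$.

$A = 3xy = 3x(144 - 1.5x) = 432x - 4.5x^2$

$A'(x) = 432 - 9x = 0 \implies x = 48, \; y = 72$

so the largest total area is $A = 3(48)(72) = 10368$ square metres.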
January 22nd 2009, 02:14 PM #3
Jan 2009

thanks a lot man
Sea Cliff Calculus Tutor
Find a Sea Cliff Calculus Tutor
...As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I bring all the tools you'll need to succeed! Of course, a big part of
physics is math, and I am experienced and well qualified to tutor math from elementary school up throug...
18 Subjects: including calculus, reading, GRE, physics
...I am an experienced computer programmer of many years, and I have developed and manage several websites. I developed techniques that help students raise their scores several hundred points! I
work with students to develop a custom study plan that attacks their weaknesses and enhances their strengths.
34 Subjects: including calculus, writing, geometry, statistics
...Topics include mean, variance, standard deviation, probability, normal distribution and probability, binomial distribution and probability, permutations and combinations, counting
probabilities, simple linear regression, multiple regression, hypothesis testing and confidence intervals for means a...
22 Subjects: including calculus, physics, geometry, statistics
...I'm proficient in Python, Java, and command-line utilities for automated data processing, audiovisual media programming, and algorithmic composition. I'm also proficient in SuperCollider, Max/
MSP, CSound, and other audio programming languages for sound synthesis, editing, and mastering. I've ta...
30 Subjects: including calculus, reading, Spanish, writing
...This eventually became my full time job. I considered myself blessed to be involved with my two greatest passions...music and math. Teaching music turned out to be an extremely rewarding...
17 Subjects: including calculus, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/sea_cliff_calculus_tutors.php","timestamp":"2014-04-16T19:26:02Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:080ca5da-3da6-406d-8543-632595b119ca>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Avoid a common S-parameter problem
A version of this article also appeared in the May 2012 issue of Test & Measurement World. See the PDF.
S-parameters, or scattering parameters, have become the de facto standard for describing the electrical properties of interconnects. They are one of the few concepts that bridge the worlds of the
microwave designer, who typically lives in the frequency domain, and the digital designer, who typically lives in the time domain.
Everything you ever wanted to know about the electrical properties of an interconnect—a connector, a scope probe, a circuit-board trace, a circuit-board via, or a cable—is contained in the
interconnect’s S-parameters. But you need to use a consistent method for assigning the port index labels to inputs and outputs or you risk obtaining misleading S-parameter values, which will lead to
incorrect interpretations.
Regardless of whether S-parameters come from measurements, circuit simulations, or electromagnetic simulations, the same formalism applies and the S-parameters behave the same. S-parameters describe
how sine waves interact with and “scatter” from an interconnect. Each interconnect has “ports,” defined as the ends of the interconnect into which signals enter and from which they leave. Each port
has connections to the signal conductor and its return path. Index numbers label the ports into which a signal enters and from which it scatters.
Consistency is paramount when you are labeling these ports. Software used to calculate S-parameters uses a defined scheme to assign port designations, and you need to be consistent with that scheme.
If you create S-parameter data-files based on one port-labeling scheme and use a data file that assumes a different labeling, the interpretation of the S-parameters and the results obtained using
them will be wrong. This very basic issue of port assignment causes the most common problem when using S-parameter models: incorrect interpretation of the data.
By following one simple guideline, you can eliminate this problem. You will also be able to look at an S-parameter model and immediately determine if it assumed the incorrect port assignment.
Return loss and insertion loss
Each S-parameter is the ratio of the wave coming out of a port to the wave going into a port (Figure 1). The formalism of S-parameters describes the combination of sine waves scattered from the ports of an interconnect. Every combination of this input-output port ratio makes up an S-parameter's matrix elements. Each matrix element is defined by the input port number (the stimulus) and the output port number (the response). This formalism applies regardless of whether the interconnect has just one port or 100 ports.
FIGURE 1. Each S-parameter is the ratio of a scattered sine wave from a port to an incident sine wave into a port.
In a two-port interconnect such as a PCB (printed-circuit board) trace or a cable, there's only one way to assign the index port labels: port 1 on one side and port 2 on the other side. The S-parameter matrix element corresponding to a wave that goes into port 1 and reflects back out of port 1 is labeled as S[11]. For historical reasons, S[11] is also referred to as return loss. Because impedance changes along the interconnect cause reflected waves, return loss is very sensitive to the interconnect's impedance profile. The S-parameter corresponding to the wave going into port 1 and coming out port 2 is labeled S[21] and is referred to, for historical reasons, as the insertion loss. It has information about reflections and is also sensitive to the losses in the interconnect.
One confusing aspect of S-parameters is the order of the index numbers used to label each S-parameter matrix element. If a signal were to go into port 1 and come out port 2, you might assume its label would be "S[12]." The label would be easy to remember at a glance: The signal goes into port 1 and comes out port 2.
Unfortunately, as a consequence of the matrix math formalism, the labeling scheme follows the opposite structure. The S-parameter matrix element containing information about the wave going into port 1 and coming out port 2 is actually S[21].
At the lowest frequency, where the physical length of the interconnect is really short compared to ¼ of a wavelength, the reflection off the front of the interconnect and the reflection from the back end of the interconnect mostly cancel out one another, so the return loss, S[11], is nearly zero. In decibels (dB), the return loss for a through interconnect at low frequency is almost always a large negative decibel value.
The transmitted signal, described by S[21], is due to the initial transmitted signal, plus a small contribution from the signal that reflects off port 2 to port 1, then reflects back to port 2 and, finally, out port 2. At the lowest frequency, all of the signal gets through and comes out port 2.
The insertion loss of a through-interconnect at low frequency will be close to 0 dB.
As frequency increases, the losses in all interconnects cause the insertion loss to fall, which means a larger and more negative insertion loss in decibels. An example of the measured return and insertion loss of a typical 50-Ω trace on a circuit board is shown in Figure 2.
FIGURE 2. The return loss (red, S[11]) and insertion loss (yellow, S[21]) of a circuit board's transmission-line measurements show the characteristic behavior of return loss starting with a large, negative decibel value and an insertion loss starting with 0 dB at low frequency.
This is an important observation: For virtually all interconnects, at the lowest frequency, you can expect the insertion loss to be nearly 0 dB. This is an easy and direct way to determine which
matrix element is really the insertion loss, independent of the port labeling.
More than two-port S-parameters
FIGURE 3. There are two approaches for assigning port labels to transmission lines. Case 1, the top approach, is the recommended approach. Case 2, although commonly used, is not recommended.
Now comes the confusing part. If there are multiple interconnects, such as two adjacent transmission lines on a circuit board, there are two equivalent ways of labeling the port index numbers (Figure 3). In case 1, the opposite ends of one line are labeled port 1 and port 2, and the opposite ends of the other line are labeled port 3 and port 4. In this labeling scheme, the insertion loss of one line is still the S[21] matrix element.
We recommend that you use the case 1 labeling scheme. It's consistent with the intuition we built up connecting insertion loss with the S[21] matrix element, and it easily scales to more ports.
In case 2, port 1 and port 2 are the labels on the left side of the pair of lines and port 3 and port 4 are the labels on the right side of the pair. In this labeling scheme, the insertion loss of the first line is actually the S[31] matrix element, and the near-end crosstalk is S[21].
Both labeling approaches are legal and used in the industry. Both ways are correct. The interpretation of the same-labeled S-parameter matrix element, however, is obviously different depending on
which port assignment you use.
In the first port assignment, the insertion loss is S[21] and you would expect it to be nearly 0 dB at low frequency. The S[31] matrix element relates to the near-end crosstalk between the two lines and should always be very small, or a large negative decibel value at low frequency.
In the second port assignment, the insertion loss is the matrix element S[31]. The matrix element S[21] is the near-end crosstalk. These S-parameters are just as valid and just as well-defined as when labeled with the index port assignment of case 1. But if you use the S-parameter model created with one labeling scheme in an application that has a different labeling scheme, the result will be the same as if you had a bad model.
The way to tell which port assignment was used in an S-parameter file is to look at the S[21] matrix element. If S[21] looks like an insertion loss, starting out with a nearly 0 dB value at low frequency, then the port assignments were labeled as in case 1. If S[31] looks like an insertion loss and has a nearly 0 dB value at low frequency, then the port assignments were labeled as in case 2.
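This check is simple enough to automate. As a rough illustration (a sketch only; the function name and the -3 dB threshold are arbitrary choices, not from any standard tool), the test can be run on the lowest-frequency sample of a four-port S-matrix with a few lines of Python:

import numpy as np

def guess_port_labeling(S_low, threshold_db=-3.0):
    # S_low: 4x4 complex S-matrix at the lowest measured frequency,
    # indexed 0..3 for ports 1..4. The through path should sit near 0 dB.
    s21_db = 20 * np.log10(abs(S_low[1, 0]))
    s31_db = 20 * np.log10(abs(S_low[2, 0]))
    if s21_db > threshold_db > s31_db:
        return "case 1: odd ports on the left, even ports on the right"
    if s31_db > threshold_db > s21_db:
        return "case 2: ports 1,2 on the left, ports 3,4 on the right"
    return "ambiguous -- inspect the full frequency sweep"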
As an example, Figure 4 shows the measured S[21] and S[31] matrix elements from a pair of stripline traces. S[31] looks like an insertion loss, starting out at low frequency with 0 dB. This S-parameter measurement used the second case as its port assignment. The S[21] matrix element, looking like near-end crosstalk, is confirmation.
FIGURE 4. The measured S[21] and S[31] matrix elements for two stripline traces show that the S[31] element is clearly an insertion loss, confirming that the case 2 port-labeling scheme was used for this S-parameter matrix.
Knowing which port assignment was used is critical for two reasons. The end user of the model usually connects the S-parameter model into a circuit by connecting circuit nodes to ports. If the port
assignments are not as expected, the circuit will still simulate and you will get a resulting waveform, but it will be a completely wrong result.
In addition, it is increasingly common for two single-ended transmission lines to be used as one differential pair. The differential insertion and return loss of the differential pair, designated by
matrix elements SDD21 and SDD11, are created from linear combinations of the single-ended S-parameter matrix elements. If you assume the incorrect port assignments when calculating the differential
S-parameters, the resulting differential S-parameters will be wrong.
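To make the dependence on the port map concrete, here is a minimal Python sketch of the standard mixed-mode conversion (the function and argument names are invented for illustration). Each differential port pairs one single-ended port from each line:

import numpy as np

def differential_s(S, left=(1, 3), right=(2, 4)):
    # S: 4x4 single-ended S-matrix, indexed 0..3 for ports 1..4.
    # left/right name the single-ended ports that form the differential
    # port on each side; the defaults assume the case 1 labeling.
    a, b = left[0] - 1, left[1] - 1
    c, d = right[0] - 1, right[1] - 1
    sdd11 = 0.5 * (S[a, a] - S[a, b] - S[b, a] + S[b, b])
    sdd21 = 0.5 * (S[c, a] - S[c, b] - S[d, a] + S[d, b])
    return sdd11, sdd21

Calling differential_s(S, left=(1, 2), right=(3, 4)) on a file that was actually created with the case 1 labeling applies the formula to the wrong matrix elements, which is exactly the kind of error illustrated next.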
To illustrate this problem, we measured the S-parameters from two stripline traces and stored them in a four-port S-parameter matrix using the case 1 port-labeling scheme. We then calculated the
differential S-parameters in two ways: the first correctly assumed case 1 labeling; the second incorrectly assumed case 2 labeling.
Figure 5 shows the resulting differential insertion and return loss for each assumption.
FIGURE 5. When differential insertion loss and return loss are calculated from the same S-parameter file with two different port-labeling approaches, the results will differ. a) When the correct port assignments are assumed for the calculations, the insertion and return loss are consistent with expectations. b) When the incorrect port assignments are assumed, the insertion and return loss are clearly not correct.
An insertion loss, whether single-ended or differential, will always start near 0 dB at low frequency. Clearly, the differential insertion loss assuming the wrong port assignment results in an
insertion loss that is not consistent with our expectation, as it starts out with a large negative decibel value.
Recommendations for port assignments
Unfortunately, S-parameter files rarely note which labeling scheme was used to create the file, and you might forget to write down which scheme you used. If you deal with S-parameters from numerous
sources, different files could have been created with different labeling schemes. This mix-up in the labeling scheme for the ports is the number-one source of confusion and the root cause of wrong
results when using S-parameter models. (S-parameters are confusing enough without adding another opportunity for confusion.)
To avoid this common source of confusion, we strongly recommend you adopt the habit of labeling the port index numbers with odd port numbers on the left side and even port numbers on the right. This approach has two important advantages:
• It is consistent with the labeling of two-port interconnects. Insertion loss is still S[21].
• It is scalable, so for four ports, you just need to add the additional lines and continue with the labeling of 3 to 4, 5 to 6, 7 to 8, and so forth.
Regardless of which labeling approach you use, the first thing you should do when you get a new S-parameter data file is look at the S[21] and S[31] terms. If S[21] looks like an insertion loss, you know the case 1 port-labeling scheme was used. If S[31] looks like an insertion loss, you know the case 2 port assignment was used. If it is case 2, you can ask the engineer who created the S-parameter file to consider using a less-confusing port assignment in the future.
Eric Bogatin is a signal-integrity evangelist at Bogatin Enterprises, a LeCroy company. He holds an SB degree in physics from MIT and a PhD in physics from the University of Arizona in Tucson. He has been active
in the signal-integrity industry for more than 30 years, writing articles and books and teaching classes.
Alan Blankman is the technical product marketing manager for signal integrity at LeCroy. He holds a PhD in physics from the University of Pennsylvania and an MBA from the New York University Stern School of
Business. He has more than 20 years of experience developing instrumentation and software for high-energy physicists and electrical engineers.
{"url":"http://www.edn.com/design/test-and-measurement/4389655/Avoid-a-common-S-parameter-problem","timestamp":"2014-04-21T07:19:12Z","content_type":null,"content_length":"75651","record_id":"<urn:uuid:677ca1b4-7494-4e63-918e-96ff52cc2a91>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Enumerating/counting paths of a given length on a 2D lattice
I'm wondering if anyone can point me to a reference on how to address the following problem. In my thesis work on lattice QCD many years ago I had to enumerate all possible paths of a given length on
a 2D lattice, up to the symmetries of reflection and rotation (plus some other internal symmetries we won't concern ourselves with here). I've been wondering whether it would be possible to compute
the number of such paths without explicitly listing them all, for example using generating-function techniques. The paths were used to generate coherent-states for fermionic operators. Each path had
a quark at one end and an anti-quark at the other end.
To make this more concrete, I'll give some examples. Write the link in the plus and minus $x$ directions as 'x' and 'X', likewise use 'y' and 'Y' for the plus and minus $y$ directions, respectively.
Note that these 'links' represent unitary operators so combinations like $xX$ and $Yy$ cancel, so are not counted. Then the first few examples are:
Length 1: $x$
Length 2: $xx$, $xy$
(note here, for example, that the other paths $yy$, $YY$, $XX$, $xY$, $yX$, $yx$, etc, are related to the two that I listed by symmetry, so are not counted).
Length 3: $xxx$, $xxy$, $xyx$, $xyX$
Length 4: $xxxx$, $xxxy$, $xxyx$, $xxyy$, $xxyX$, $xyxY$, $xyxy$, $xyyx$, $xyyX$, $xyXY$
and so on.
Let $A_n$ be the number of paths of length $n$. As $n$ gets large, it seems like there could be a simple asymptotic relation for $A_{n+1}/A_n$, since most paths would be extended by adjoining the
three possible directional links (that don't cancel) to the end of the path, so maybe the ratio would go to 3 (NOTE - which it seems to from Liviu's result)?
Again, this work was done years ago, but this problem has stuck in my mind. I never had any formal training in combinatorics or graphs, but from what I've read this seems like it could be a tractable
problem. For me, the issue of having to not count paths that are related by symmetries makes it quite difficult.
Thanks for any information/references/thoughts on this! I plan to write some code to numerically check the behavior of $A_n$. My goal is to have someone point me in the right direction to get started
on an 'analytic' solution or asymptotic estimate. By the way, the same problem arises in 3D, but I will stick to 2D first.
EDIT: Using Liviu's solution I found this sequence in OEIS, related to bending of a wire in 2D (the same problem). It is here: link text
Regards, Tom
When you say "2D lattice" you mean explicitly $\mathbb{Z}^2$? – David Benson-Putnins May 6 '13 at 21:45
It seems to me that this should be solvable by a straightforward application of Burnside's lemma. – Ira Gessel May 6 '13 at 21:49
@Liviu: I was afraid that I did not describe the problem well enough! The symmetries are reflection in $x$ and $y$, and rotations by any multiple of $\pi /2$. So the symmetry group you wrote down
is correct, but as Gerhard noted reversals are also identified with each other (This corresponds to 'charge conjugation.') Any path that can converted to another by these symmetries is considered
identical. For example, $xxyX$ and $YYXY$ are the same; take the second one and reflect about the $y$ axis, then rotate by $\pi / 2$. I will look up Burnside's lemma, thanks for the information! –
Tom Dickens May 6 '13 at 23:13
If you add the extra global symmetry $C$, path reversal,then clearly $C^2=1$ and $C$ commutes with the other symmetries $R_x,R_y,J$. The symmetry group has order $16$ and you need to count the
number of admissible paths invariant under each of the sixteen symmetries. This looks like an enjoyable and doable problem. – Liviu Nicolaescu May 7 '13 at 9:26
Hint: There are more than 7 paths of length 4. After you count the ones of length 5, you can probably look the sequence up in OEIS. – JHI May 8 '13 at 18:06
I am trying to make sense of your problem. From what I can gather you are talking about walks with steps of size $1$ in each of the cardinal directions East, West, North, South,
(E,W,N,S). To use your notation $E=x$, $W=X$, $N=y$, $S=Y$. You are not allowed to immediately backtrack, i.e., if say you take an $E$-step, then your next step cannot be a $W$-step. I
will refer to such paths as admissible. I think that you need to fix the starting point. Assume it is the origin.
The number of admissible paths of length $n$ starting at the origin is $4\cdot 3^{n-1}$.
Alas, there is a symmetry in the problem and this is where I am a bit confused. It looks to me that the symmetry group is the group $G$ of symmetries of the square with vertices $(\pm 1,0)$, $(0,\pm 1)$. (I could be wrong, but that is what I am getting from your description.) The center of this square is the origin, and the vectors obtained by joining the center with the vertices are the unit vectors $\vec{E},\vec{W},\vec{N},\vec{S}$ pointing in the four cardinal directions.
The group has eight elements and it is generated by the reflection $R_x$ in the $x$-axis and the counterclockwise rotation $J$ by ninety degrees. The elements of this group are
$$ 1, J, J^2, J^3, R_x, R_xJ, R_xJ^2, R_xJ^3. $$
Assuming that my guess is correct you need to count admissible paths, where two paths that are related by one of the above eight symmetries are considered identical. Denote by $Q_n$
the number of such paths of length $n$.
Here you need to invoke Burnside's theorem. Here is what it says in this case.
For each $g\in G$ denote by $P_n(g)$ the number of admissible paths of length $n$ that admit $g$ as symmetry. More precisely a path
$$v_1\dotsc,v_n, \;\;v_1,\dotsc,v_n\in \lbrace E,W,N,S\rbrace $$
admits $g$ as symmetry if $g(v_1)\dotsc g(v_n)=v_1\dotsc v_n$. Then Burnside's theorem states that
$$Q_n= \frac{1}{|G|}\sum_{g\in G} P_n(g). $$
There are only three elements $g\in G$ for which $P_n(g)\neq 0$. They are $1, R_x, R_y$, where $R_y$ denotes the reflection in the $y$-axis. Putting all the above together we deduce
$$Q_n= \frac{1}{8}\Bigl( 4\cdot 3^{n-1}+ 2+2\Bigr)=\frac{1}{2}\bigl(3^{n-1}+1\bigr). $$
Update. I have worked out the details taking into account the correct symmetry group.
Here is what I found. If $n$ is odd, $n=2m-1$, then
$$ Q_n=\frac{1}{4}\bigl(3^{n-1}+2\cdot 3^{m-1}+1\bigr). $$
If $n$ is even, $n=2m$, then
$$ Q_n = \frac{1}{4}\bigl( 3^{n-1}+4\cdot 3^{m-1}+1\bigr). $$
Here are a few values.
$$Q_1=1, \;\; Q_2=2,\;\; Q_3=4, \;\; Q_4= 10,\;\; Q_5=25,\;\; Q_6=70. $$
Note that I get $Q_4=10\neq 8,9$. In any case, the details can be found here. Maybe somebody can explain the discrepancy involving $Q_4$.
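For what it's worth, the listed values are easy to confirm by brute force. The sketch below (my own, not from the original thread) canonicalizes each non-backtracking step sequence under the 16 symmetries -- the 8 symmetries of the square together with path reversal -- and counts orbits; it should print $1, 2, 4, 10, 25, 70$ for $n=1,\dotsc,6$:

    from itertools import product

    STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # E, W, N, S

    def dihedral_images(path):
        # the 8 rotation/reflection images of a step sequence
        images = []
        for rx, ry in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            for swap in (False, True):
                img = []
                for (x, y) in path:
                    if swap:
                        x, y = y, x
                    img.append((rx * x, ry * y))
                images.append(tuple(img))
        return images

    def canonical(path):
        # minimum over the 16 images: 8 dihedral symmetries x path reversal
        rev = tuple((-x, -y) for (x, y) in reversed(path))
        return min(img for p in (tuple(path), rev) for img in dihedral_images(p))

    def Q(n):
        # non-backtracking paths of length n, counted up to symmetry
        seen = set()
        for path in product(STEPS, repeat=n):
            if any(path[i][0] == -path[i + 1][0] and path[i][1] == -path[i + 1][1]
                   for i in range(n - 1)):
                continue
            seen.add(canonical(path))
        return len(seen)

    print([Q(n) for n in range(1, 7)])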
Not quite. v1v2 and v2v1 are identified, as are a path and its reversal. Also, your formula does not match the case n=3. Gerhard "Ask Me About System Design" Paseman, 2013.05.06 –
Gerhard Paseman May 6 '13 at 22:37
I guess I screwed up the symmetry group. – Liviu Nicolaescu May 6 '13 at 22:48
Liviu, thanks for the answer. I don't even have around any of the results from this work anymore, I hope to write a simple routine to reconstruct some of it and compare to your
formula. (Sorry for the mistakes for n=4.) If anyone is interested the original paper on the lattice gauge theory problem is here: sciencedirect.com/science/article/pii/
0550321388902349?np=y – Tom Dickens May 8 '13 at 21:13
@Liviu - Once I listed the correct paths, your solution matches for $n=4$. Thanks for this, I'll take a look at your detailed derivation. I've certainly learned something new! – Tom
Dickens May 9 '13 at 1:07
@Liviu - Using your solution I computed several terms and found the sequence in OEIS. See edit to my question for the link. – Tom Dickens May 16 '13 at 15:38
{"url":"http://mathoverflow.net/questions/129886/enumerating-counting-paths-of-a-given-length-on-a-2d-lattice","timestamp":"2014-04-19T20:13:09Z","content_type":null,"content_length":"68659","record_id":"<urn:uuid:bf18b228-ec61-4bf5-b248-5c574c68f870>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the inverse of the function f(x)=1/3x-2?
y=(1/3)x-2 Switch y&x and solve x=(1/3)y-2
So what is the next step you would do?
So we want to get y by itself. Lets add 2 to each side.
\[x+2 = \frac{ 1 }{ 3 }y\]
Does that make sense?
So how will we now "move" the 1/3?
Okay. f(x)=1/3x-2 Take f(x) = y y = 1/(3x - 2) y*(3x - 2) = 1 3xy - 2y = 1 -1 - 2y= - 3xy -(1 + 2y) = -3xy 1 + 2y = 3xy (1 + 2y)/y = 3x (1 + 2y)/3y = x Lastly, replace y with x, we have: (1 + 2x)/3x = y Which is the inverse.
Wait, is it \[F(x) = \frac{ 1 }{ 3x - 2 }?\]
Or is it \[F(x) = \frac{ 1 }{ 3 }x - 2?\]
Wouldn't it be 3(x+2)=y y=3x+6 \[f ^{-1}(x) = 3x+6\]
thats what i got but i got -3x but i see what i did wrong, thank you
Thank you.
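One quick way to double-check, reading the function as \[f(x) = \frac{ 1 }{ 3 }x - 2\]: plug the answer back in. f(3x+6) = (1/3)(3x+6) - 2 = x + 2 - 2 = x, so \[f ^{-1}(x) = 3x+6\] works.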
{"url":"http://openstudy.com/updates/50c42c4de4b0c673d53dca2b","timestamp":"2014-04-19T20:04:25Z","content_type":null,"content_length":"56206","record_id":"<urn:uuid:dbcb015e-3d2d-498e-b8fe-af016958f174>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Re: 5th grade activity
Replies: 0
Re: 5th grade activity
Posted: Apr 19, 1995 7:50 PM
The Moebius strip helped me make a connection to the way some rational
graphs behave near their asymptotes. Take the function 1/x ...to the right
of the y-axis it's reaching up into positive infinity and to the left of the
y-axis it is plummeting into negative infinity. The asymptote is rather like
the line one draws down the middle of the moebius strip with the twist
happening out there in infinity. I might be all wet but it works for me.
Also, I thought I read somewhere that some conveyor-type belts have a
moebius twist in them so that the surface on both sides is worn down evenly.
Does that make any sense or was I just taken in by one of those Paul Bunyan
stories about the moebius strip?
>On Wed, 19 Apr 1995 roitman@oberon.math.ukans.edu wrote:
>> Re: Cathy Brady's "where's the math?": If surfaces and edges aren't math,
>> what is? If geometric reasoning isn't math (see comment in first
>> paragraph), what is?
>Somehow I don't think that's the answer that will sell someone without a
>graduate degree in mathematics.
> Cathy Brady Math Specialist/Education
>cbrady@umd5.umd.edu Maryland Science Center
>Opinions are my own "Beyond Numbers" exhibit
>or something I overheard Baltimore's Inner Harbor
Linda Dodge
Math Consultant
Frontier Regional High School
South Deerfield, MA | {"url":"http://mathforum.org/kb/thread.jspa?threadID=480375&messageID=1471793","timestamp":"2014-04-19T19:33:36Z","content_type":null,"content_length":"15110","record_id":"<urn:uuid:568f963c-2467-424c-b178-d38e92717d6a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Matheology § 224
Date: Mar 24, 2013 5:49 PM
Author: fom
Subject: Re: Matheology § 224
On 3/24/2013 4:33 PM, WM wrote:
> On 24 Mrz., 21:29, Virgil <vir...@ligriv.com> wrote:
>> A binary tree that contains only "all finite paths" cannot exist,
> The set of all finite paths does not exist? Neither does the set of
> all rational numbers, I presume?
> If you construct it, up to a certain
> point, suddenly all reals are there. A fascinating position.
You have been asked to define the terms of
your constructive mathematics.
You refuse.
You confuse a theory of inclusive monotonic crayon marks
with mathematics. | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8750899","timestamp":"2014-04-20T06:10:58Z","content_type":null,"content_length":"1793","record_id":"<urn:uuid:05a08bac-af1d-4c9b-b266-61f25a2d422f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Math Word Problem
During the summer throughout high school and for a year after, Lewis worked as a lifeguard at the beach. In that 5-year period, he saw a total of 461 dolphins swimming in the distance. About how many
did Lewis spot per summer in the 5 summers he worked?
Thursday, October 10, 2013 at 7:57pm
1. If 29.0 L of methane, CH4, undergoes complete combustion at 0.961 atm and 140°C, how many liters of each product would be present at the same temperature and pressure? 2 If air is 20.9% oxygen by
volume, a. how many liters of air are needed for complete combustion of 25...
Thursday, October 10, 2013 at 7:06pm
what are some arrays that add up to 24, are at least 2 high, but no taller than 8 high?
Thursday, October 10, 2013 at 4:44pm
Convert 5.45E14 Hz to wavelength in meters and convert to energy required. c = freq x wavelength Then convert the low end and the high end colors (4000 Angstroms low end and 7000 Angstroms high end)
to energy and compare.
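Carrying those steps through (a quick sketch of the arithmetic, rounded to three figures): wavelength = c/frequency = (3.00E8 m/s)/(5.45E14 Hz) = 5.50E-7 m, which is 5500 Angstroms and so falls inside the 4000-7000 Angstrom visible range. Energy per photon E = h x freq = (6.626E-34 J s)(5.45E14 Hz) = 3.61E-19 J; for comparison, the 7000 Angstrom low-energy end works out to about 2.84E-19 J and the 4000 Angstrom high-energy end to about 4.97E-19 J.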
Thursday, October 10, 2013 at 3:07pm
Hey nay nay, it's spelled high horse.
Thursday, October 10, 2013 at 12:49pm
High-speed motion pictures (3500 frames/second) of a jumping 280 µg flea yielded the data to plot the flea's acceleration as a function of time as shown in the figure (Figure 1). This flea was about 2
mm long and jumped at a nearly vertical takeoff angle. Find the ...
Thursday, October 10, 2013 at 9:45am
British Literature
I am also stuck on these exams. I am taking british literature from American School. Please send me the answer if you are done with it. Thank u.
Thursday, October 10, 2013 at 5:05am
1. I went on a school field trip to Jeju Island. 2. I went on a school trip to Jeju Island. 3. I went on a study trip to Jeju Island. 4. I went on a study tour to Jeju Island. 5. I went on a study
travel to Jeju Island. 6. I went on a field trip to Jeju Island. (Which ...
Thursday, October 10, 2013 at 4:55am
i TRIED TO DO IT, BUT MY ANSWERS ARE WRONG. Please show work and explain. 1. If 29.0 L of methane, CH4, undergoes complete combustion at 0.961 atm and 140°C, how many liters of each product would be
present at the same temperature and pressure? 2 If air is 20.9% oxygen by...
Thursday, October 10, 2013 at 1:03am
Algebra 2
At the ruins of Caesarea, archaeologists discovered a huge hydraulic concrete block with a volume of 945 cubic meters. The block's dimensions are x meters high by 12x - 15 meters long by 12x - 21
meters wide. What is the height of the block? Can someone please show me how ...
Wednesday, October 9, 2013 at 9:18pm
remember , volume = lwh x(12x-15)(12x-21) = 945 simplifying a bit x(3)(4x - 5)(3)(4x-7) = 945 x(4x-5)(4x-7) = 105 16x^3 -48x^2 +35x - 105=0 by grouping ... 16x^2(x - 3) + 35(x-3) = 0 (x-3)(16x^2 +
35) = 0 x = 3 or x^2 = -35/16 , which is not real so the only real solution is x...
Wednesday, October 9, 2013 at 9:17pm
At the ruins of Caesarea, archaeologists discovered a huge hydraulic concrete block with a volume of 945 cubic meters. The block's dimensions are x meters high by 12x - 15 meters long by 12x - 21
meters wide. What is the height of the block? I have no idea how to do this. ...
Wednesday, October 9, 2013 at 8:30pm
Language Arts
2) Read the following passage from Icarus and Daedalus : He fell like a leaf tossed down the wind, down, down, with one cry that overtook Daedalus far away. When he returned, and sought high and low
for the poor boy, he saw nothing but the bird-like feathers afloat ...
Wednesday, October 9, 2013 at 7:21pm
The average high temperature in Alaska in January is -37F. If the average high temperature in Siberia for January is -83F how much warmer is it in Alaska compared to Siberia? A.120 B.54 C.46 D.36 C
Wednesday, October 9, 2013 at 6:29pm
Gravel is being dumped from a conveyor belt at a rate of 10 cubic feet per minute. It forms a pile in the shape of a right circular cone whose base diameter and height are always equal. How fast is
the height of the pile increasing when the pile is 18 feet high? (Recall that ...
Wednesday, October 9, 2013 at 3:16pm
India government
(Please put your school subject in the correct space ... not the name of your school.) http://www.google.com/search?q=effective+governance+india&oq=effective+governance+india&aqs=
Wednesday, October 9, 2013 at 8:04am
health and fitness
CASE STUDY: CHRIS DOUBLE Chris is a high school student and is eager to gain some muscle mass. He eats a diet that consists primarily of processed foods (fast food, cafeteria, etc.). Chris is anxious
to get stronger so that he can go out for the wrestling team. CLIENT ...
Wednesday, October 9, 2013 at 12:29am
In Mr. Woo's class, 80% of students live more than half mile from school. Of those students 80% come to school by public transportation. Of those students taking public transportation, 75% takes the
bus, and 75% of those students buy monthly bus passes. Nine students buy ...
Wednesday, October 9, 2013 at 12:25am
A certain school's enrollment increased 5% this year over last year's enrollment. If the school now has 1,260 students enrolled, how many students were enrolled last year? A. 1,020 B. 1,197 C. 1,200
D. 1,255 E. 1,323 I got the answer to be 1200, but that's after ...
Wednesday, October 9, 2013 at 12:25am
A certain school's enrollment increased 5% this year over last year's enrollment. If the school now has 1,260 students enrolled, how many students were enrolled last year?
Wednesday, October 9, 2013 at 12:23am
1. They are having an election for school president. 2. They are having an election for the school president. 3. They are having an election for a school president. (Which one is grammatical and
commonly used?)
Tuesday, October 8, 2013 at 7:43pm
What is the title for the student who is a representative of all the students in school? is it school president or student president? Would you let me know about that? Thank you.
Tuesday, October 8, 2013 at 6:56pm
And these... BEFORE the first day of school, they had only used the computer to play games. a. adverb b. preposition c. adjective d. conjunction A In order to see the double rainbow, the entire class
went OUTSIDE. a. conjunction b. adjective c. preposition d. adverb A Are ...
Tuesday, October 8, 2013 at 3:36pm
Please Help
Indicate your specific subject in the "School Subject" box, along with the specific problem in the content box, so those with expertise in the area will respond to the question.
Tuesday, October 8, 2013 at 1:41pm
Check my answer plz!!
You need to repost this and put Math in the School Subject space. You're more likely to get a response from a math tutor.
Tuesday, October 8, 2013 at 11:09am
Kharkutta govt. School
Use number line to evaluate the following. (1). -5+(-3) (2). 4-(-4)
Tuesday, October 8, 2013 at 9:47am
Kharkutta govt. School
a + a+d + a+2d = 597 3a + 3d = 597 a+d = 199 ----> d = 199-a a(a+d) = 796 a^2 + ad - 796=0 a^2 + a(199-a) - 796=0 a^2 + 199a - a^2 - 796=0 199a = 796 a = 4 d = 199-4 = 195 the 3 parts are 4, 199, 394
check: 4+199+394 = 597 4(199) = 796
Monday, October 7, 2013 at 11:40pm
Kharkutta govt. School
Split 597 into three parts such that these are in A.P. and the product of the two smallest parts is 796
Monday, October 7, 2013 at 11:34pm
the most important aspects of classical conditioning is that the conditioned stimulus becomes a _______ for the unconditional stimulus: a signal b stimulus generalization c stimulus discrimination d
high-color conditioning support
Monday, October 7, 2013 at 10:26pm
f*ck school !
Wtf demi lovato ? That ugly fat girl ? c'x Gtfo ! O.o That aint even the real demi ? cx lmfao Im out . Laaaates
Monday, October 7, 2013 at 7:56pm
foreign language
You could wait until it's offered in your school. Or try one of these sites. http://www.google.com/search?q=quadratic+formula&rlz=1C1AVSX_enUS414US423&oq=quadra&aqs=chrome.1.69i57j0l5.4633j0j8&
Monday, October 7, 2013 at 7:45pm
finance 571
Chip's Home Brew Whiskey management forecasts that if the firm sells each bottle of Snake-Bite for $20, then the demand for the product will be 15,000 bottles per year, whereas sales will be 85 percent as high if the price is raised 16 percent. Chip's variable cost ...
Monday, October 7, 2013 at 5:01pm
A man on the 8th floor of a building sees a bucket (dropped by a window washer) pass his window and notes that it hits the ground 1.5s later. Assuming a floor is 12ft high (and neglecting air
friction), from which floor was the bucket dropped?
Monday, October 7, 2013 at 4:58pm
Geography (Ms. Sue)
What is the value placed upon education in the countries of Russia and the Republics? A: The value placed upon education is extremely high in the region?
Monday, October 7, 2013 at 3:26pm
poetry (Pioneer by Dorothy Livesay)
I have to answer the questions according to this poem, I have answered them but just want someone to look over them to see if they are right. The last one I had trouble with and need help on that one
too. The answer that I think it is I have put arrows on them. Help would be ...
Monday, October 7, 2013 at 11:54am
If you mean your school's one and only school marathon, then 3 is correct. If you mean one of many schools' marathons, then 2 is correct. The others are incorrect.
Monday, October 7, 2013 at 7:50am
1. I got up late and missed the bus which I had to take in the morning. I was late for school and my teacher got mad at me when I entered my classroom. I had to remain in the classroom to clean it by
myself. 2. I got up late and missed the bus number 6 which I had to take in ...
Monday, October 7, 2013 at 7:48am
1. My sister and I took part in school marathon yesterday. 2. My sister and I took part in a school marathon yesterday. 3. My sister and I took part in the school marathon yesterday. 4. My sister and
I took part in School Marathon yesterday. 5. My sister and I took part in the...
Monday, October 7, 2013 at 6:19am
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 48 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Monday, October 7, 2013 at 4:33am
Hi, Am I on the right track? Reading for an hour now..and wanted to check my answers.. Which modem uses the same medium used to transmit video signals" My answer: C. Cable ADSL BRI DiGITAL '2. Which
is the fastest wireless standard? My answer: 802.11G 802.11b WAP WI-...
Sunday, October 6, 2013 at 10:49pm
In proton-beam therapy, a high-energy beam of protons is fired at a tumor. The protons come to rest in the tumor, depositing their kinetic energy and breaking apart the tumor s DNA, thus killing its
cells. For one patient, it is desired that 0.10 J of proton energy be ...
Sunday, October 6, 2013 at 10:14pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 5:49pm
Geography (Ms. Sue)
1, What is this? "cracking steel of vehicles ? Did you ever hear of floods, snow storms, tornadoes? What makes it hard to get into the high Sierras in January? 2. "In areas of continuous permafrost
and harsh winters, the depth of the permafrost can be as much as 1,...
Sunday, October 6, 2013 at 5:01pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 4:58pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 4:41pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 42 ∘.
The gravitational acceleration is g=10 m/s2 (a) At what ...
Sunday, October 6, 2013 at 3:52pm
Ethanol (C2H5OH) is synthesized for industrial use by the following reaction, carried out at very high pressure. C2H4(g) + H2O(g) → C2H5OH(l) What is the maximum amount, in kg, of ethanol that can be
produced when 1.65 kg of ethylene (C2H4) and 0.0610 kg of steam are ...
Sunday, October 6, 2013 at 3:13pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 3:10pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 3:00pm
Victor drove his truck off a cliff that was 45m high, and it landed 35m from the base of the cliff. What was the velocity of the truck as it drove off the cliff?
Sunday, October 6, 2013 at 2:46pm
I've started working on an article for my Journalism class about vegetarianism - pros and cons, and why or why not people at my school support it. I've gotten a few paragraphs in, but I'm having
trouble on writing the lead. Any suggestions?
Sunday, October 6, 2013 at 2:37pm
1: Use the standard equation in x- axis for time taken to travel 15+S meter. 2: It will take same time to travel in the Y direction. The Y distance traveled is 6+S(Tan30) 3: use y(t)=V0(t)-5t^2.
Substitute for t from 1 above. Then it is school level maths. 4: Once you get ...
Sunday, October 6, 2013 at 2:28pm
What stepping rate is needed if a person wants to work at a Vo2 of 45 mL/kg/min using a bench that is 34 cm high?
Sunday, October 6, 2013 at 12:59pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 12:54pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 11:39am
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 9:41am
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 8:28am
Economic's,tourism,geography,pure math's
What can i study when i finnish school i am doing grade 10
Sunday, October 6, 2013 at 7:53am
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 45 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Sunday, October 6, 2013 at 7:22am
Perhaps. Check with your high school counselor.
Saturday, October 5, 2013 at 10:02pm
Assume full mass of rocket is m, before igniting. If one assumes the mass of the rocket does not change when burning fuel.. net thrust*time=mass*velocityatburnout ma*time=m*vburnout 28.1*5.21=vburnout.
how high has it gone at burnout? Vf^2=2ah where vf=burnout velocity. solve for ...
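Carrying the numbers through (a quick sketch; the entry above stops at the setup): vburnout = 28.1*5.21 = 146.4 m/s, and the height at burnout is h = Vf^2/(2a) = 146.4^2/(2*28.1) = 381 m (the same as (1/2)(28.1)(5.21)^2). If the question wants the maximum height rather than the burnout height, the rocket coasts a further v^2/(2g) = 146.4^2/19.6 = 1094 m, for about 1475 m total.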
Saturday, October 5, 2013 at 9:20pm
A rocket moves upward, starting from rest with an acceleration of 28.1 m/s2 for 5.21 s. It runs out of fuel at the end of the 5.21 s, but does not stop. How high does it rise above the ground?
Saturday, October 5, 2013 at 8:19pm
Geometry Letter Grammar??
Huh!??? I haven't written such a letter since I was in high school. Obviously we are not going to impersonate your mother by writing a letter. Doesn't your mom want you to drop this class?
Saturday, October 5, 2013 at 6:57pm
criminal justuce
From what I observe, criminal activity is not a high priority in this federal administration. It's content to let the appropriate agencies do their jobs. http://en.wikipedia.org/wiki/
Saturday, October 5, 2013 at 6:15pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 5:33pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 5:28pm
A toy car runs off the edge of a table that is 1.150 m high. The car lands 0.380 m from the base of the table. It takes 0.48 s for it to fall. How fast was the car going on the table? ______m/s
Saturday, October 5, 2013 at 5:20pm
physics (it's rather urgent help is appreciated
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 5:01pm
Mary is passing out school supplies to students in the class. She gives colored pencils to every second student, markers to every third student, and crayons to every fifth student. The second student
received all three. Which student will be the next to receive all three?
Saturday, October 5, 2013 at 4:57pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 3:32pm
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 35 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 10:11am
We are standing at a distance d=15 m away from a house. The house wall is h=6 m high and the roof has an inclination angle β=30 ∘. We throw a stone with initial speed v0=20 m/s at an angle α= 37 ∘.
The gravitational acceleration is g=10 m/s2. (See figure...
Saturday, October 5, 2013 at 10:09am
Home School. PLEASE HELP ME!!
did anyone know what page and question 4 was found on?
Saturday, October 5, 2013 at 12:20am
Home School. PLEASE HELP ME!!
did anyone know what page and question 4 was found on?
Saturday, October 5, 2013 at 12:18am
Geography (Ms. Sue)
1). In what way does Western Europe have a diverse economy? A: Western Economy has a diverse economy as it includes agriculture and manufacturing, as well as high-tech and service industries.
Friday, October 4, 2013 at 9:10pm
A tennis player hits a ball 2.0 m above the ground. The ball leaves his racquet with a speed of 15.0m/s at an angle 5.3∘ above the horizontal. The horizontal distance to the net is 7.0 m, and the net
is 1.0 m high. How much did it clear the net by?
Friday, October 4, 2013 at 4:54pm
2. The National polling organization conducted a phone survey of 850 American adults on what they considered to be the most serious problem facing the nation's public schools; 30% said violence. This
sample percent is an estimate of the percent of all adults who think that ...
Friday, October 4, 2013 at 12:30pm
A hot air balloon descends from a pressure of 380 mm Hg to 760 mm Hg and at the high altitude has a volume of 1,000 cubic meters. If the temperature starts at 50 degrees C and changes to 25 degrees
C, the new volume will be: a. 100.0 x 50 x 760/25 x 380 b. 100.0 x 50 x 25/760 ...
Friday, October 4, 2013 at 10:10am
A U
Question : <11{1[4(23)23]}> People have claimed that Jamie's paintings have given them the blues. Clearly this cannot be entirely correct, since many of Jamie's paintings contain no blue at all. The
argument above is flawed because the author: Student Answer: ...
Friday, October 4, 2013 at 12:48am
water tower is 11 feet high diameter is 18 feet,how many cubic feet are in this tower?
Thursday, October 3, 2013 at 11:49pm
advance math
an aquarium is in the shape of a right rectangular prism. each side of the aquarium is a rectangle that is 10 in wide and 8 in high. when the aquarium is tilted, the water in it just cover an 8 in by
10 in end, but only three-fourths of the rectangular bottom. find the depth ...
Thursday, October 3, 2013 at 10:24pm
A small lead ball of mass 2kg is suspended at the end of a light string 1m in length. A small peg, 0.5m below the suspension point, catches the string in its swing. The ball is set swinging through
small angles. A) What is the period of the pendulum? B) The ball has started ...
Thursday, October 3, 2013 at 10:07pm
c) universe is rapidly expanding with very high speed and temperature thus it didn't collapse into a black hole. The Schwarzschild solution of the gravitational equations is static and demonstrates
the limits placed on a static spherical body before it must collapse to a ...
Thursday, October 3, 2013 at 7:10pm
A centrifuge is a device in which a small container of material is rotated at a high speed on a circular path. Such a device is used in medical laboratories, for instance, to cause the more dense red
blood cells to settle through the less dense blood serum and collect at the ...
Thursday, October 3, 2013 at 6:59pm
A centrifuge is a device in which a small container of material is rotated at a high speed on a circular path. Such a device is used in medical laboratories, for instance, to cause the more dense red
blood cells to settle through the less dense blood serum and collect at the ...
Thursday, October 3, 2013 at 6:58pm
OK. Thanks. I told you I don't know much about science. Please post # 2 as a new question. Use Basic Chemistry as your School Subject.
Thursday, October 3, 2013 at 1:58pm
Electricity and Electronics
1. A transformer has a primary voltage of 115 V and a secondary voltage of 24 V. If the number of turns in the primary is 345, how many turns are in the secondary? A. 8 B. 690 C. 72 D. 1,653 2. To
use your left hand to determine the direction of the voltage developed in a ...
Thursday, October 3, 2013 at 1:47pm
For many years pellagra was believed to be an infectious disease promoted by unsanitary conditions. it was common in sanatoriums and orphanages but only among the patients and not the staff. a study
of the diets showed that suffers of pellagra rarely ate high quality proteins...
Thursday, October 3, 2013 at 12:09pm
English !!!
Editorials are opinion pieces, usually written by an editor of a newspaper or news website. Basically, this means you need to decide on something YOU believe is a serious issue that your school or
community is facing and write about it. http://www.udemy.com/blog/how-to-write-a...
Thursday, October 3, 2013 at 11:10am
English !!!
For this assignment, you will write an editorial about an issue that confronts your school or community. What does this mean?
Thursday, October 3, 2013 at 10:36am
1. If you're driving towards the sun late in the afternoon, you can reduce the glare from the road by wearing sunglasses that permit only the passage of light that's A. dispersed in a spherical
plane. B. polarized in a vertical plane. C. polarized in a horizontal plane...
Thursday, October 3, 2013 at 10:11am
i would like to know on what is required for a high chance to get accepted to uc berkeley
Wednesday, October 2, 2013 at 9:00pm
Which of the following lipoproteins carries dietary fat that is absorbed into the digestive system to the liver and body cells for use? a. Chylomicrons b. High density lipoprotein c. Micelle d. Low
density lipoproteins I think its A is that correct?
Wednesday, October 2, 2013 at 7:23pm
3/4 of the students in school were girls and the rest were boys. 2/3 of the girls and 1/2 of the boys attended the school carnival. Find the total number of students in the school if 330 students did
not attend the carnival
Wednesday, October 2, 2013 at 7:03pm
math 5th grade
Because this not a high school problem it has to simple they ask for 17 + other number squared = age. This what you have to do 2209 root = 47- 17= 30 (17+a= 30)2= 2209 Her age 47 You can't squared 17
and take the amount out of 2209 because it will give you 1920 and you ...
Wednesday, October 2, 2013 at 3:24pm
10. Hank was 62 1/8 inches tall at the end of school in June. He was 63 7/8 in September. How much did he grow during the summer? 1 3/4 11. 2/5+1/5=3/5 12. 3/10+7/10=1 13.(-3/4)+(-3/4)= -1 1/2
Tuesday, October 1, 2013 at 10:23pm
6th grade grammar
Thanks Ms. Sue!! You mentioned that there was another phrase for #2 (We were looking for the documents that were hidden under the generator.). But, I don't see it. Also, can you check these 2? 18. I
kicked out the vent and we jumped from the shaft into a dumpster filled ...
Tuesday, October 1, 2013 at 9:35pm
The Johnsons have accumulated a nest egg of $27,000 that they intend to use as a down payment toward the purchase of a new house. Because their present gross income has placed them in a relatively
high tax bracket, they have decided to invest a minimum of $1300/month in ...
Tuesday, October 1, 2013 at 7:20pm
Why didn't the Universe collapse into a black hole in the past when there was so much matter in such a small space? (a) Most of the particles in the early Universe were unstable and decayed before a
black hole could form. By the time only stable particles were left (...
Tuesday, October 1, 2013 at 5:13pm
math - calc
A conical water tank with vertex down has a radius of 12 feet at the top and is 26 feet high. If water flows into the tank at a rate of 30 {\rm ft}^3{\rm /min}, how fast is the depth of the water
increasing when the water is 12 feet deep?
Tuesday, October 1, 2013 at 3:13pm
{"url":"http://www.jiskha.com/high_school/?page=15","timestamp":"2014-04-20T09:09:13Z","content_type":null,"content_length":"41699","record_id":"<urn:uuid:71956beb-9e87-4aff-9fd7-6f23d79a9a23>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US6937966 - System and method for on-line adaptive prediction using dynamic management of multiple sub-models
The present invention relates generally to performance management and, more particularly, to automated performance management techniques which provide on-line adaptive predictions using dynamic
management of multiple sub-models.
Predictive models are widely used for tasks in many domains. Examples include: anticipating future customer demands in retailing by extrapolating historical trends; planning equipment acquisition in
manufacturing by predicting the outputs that can be achieved by production lines once the desired machines are incorporated; and diagnosing computer performance problems by using queuing models to
reverse engineer the relationships between response times and service times and/or arrival rates.
Predictive models can take many forms. Linear forecasting models, such as Box-Jenkins models, are widely used to extrapolate trends. Weather forecasting often uses systems of differential equations.
Analysis of computer and manufacturing systems frequently use queuing models.
Predictive models are of two types. Off-line models estimate their parameters from historical data. This is effective for processes that are well understood (e.g., industrial control) but is much
less effective for processes that change rapidly (e.g., web traffic). On-line models adjust their parameters with changes in the data and so are able to adapt to changes in the process. For this
reason, a focus of the present invention is on-line models.
Another consideration is the exploitation of multiple models. For example, in computer systems, forecasting models are used to anticipate future workloads, and queuing models are employed to assess
the performance of equipment at the future workload levels. Indeed, over time, it is often necessary to use many models in combination.
To illustrate this point, consider a forecasting model for web server traffic: the model described in J. Hellerstein, F. Zhang, and P. Shahabuddin, "An Approach to Predictive Detection for Service Level Management," Integrated Network Management VI, edited by M. Sloman et al., IEEE Publishing, May 1999, the disclosure of which is incorporated by reference herein. That model forecasts the number of hypertext operations per second at time t, which we denote by S(t). The following models are considered:
• 1. S(t) is determined entirely by its mean. That is, S(t)=mean+e(t), where e(t) is the model's “residual,” i.e., what is left after the effect of the model is removed.
• 2. S(t) is determined by its mean and time of day. That is, t=(i,l), where i is an interval during a 24-hour day and l specifies the day. For example, days might be segmented into five-minute intervals, in which case i ranges from 1 to 288. Thus, S(i,l)=mean+mean_tod(i)+e(i,l).
• 3. S(t) is determined by its mean, time of day and day of week. That is, t=(i,j,l), where i is an interval during a 24-hour day, j indicates the day of week (e.g., Monday, Tuesday), and l specifies the week instance. Thus, S(i,j,l)=mean+mean_tod(i)+mean_day_of_week(j)+e(i,j,l).
• 4. S(t) is determined by its mean, time of day, day of week and month. Here, t=(i,j,k,l), where k specifies the month and l specifies the week instance within a month. Thus, S(i,j,k,l)=mean+mean_tod(i)+mean_day_of_week(j)+mean_month(k)+e(i,j,k,l).
It turns out that the S(i,j,k,l) model provides the best accuracy. This raises the question: why not use this model and ignore the others? The answer lies in the fact that the data is non-stationary. Using the techniques employed in the above-referenced Hellerstein, Zhang, and Shahabuddin article, obtaining estimates of mean_tod(i) requires at least one measurement of the i-th time-of-day value. Similarly, at least one week of data is required to estimate mean_day_of_week(j), and several months of data are required to estimate mean_month(k).
Under these circumstances, a reasonable approach is to use different models depending on the data available. For example, we could use model (1.) above when less than a day of history is present;
model (2.) when more than a day and less than a week is present, and so on.
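For concreteness, here is a minimal sketch of this progression in Python (a sketch only; every identifier and threshold below is invented for illustration and is not prescribed by the patent):

INTERVALS_PER_DAY = 288  # five-minute intervals, as in the example above

def pick_model(history_len):
    """Choose the richest sub-model combination the available history supports."""
    if history_len < INTERVALS_PER_DAY:        # less than one day of data
        return ("mean",)
    if history_len < 7 * INTERVALS_PER_DAY:    # less than one week
        return ("mean", "tod")
    if history_len < 60 * INTERVALS_PER_DAY:   # assumed cutoff before month effects can be fit
        return ("mean", "tod", "dow")
    return ("mean", "tod", "dow", "month")

def predict(t, params, active):
    """Additive prediction S(t); each active sub-model contributes one term."""
    s = params["mean"]
    if "tod" in active:
        s += params["mean_tod"][t["i"]]
    if "dow" in active:
        s += params["mean_day_of_week"][t["j"]]
    if "month" in active:
        s += params["mean_month"][t["k"]]
    return s

The additive form mirrors the model equations above; the thresholds stand in for the estimation requirements just discussed.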
Actually, the requirements are a bit more complex still. A further issue arises in that we need to detect when the characteristics of the data have changed so that a new model is needed. This is
referred to as change-point detection, see, e.g., Basseville and Nikiforov, “Detection of Abrupt Changes” Prentice Hall, 1993, the disclosure of which is incorporated by reference herein. Change
point detection tests for identically distributed observations (i.e., stationarity) under the assumption of independence. However, it turns out that the residuals of the above model are not
independent (although they are identically distributed under the assumption of stationarity and the model being correct). Thus, still another layer of modeling is required. In the above-referenced
Hellerstein, Zhang, and Shahabuddin article, a second order autoregressive model is used. That is, e(t)=a1*e(t−1)+a2*e(t−2)+y(t), where a1 and a2 are constants estimated from the data.
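As a rough illustration of this residual model, the AR(2) coefficients can be fit by least squares; the sketch below uses numpy, which, like all names here, is an assumption of this illustration rather than something the patent specifies:

import numpy as np

def fit_ar2(e):
    """Estimate a1, a2 in e(t) = a1*e(t-1) + a2*e(t-2) + y(t) by least squares."""
    e = np.asarray(e, dtype=float)
    X = np.column_stack([e[1:-1], e[:-2]])     # lag-1 and lag-2 columns
    (a1, a2), *_ = np.linalg.lstsq(X, e[2:], rcond=None)
    return a1, a2

def innovations(e, a1, a2):
    """y(t) = e(t) - a1*e(t-1) - a2*e(t-2); approximately i.i.d. when the model
    holds, which is what makes it suitable input for a change-point test."""
    e = np.asarray(e, dtype=float)
    return e[2:] - a1 * e[1:-1] - a2 * e[:-2]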
So the question arises: What happens after a change-point is detected? There are two possibilities. The first is to continue using the old model even though it is known not to accurately reflect the
process. A second approach is to re-estimate process parameters. That is, data that had been used previously to estimate parameter values must be flushed and new data must be collected. During this
period, no prediction is possible. In general, some prediction is required during this transition period. Thus, it may be that a default model is used until sufficient data is collected.
The foregoing motivates the requirements that the present invention envisions for providing adaptive prediction. First, it must be possible to add new modeling components (e.g., include time-of-day
in addition to the process mean) when sufficient data is available to estimate these components and it is determined that by adding the components there is an improvement in modeling accuracy.
Second, we must be able to remove modeling components selectively as non-stationarities are discovered. For example, it may be that the day-of-week effect changes in a way that does not impact time-of-day. Thus, we need to re-estimate the mean_day_of_week(j) but we can continue using the mean_tod(i).
Existing art includes: the use of multiple models, e.g., U.S. Pat. No. 5,862,507 issued to Wu et al.; multiple models, e.g., P. Eide and P. Maybeck, "MMAE Failure Detection System for the F-16," IEEE Transactions on Aerospace Electronic Systems, vol. 32, no. 3, 1996; adaptive models, e.g., V. Kadirkamanathan and S. G. Fabri, "Stochastic Method for Neural-adaptive Control of Multi-modal Nonlinear Systems," Conference on Control, pp. 49–53, 1998; and the use of multiple modules that adaptively select data, e.g., Rajesh Rao, "Dynamic Appearance-based Recognition," IEEE Computer Society Conference on Computer Vision, pp. 540–546, 1997, the disclosures of which are incorporated by reference herein. However, none of these addresses the issue of "dynamic management of multiple on-line models" in that this art does not consider either: (a) when to exclude a model; or (b) when to include a model.
There is a further consideration as well. This relates to the manner in which measurement data is managed. On-line models must (in some way) separate measurement data into "training data" and "test data." Training data provides a means to estimate model parameters, such as mean, mean_tod(i), mean_day_of_week(j), and mean_month(k). Test data provides a means to check for change points. In the existing art, a single repository (often in-memory) is used to accumulate data for all sub-models. Data in this repository is partitioned into training and test data. Once sufficient data has been accumulated to estimate parameter values for all sub-models and sufficient test data is present to test for independent and identically distributed residuals, the validity of the complete model is checked. A central observation is that dynamic management of multiple models requires having separate training data for each model. Without this structure, it is very difficult to selectively include and exclude individual models. However, this structure is not present in the existing art.
The present invention addresses the problem of prediction of non-stationary processes by dynamically managing multiple models. Herein, we refer to the constituent models as “sub-models.” We use the
term “model” to refer to the end-result of combining sub-models. It is to be appreciated that, once there is an accurate model, predictions are obtained from models in a straightforward way, e.g., as
in least squares regression, time series analysis, and queuing models.
Dynamic management of sub-models according to the invention provides an ability to: (i) combine the results of sub-models; (ii) determine change points; that is, when the model is no longer a
faithful characterization of the process; (iii) identify the sub-model(s) to exclude when a change point occurs and/or as more data is acquired; (iv) identify the sub-model(s) to include when a
change point occurs and/or as more data is acquired; and (v) manage training and test data in a way to accomplish the above objectives.
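An interface sketch of these five capabilities (the class and method names are illustrative, not the patent's):

class DynamicModelManager:
    def combine(self, sub_model_results):   # (i) combine sub-model results
        raise NotImplementedError
    def change_point_detected(self):        # (ii) decide when the model no longer fits
        raise NotImplementedError
    def exclude(self, sub_model):           # (iii) drop a sub-model after a change point
        raise NotImplementedError
    def include(self, sub_model):           # (iv) add a sub-model as more data accrues
        raise NotImplementedError
    def update_data(self, sample):          # (v) route data to training and test stores
        raise NotImplementedError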
In one aspect of the present invention, an on-line adaptive prediction system employing dynamic management of multiple sub-models may comprise the following components in order to address the
foregoing objectives. A sub-model combiner component combines the results of sub-models. This is in part based on information in the model context that includes combining functions that specify how
the results of sub-models should be combined. A model assessor component computes residuals of the model and checks for change points. A model adaptor component determines the sub-models to include
and/or exclude, updating the model context as needed. Training data is maintained separately by each sub-model to enable the dynamic inclusion and exclusion of sub-models. Test data is managed by the
model assessor component.
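A sketch of what such a model context might hold (the field names are assumed; the additive combining function matches the seasonal example given earlier):

model_context = {
    "active_sub_models": ["mean", "tod"],   # current choice of sub-models
    "combine": lambda parts: sum(parts),    # combining function: additive here
}

Storing the combining function in the context, rather than hard-coding it in the combiner, is what lets the model adaptor change the combination without modifying the combiner itself.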
The present invention provides two central processes. The first details the steps taken when new measurement data is made available to the prediction system. In one aspect of the invention, the
process includes steps for: (a) updating test data; (b) updating training data of each sub-model and its estimates of parameters; (c) testing for change points; and (d) determining the best
combination of sub-models based on the results of change point detection and other factors. The second process details the actions performed when an application requests a prediction. In one aspect
of the invention, this process includes the steps of: (a) determining the input parameters for each sub-model; (b) requesting predictions from each sub-model; and (c) combining the results.
The present invention provides numerous benefits to developers of systems that require a predictive capability for non-stationary processes. First, accuracy can be improved by choosing the best
combination of sub-models. The invention supports this by having a flexible technique for sub-model inclusion and exclusion, as well as a means to test for change points.
Second, the present invention provides methodologies to adjust incrementally the model as more data is available for parameter estimation. Accurate models often require considerable data to estimate
parameters. However, less accurate models are possible if the data available is modest. If the process being modeled is non-stationary, then the data available for parameter estimation will vary
greatly. Specifically, if change points are frequent, then little training data is acquired before the next change point, at which point the data must be discarded so that parameters can be estimated for the new regime of the process. On the other hand, if change points are infrequent, considerable data can be acquired and hence it is possible to include sub-models that require more
parameters. As such, there is considerable benefit to a technique that adapts the sub-models used based on the data available.
Third, the modular structure provided by the present invention greatly facilitates the incremental inclusion and exclusion of sub-models, as well as the manner in which they are combined. Thus, it is
much easier to update the model than would be the case in a technique that hard codes sub-models and their relationships.
These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in
connection with the accompanying drawings.
FIG. 1 is a block diagram illustrating an overall architecture of an environment in which an on-line adaptive prediction system employing dynamic management of multiple sub-models according to one
embodiment of the present invention may operate;
FIG. 2 is a block diagram illustrating an on-line adaptive prediction system employing dynamic management of multiple sub-models according to one embodiment of the present invention;
FIG. 3 is a block diagram illustrating a sub-model component according to one embodiment of the present invention;
FIG. 4 is a flow diagram illustrating a process for handling data updates in an on-line adaptive prediction system employing dynamic management of multiple sub-models according to one embodiment of
the present invention;
FIG. 5 is a flow diagram illustrating a process for handling prediction requests in an on-line adaptive prediction system employing dynamic management of multiple sub-models according to one
embodiment of the present invention;
FIG. 6 is a flow diagram illustrating a process for estimating parameters in a sub-model component according to one embodiment of the present invention;
FIG. 7 is a flow diagram illustrating a process for computing predictions in a sub-model component according to one embodiment of the present invention; and
FIG. 8 is a block diagram illustrating a generalized hardware architecture of a computer system suitable for implementing an on-line adaptive prediction system employing dynamic management of
multiple sub-models according to the present invention.
The present invention will be explained below in the context of an illustrative on-line environment and predictive model application arrangement. However, it is to be understood that the present
invention is not limited to such a particular arrangement. Rather, the invention is more generally applicable to any on-line environment and predictive model application arrangement in which it is
desirable to: (i) improve the accuracy of the predictive models by adjusting one or more predictive models as more data is available for model parameter estimation by inclusion and/or exclusion of
sub-models in the one or more predictive models; and/or (ii) improve the accuracy of the predictive models by improving the handling of prediction requests.
Referring now to FIG. 1, a block diagram is shown illustrating an overall architecture of an environment in which an on-line adaptive prediction system employing dynamic management of multiple
sub-models according to one embodiment of the present invention may operate. As shown, an end user 100 interacts with applications 110-1 through 110-N that exploit predictive models that, in turn,
use one or more model subsystems 120-1 through 120-M. The subsystems 120-1 through 120-M comprise the on-line adaptive prediction system employing dynamic management of multiple sub-models.
It is to be understood that model applications may be computer programs which perform some function based on the domain in which they are employed, e.g., anticipating future customer demands in
retailing by extrapolating historical trends; planning equipment acquisition in manufacturing by predicting the outputs that can be achieved by production lines once the desired machines are
incorporated; and diagnosing computer performance problems by using queuing models to reverse engineer the relationships between response times and service times and/or arrival rates.
It is to be further understood that the end-user may include a computer system that is in communication with the one or more computer systems on which the model applications and model subsystems are
running. The end-user system may be remote from these other computer systems, or co-located with one or more of them. The computer systems may be connected by any suitable network.
As will be explained in detail below, the model subsystems make use of model contexts repositories 130-1 through 130-M. Each model context repository contains information such as the way in which
sub-models are combined and the current choice of sub-models. Model subsystems 120-1 through 120-M are informed of data updates by the data access component 140. The data being provided to the data
access component is coming from the process or system that the model application is interfacing with, e.g., the retailer, the production line, the computer network whose performance is being
considered, etc. It is to be appreciated that while more than one model application and more than one model subsystem is shown in FIG. 1, the system may operate with one or more model applications
and model subsystems.
Referring now to FIG. 2, a block diagram is shown illustrating an on-line adaptive prediction system employing dynamic management of multiple sub-models according to one embodiment of the present
invention. Particularly, FIG. 2 depicts an embodiment of one of the model subsystems (120-1 through 120-M) of FIG. 1. The model subsystem comprises sub-models 200-1 through 200-K, combining functions
210-1 through 210-L, a sub-model combiner 220, test data 230, model assessor 240, a model controller 250 and a model adaptor 260.
As shown in FIG. 2, both the data access component (140 in FIG. 1) and model applications (110-1 through 110-N in FIG. 1) make their requests to the model controller 250, which controls the overall
flow within the model subsystem. The model adaptor 260 determines if a new combination of sub-models should be used by consulting the model assessor 240. The latter computes the residuals of the
model for test data 230 and maintains test data. The sub-model combiner 220 is responsible for computing predictions by invoking each sub-model (200-1 through 200-K) and combining the results by
consulting the model context and using the appropriate combining functions (210-1 through 210-L). Doing so requires determining the parameters for each sub-model. In addition, the sub-model combiner
determines the data to be provided to sub-models when a data update occurs. The combining functions take as input the results of one or more sub-models and compute partial results. The sub-models
accept two kinds of requests: (i) data update requests; and (ii) prediction requests.
Referring now to FIG. 3, a block diagram is shown illustrating a sub-model component according to one embodiment of the present invention. Specifically, FIG. 3 illustrates components of a sub-model
such as sub-models 200-1 through 200-K in FIG. 2. As shown, the sub-model comprises a parameter estimation component 305, sub-model training data 310, a result computation component 320 and a
sub-model descriptor 330.
In operation, data update requests 302 are made to the parameter estimation component 305, which interacts with the sub-model training data 310 and the sub-model descriptor 330. The former contains
the data needed to estimate the parameters of the model. The latter specifies the data required to perform these estimates and contains the values of the parameter estimates. Prediction requests 315
are made to the result computation component 320, which reads the parameter values and the specifics of the computation to perform from the sub-model descriptor 330.
Referring now to FIG. 4, a flow diagram is shown illustrating a process for handling data updates in an on-line adaptive prediction system employing dynamic management of multiple sub-models
according to one embodiment of the present invention. Reference will therefore be made back to components of FIGS. 2 and 3. The process begins at step 400, where the request enters with data. In step 405, the test data is updated. In step 410, an iteration is done over each sub-model, in which step 415 invokes the sub-model to estimate the model parameters for the data presented. In step 417, a check is done to see whether sufficient data is present to do change-point detection for the current model. If not, step 430 resets the test data. Otherwise, step 420 tests for a change-point. If a change-point is present, step 425 resets the training data and parameters for each sub-model by invoking it with null data. In step 435, the best combination of sub-models is determined. Sub-models can be
evaluated in standard ways, such as minimizing residual variance or maximizing the variability explained. The process terminates at block 440. It is to be understood that test data is data used to
evaluate a sub-model. This is separate from the data used to estimate parameters of a sub-model.
With reference back to FIGS. 2 and 3, it is to be appreciated that step 405 is accomplished by the combination of the model controller 250, the model adaptor 260, and the model assessor 240; step 410
by the sub-model combiner 220; step 415 by the parameter estimation component 305; steps 417 and 420 by the model assessor 240; step 435 by the model adaptor 260; and steps 425 and 430 by the
sub-model combiner 220 in combination with each sub-model 200.
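The FIG. 4 flow, rendered as a sketch (step numbers refer to the figure; all identifiers are invented for illustration):

def on_data_update(data, sub_models, assessor, adaptor):   # step 400: request enters
    assessor.update_test_data(data)                        # step 405
    for sm in sub_models:                                  # step 410
        sm.estimate_parameters(data)                       # step 415
    if not assessor.enough_data_for_test():                # step 417
        assessor.reset_test_data()                         # step 430
    elif assessor.change_point_detected():                 # step 420
        for sm in sub_models:                              # step 425
            sm.estimate_parameters(None)                   # null data resets the sub-model
    adaptor.choose_best_combination(sub_models)            # step 435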
Referring now to FIG. 5, a flow diagram is shown illustrating a process for handling prediction requests in an on-line adaptive prediction system employing dynamic management of multiple sub-models
according to one embodiment of the present invention. Reference will therefore be made back to components of FIGS. 2 and 3. In step 500, the process begins with entering the parameters to use in the
prediction. The parameters for each sub-model used are determined in step 505. Step 510 iterates across each sub-model in the model. In step 515, the prediction is computed for each sub-model. Step
520 combines the results. The decision as to which sub-model to use is determined by the sub-model combiner 220 in combination with the model context 130. The latter is updated by the model adaptor
260 when it determines the best combination of sub-models to use (step 435 in FIG. 4). The process terminates in block 525.
Thus, with reference back to FIGS. 2 and 3, it is to be appreciated that steps 505, 510 and 520 are accomplished by the sub-model combiner 220, while step 515 is done by each sub-model 200.
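The corresponding sketch of the FIG. 5 flow (again, step numbers refer to the figure and every name is illustrative):

def on_prediction_request(request_params, sub_models, context):      # step 500
    results = []
    for name in context["active_sub_models"]:                        # step 510
        sm_inputs = sub_models[name].select_inputs(request_params)   # step 505
        results.append(sub_models[name].predict(sm_inputs))          # step 515
    return context["combine"](results)                               # step 520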
Referring now to FIG. 6, a flow diagram is shown illustrating a process for estimating parameters in a sub-model component according to one embodiment of the present invention. Specifically, FIG. 6
depicts details of the estimate parameter operation (step 415 in FIG. 4 and the parameter estimation component 305 in FIG. 3) as performed with respect to a sub-model. The process begins at step 600
when a sub-model is invoked to estimate parameters. Step 605 tests if the data provided on input is null. If so, step 610 invalidates or resets the parameter estimates in the sub-model descriptor (
330 in FIG. 3), and step 612 resets the training data (310 in FIG. 3) in the sub-model. The process then terminates at block 635. If the data provided on input is not null, step 615 updates the
training data. Step 620 tests if sufficient data is present to estimate the parameters of the model. If not, the process terminates at block 635. Otherwise, step 625 estimates the parameters, and
step 630 updates the sub-model descriptor with the parameter values.
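A minimal rendering of this logic as a sub-model class (illustrative only; the patent leaves "sufficient data" abstract, so the threshold below is an assumption):

class SubModel:
    MIN_SAMPLES = 30                    # assumed sufficiency threshold

    def __init__(self, estimator):
        self.training = []              # sub-model training data (310)
        self.params = None              # parameter estimates in the descriptor (330)
        self.estimator = estimator      # callable: training data -> parameter values

    def estimate_parameters(self, data):             # step 600
        if data is None:                             # step 605: null input
            self.params = None                       # step 610: invalidate estimates
            self.training = []                       # step 612: reset training data
            return                                   # step 635: terminate
        self.training.extend(data)                   # step 615: update training data
        if len(self.training) >= self.MIN_SAMPLES:   # step 620: enough data?
            self.params = self.estimator(self.training)   # steps 625 and 630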
Referring now to FIG. 7, a flow diagram is shown illustrating a process for computing predictions in a sub-model component according to one embodiment of the present invention. Specifically, FIG. 7
depicts details of the prediction computation operation (step 515 in FIG. 5 and the result computation component 320 in FIG. 3) as performed with respect to a sub-model. In step 700, the process is
entered with the values of the inputs in the prediction request. Step 710 retrieves the values of the model parameters from the sub-model descriptor. Step 720 computes the prediction. At block 725,
the process terminates.
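The FIG. 7 counterpart, sketched as a standalone function (the additive layout of the descriptor's parameter values is an assumption carried over from the earlier sketches):

def compute_prediction(descriptor, key):     # step 700: enter with request inputs
    params = descriptor["params"]            # step 710: read the parameter values
    if params is None:
        raise ValueError("sub-model parameters not yet estimated")
    return params["mean"] + params.get("offsets", {}).get(key, 0.0)   # step 720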
Referring now to FIG. 8, a block diagram is shown illustrating a generalized hardware architecture of a computer system suitable for implementing the various functional components/modules of an
on-line adaptive prediction system employing dynamic management of multiple sub-models as depicted in the figures and explained in detail herein. It is to be understood that the individual components
of the on-line adaptive prediction system, namely, the model subsystems 120-1 through 120-M (FIG. 1), and their components (FIGS. 2 and 3), may be implemented on one such computer system, or on more
than one separate such computer systems. The other components shown in FIG. 1, e.g., end-user, model applications, model contexts and data access, may also be implemented on the same or other such
computer systems. Also, individual components of the subsystems and repositories may be implemented on separate such computer systems.
As shown, the computer system may be implemented in accordance with a processor 800, a memory 810 and I/O devices 820. It is to be appreciated that the term “processor” as used herein is intended to
include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other processing circuitry. The term “memory” as used herein is intended to include
memory associated with a processor or CPU, such as, for example, RAM, ROM, a fixed memory device (e.g., hard drive), a removable memory device (e.g., diskette), flash memory, etc. In addition, the
term “input/output devices” or “I/O devices” as used herein is intended to include, for example, one or more input devices, e.g., keyboard, for entering data to the processing unit, and/or one or
more output devices, e.g., CRT display and/or printer, for presenting results associated with the processing unit. It is also to be understood that the term “processor” may refer to more than one
processing device and that various elements associated with a processing device may be shared by other processing devices. Accordingly, software components including instructions or code for
performing the methodologies of the invention, as described herein, may be stored in one or more of the associated memory devices (e.g., ROM, fixed or removable memory) and, when ready to be
utilized, loaded in part or in whole (e.g., into RAM) and executed by a CPU.
Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those
precise embodiments, and that various other changes and modifications may be affected therein by one skilled in the art without departing from the scope or spirit of the invention. | {"url":"http://www.google.com/patents/US6937966?ie=ISO-8859-1&dq=6,952,563","timestamp":"2014-04-20T21:40:13Z","content_type":null,"content_length":"93831","record_id":"<urn:uuid:624e7af4-9bd0-43c6-ad87-4bf479c3ca86>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
Counting pathways
I need to count the number of ways, from start to end, without passing through the red spots.
You can move one step at a time: up or to the right.
There are 49 paths.
The numbers in each box of the Excel spreadsheet in the attachments show how many paths can go through each point. Do you see how it was made? Or would you like an explanation?
I would appreciate it if you could give me an explanation, in a combinatorial way, i.e., n choose k.
Say $p(x, y)$ is the number of paths to a specific square. For the bottom left square, $x = 0$ and $y = 0$, and for the upper right square, $x = 4$ and $y = 4$. Then

$p(x, y) = \begin{cases} 0, & \text{if square } (x, y) \text{ cannot be entered}\\ 1, & \text{if } (x, y) \text{ is the start square}\\ p(x-1, y) + p(x, y-1), & \text{otherwise} \end{cases}$
You can get to a square from only two other squares. The sum of the number of paths to those squares is the number of paths to that square.
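If you want to check the recurrence numerically, here is a small Python version of it (the blocked squares (1, 3) and (4, 1) are the ones that appear in the formula below; adjust them to match your diagram):

def count_paths(n=5, blocked=frozenset({(1, 3), (4, 1)})):
    p = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            if (x, y) in blocked:
                continue                 # p(x, y) stays 0: cannot be entered
            if (x, y) == (0, 0):
                p[x][y] = 1              # the start square
            else:
                p[x][y] = (p[x - 1][y] if x > 0 else 0) + (p[x][y - 1] if y > 0 else 0)
    return p[n - 1][n - 1]

print(count_paths())   # prints 49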
Then, if you turn it all $135^\circ$ clockwise, you will see it's starting to look like Pascal's triangle. And if it hadn't been for those two blocked squares, $p(x, y)$ would have been $= {{x+y}\choose{y}}$, because ${{x+y}\choose{y}}$ has almost the same properties as $p(x, y)$ has in this case. You do know that ${x\choose{y}} = {{x-1}\choose{y-1}} + {{x-1}\choose{y}}$? Let's say that if $(x, y)$ is outside the triangle, ${x\choose{y}} = 0$. I don't know if I'm the best to explain why, but in this case

$p(x, y) = {{x+y}\choose{y}} - {{1+3}\choose{3}}\cdot{{(x-1)+(y-3)}\choose{y-3}} - {{4+1}\choose{1}}\cdot{{(x-4)+(y-1)}\choose{y-1}} =$

$= {{x+y}\choose{y}} - 4\cdot{{x+y-4}\choose{y-3}} - 5\cdot{{x+y-5}\choose{y-1}}$
Since $x=4$ and $y=4$ in the top right square,
$p(x,\ y)\ =\ {8\choose{4}}\ -\ 4\cdot{{8-4}\choose{4-3}}\ -\ 5\cdot{{8-5}\choose{4-1}}\ =$
$=\ 70\ -\ 4\cdot{4\choose{1}}\ -\ 5\cdot{3\choose{3}}\ =$
$=\ 70\ -\ 4\cdot 4\ -\ 5\cdot 1\ =$
$=\ 49$
Now I got it right; I forgot to use zero indexing.
Last edited by TriKri; December 5th 2006 at 12:40 PM.
I'm sorry, but it's getting too complicated for me; we just started with this.
The first question was to solve this without any limitation on the squares.
So the way I did it was like this:
I can go RRRRUUUU from the start to the end
(R = right, U = up),
so I just computed the number of ways to arrange the letters.
Could you try to explain it to me with this kind of thinking? (Sorry for the trouble.)
Take a look at Quick's excel diagram. What it does is simulate the number of paths by calculating the count square by square. If you want to use a mathematical, non-iterative formula, you will have to use the combination function as I just did in my last post.
I will have to get back on this one. I don't seem to get the formula right.
I don't blame ya
I believe this is a graph-theory-related question (but because I am barely familiar with it I cannot help you).
Basically you want to find all the possible paths (that is what graph theorists call them), and not walks (which may go over the same edge).
So you set up a graph with 25 vertices and draw an edge between any two squares between which you are allowed to move. (That means the pink dots are isolated from the rest of the graph.) And there is a special technique for solving this problem.*)
*) One way is the adjacency matrix, but it gets huge.
**) Another way: there are certain algorithms related to graphs that enable one to find the solution. (Again, not familiar with them.)
I hate to be a pain, but there are an infinite number of paths. The problem doesn't specify a maximum path length, nor that you can't, for example, take a backward step. I'm assuming one or both of the two previous interpretations are correct, but the problem statement should mention some limitation of this kind.
There are 70 ways to arrange 4 R's and 4 U's. That is the total number of paths from start to finish without regard to the forbidden squares.
There are 4 ways to go from start to the leftmost forbidden square and 4 ways to go from that square to finish. Thus, there are 16 ways to go from start to finish passing through the first forbidden square.
Likewise, there are 5 ways to go from start to finish passing through the rightmost forbidden square.
Now note that 70 - (16 + 5) = 49. That removes the forbidden paths from the total.
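The same count in a few lines of Python, with the forbidden squares at the positions used above:

from math import comb

total = comb(8, 4)                    # arrangements of 4 R's and 4 U's: 70
via_left = comb(4, 3) * comb(4, 1)    # paths through the left forbidden square: 16
via_right = comb(5, 1) * comb(3, 3)   # paths through the right forbidden square: 5
print(total - via_left - via_right)   # 49

No monotone path can pass through both forbidden squares (one lies up-and-left of the other), so plain subtraction needs no inclusion-exclusion correction.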
{"url":"http://mathhelpforum.com/discrete-math/8436-counting-pathways.html","timestamp":"2014-04-19T11:22:35Z","content_type":null,"content_length":"83513","record_id":"<urn:uuid:7dd03779-72e5-4d2d-9b00-3bfd37e704cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
How do you write "A is the set of even whole numbers less than 12" in set-builder notation?
{"url":"http://openstudy.com/updates/5104ccf3e4b03186c3f99a7a","timestamp":"2014-04-19T10:24:51Z","content_type":null,"content_length":"83031","record_id":"<urn:uuid:5abfac02-a197-496e-a476-ac6cb3aee2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00422-ip-10-147-4-33.ec2.internal.warc.gz"}