Trimmed Least Squares Estimation in the Linear Model (1980)
by Raymond J. Carroll
Venue: J. Amer. Statist. Assoc
Citations: 30 - 0 self
author = {Raymond J. Carroll},
title = {Trimmed Least Squares Estimation in the Linear Model},
journal = {J. Amer. Statist. Assoc},
year = {1980},
pages = {828--838}
We consider two methods of defining a regression analogue to a trimmed mean. The first was suggested by Koenker and Bassett and uses their concept of regression quantiles. Its asymptotic behavior is completely analogous to that of a trimmed mean. The second method uses residuals from a preliminary estimator. Its asymptotic behavior depends heavily on the preliminary estimate; it behaves, in general, quite differently than the estimator proposed by Koenker and Bassett, and it can be rather inefficient at the normal model even if the percent trimming is small. However, if the preliminary estimator is the average of the two regression quantiles used with Koenker and Bassett's estimator, then the first and second methods are asymptotically equivalent for symmetric error distributions.
Key Words and Phrases: regression analogue, trimmed mean, regression quantile, preliminary estimator, linear model, trimmed least squares
David Ruppert is an Assistant Professor and Raymond J. Carroll an
3023 Probability and Measure - Billingsley - 1979
349 Regression quantiles - Koenker, Bassett - 1978
61 A note on quantiles in large samples - Bahadur - 1966
49 Robust Estimates of Location: Survey and Advances - Andrews, Bickel, et al. - 1972
44 One-Step Huber Estimates in the Linear Model - Bickel - 1975
42 A survey of sampling from contaminated distributions (in: Contributions to Probability and Statistics, I. Olkin et al., eds.) - Tukey - 1960
35 Robust statistical procedures - Huber - 1977
28 Adaptive robust procedures: A partial review and some suggestions for future applications and theory - Hogg - 1974
22 A new proof of the Bahadur representation of quantiles and an application - Ghosh - 1971
18 Asymptotic relations of M-estimates and R-estimates in linear regression model - Jureckova - 1977
13 Examination of residuals - Anscombe - 1961
13 Less vulnerable confidence and significance procedures for location based on a single sample: Trimming/Winsorization 1 (Sankhya A) - Tukey, McLaughlin - 1963
12 On estimating variances of robust estimators when the errors are asymmetric - Carroll - 1979
11 Do robust estimators work with real data? - Stigler - 1977
9 Robust estimation - Huber - 1981
7 Confidence interval robustness with long-tailed symmetric distributions - Gross - 1976
6 On some analogues to linear combinations of order statistics in the linear model - Bickel - 1973
4 Robust estimates of location: symmetry and asymmetric contamination - Jaeckel - 1971
3 Almost Sure Properties of Robust Regression Estimates (unpublished manuscript) - Carroll, Ruppert - 1979
2 An asymptotic representation of trimmed means with applications - de Wet, Venter - 1974
2 A Monte-Carlo swindle for estimators of location - Gross - 1973
2 A Student Oriented Preprocessor for MPS/360 - McKeown, Rubin - 1977
1 Robust regression by trimmed least squares estimation - Ruppert, Carroll - 1978
Hilshire Village, TX SAT Math Tutor
Find a Hilshire Village, TX SAT Math Tutor
I have been a private math tutor for over ten (10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including SAT math, calculus, geometry, algebra 1
...I have scored a 5 on both AP English tests, as well as the US History, Government, and Calculus exams (and a 4 on Art History). This isn't because I'm a genius--it's because I'm a good test
taker, and I can help you become a better one as well. Whether you want to try to make dramatic gains on a...
32 Subjects: including SAT math, Spanish, reading, writing
...I have published a number of studies in scientific journals and have written successful grant proposals totaling over $60M. I can teach you how to write in technical format, including standard
reference and graphic formats. I have taught statistics and experimental design at the college level f...
20 Subjects: including SAT math, writing, algebra 1, algebra 2
...I took several English classes at Yale University, and as a result, I've written several papers at the college level. What's more, the classes I took through my Psychology major have given me
a great deal of experience writing research papers and papers reviewing scientific literature. I have a...
22 Subjects: including SAT math, Spanish, writing, algebra 1
...While in college, I worked for my professors and tutored college students in Calculus I, College Math, and Geometry. My experience as a teacher taught me that not every child learns math
the same way. I used a variety of hands-on learning experiences in my classroom to make math comprehension easier for the diverse student population I taught.
24 Subjects: including SAT math, calculus, statistics, biology
1. Squares and Rectangles
Print out this page.
Find the area of the squares and rectangles below. Write the formula for:
Area [Square] = [_______________] Area [Rectangle] = [_______________]
Notice that these figures are a physical model for multiplication. Elementary students should have this concrete model in the early grades.
Copyright Margo Lynn Mankus
Colonia Prealgebra Tutor
...Helping you voice those questions and overcome those obstacles is an important part of what I do. Let me help you learn how and why the pieces fit together. You will gain the knowledge and
confidence you need to succeed!
10 Subjects: including prealgebra, calculus, geometry, statistics
...I have prepared students for integrated algebra and geometry regents. As of now, I am tutoring junior high students for SHSAT and a sophomore for PSAT. I am patient with my students and help
them build strong basic skills which will help them solve complicated problems. I have helped students prepare for integrated algebra and geometry regents.
15 Subjects: including prealgebra, calculus, geometry, ESL/ESOL
I'm a recent graduate of Pace University's Honors College with a Bachelor of Science degree in Biochemistry. I have a lot of experience working with children, as I've been a boy scout counselor
for three years now, working with kids in the 11-17 year old age range. I began tutoring two years ago, first with a family friend and slowly expanding, and have really enjoyed it.
37 Subjects: including prealgebra, English, chemistry, reading
...I have assisted many others in familiarizing themselves with Word and becoming users. I am extremely familiar and able to assist students in the Microsoft PowerPoint program. I have been a user
of MPP for over ten years.
29 Subjects: including prealgebra, English, writing, reading
...Additionally, the way I try and tutor facilitates critical thinking which can benefit students no matter what their educational and life's path and goals. I have a PhD and did my thesis work
studying the molecular mechanisms of learning and memory. I have four kids of my own, and often do outreach as a guest science speaker in their schools, which is awesome.
11 Subjects: including prealgebra, chemistry, physics, algebra 1
Are You Old Enough To Remember New Math?
DawnT (South FL) | Tue, 01/24/12 10:20 PM
When I was in the first grade, ~'61, during the first few months of the year we were abruptly issued workbooks. Up in front of the room there were these two stick figures (counting men) placed that had removable fingers: Ones man and Tens man. We were going to relearn how to count! From that point on until the 6th grade we were in some sort of mathematical twilight zone. In retrospect, the teachers from the beginning were clueless. Parents were outraged that we were functionally illiterate in basic mathematics. Conventional addition, subtraction, division and multiplication that one would need on a daily basis had been replaced with pages of expanded notation; simple verbal problems were turned into convoluted narratives involving sets. Johnny no longer had 50 cents in his pocket; it became a labeled set of five tens and zero one-cent units. Oh, it got better. We're assuming the "operation" was being transacted in base 10. On another day, we might be adding in base 8 or base 3. A simple addition problem would have to be accompanied with Venn diagrams showing the intersection of two sets of integers. By the time we were in 4th grade they finally got around to multiplication, which was a fiasco in itself. We had to draw a grid just to multiply two numbers. Everything was absolutely absurd and parents were totally freaked out after a few years. Long division became so weird, with columns of numbers on the side with "remainders". Jr high became a watershed year. We were totally dysfunctional with day-to-day math, and 7th and 8th was mostly remedial. While the whole idea was to prepare one for higher mathematics, none of it was useful really unless you ventured into pre-calculus and Trig as a college prep student.
How many of you old timers had the pleasure of being indoctrinated in the New Math to spite the Russkies while practicing "Duck and Cover"?
6star (West Peoria, IL) | Tue, 01/24/12 10:59 PM
Anyone who got the new math "bug" is a youngster. I graduated from high school ten years earlier (1951).
Treetop Tom (Baltimore, MD) | Tue, 01/24/12 11:05 PM
Musical satirist (and Harvard-trained mathematician) Tom Lehrer once stated that "Base 8 is just like Base 10, if you're missing 2 fingers." His song "New Math" was one of his best.
MetroplexJim (McKinney, TX) | Wed, 01/25/12 9:48 AM
Frankly, I believe the introduction of the "new math" to have been a communist plot. That and allowing calculators and now computers in the elementary math classroom have rendered the vast majority of Americans now under 50 mathematically illiterate, as they were made to try running and leaping before they were able even to crawl.
mar52 (Marina del Rey, CA) | Wed, 01/25/12 2:47 PM
Eighth grade, 1966.
The teachers HAD to teach us new math. The new math books had just come out and the teachers were one chapter ahead of the students.
It was a joke, to be discontinued later. A year of wasted learning.
But remember... addition is commutative.
fishtaco (Roachdale, IN) | Wed, 01/25/12 4:00 PM
I do recall it. But sure don't remember anything about it. Was there something to do with division that was done down the side of the problem?
plb | Wed, 01/25/12 8:28 PM
MetroplexJim wrote: "Frankly, I believe the introduction of the 'new math' to have been a communist plot. That and allowing calculators and now computers in the elementary math classroom have now rendered the vast majority of Americans now under 50 to be mathematically illiterate as they were made to try running and leaping before they were able even to crawl."
I keep reading that we need to have more computers in elementary classrooms because we are not turning out enough computer programmers and engineers and have to import them from China and India. No one seems to realize that schools in China and India do not have computers in their classrooms. Their students are studying academic subjects, not punching away on keyboards.
Louis (Henderson, KY) | Wed, 01/25/12 8:58 PM
I remember new math, and now I can't do math problems because of it.
Mosca (Mountain Top, PA) | Wed, 01/25/12 10:49 PM
Hey, I liked new math. It fit the way I understood things. I started it in '60 or '61, and I just "got it". To this day I keep up with stuff like Fermat's last theorem (just solved recently), the Banach-Tarski paradox, stuff like that.
carlton pierre (Knoxville, TN) | Wed, 02/1/12 8:22 AM
Remember it well. Seems to me it was around 3rd or 4th grade when we got into it. I started 1st grade in 1960. Seemed very confusing to me at the time.
FriedClamFanatic (West Chester, PA) | Thu, 02/2/12 3:27 AM
I stuck with the old Math...........let's see, (XII+VI)/III = VI. Ave atque vale!
Mosca (Mountain Top, PA) | Thu, 02/2/12 11:17 AM
FriedClamFanatic wrote: "I stuck with the old Math...........let's see, (XII+VI)/III = VI. Ave atque vale!"
Quidquid latine dictum sit, altum videtur.
Michael Hoffman (Gahanna, OH) | Thu, 02/2/12 11:30 AM
The only thing I know about New Math is that my eldest daughter came home from junior high one day with this new book and asked for help. Her mother and I looked at it and handed it right back to her. The next year she was back to Old Math, and I was just as much help then.
DawnT (South FL) | Thu, 02/2/12 12:39 PM
Pretty much what happened to me. Worse yet, when parents of the kids that I went to school with raised hell that they were unable to participate in working with their kids at home on their homework, the official response from the school was that they didn't want the parents confusing the kids with traditional math or interfering with their homework. The teachers themselves were clueless, and the only two people that seemed to have any idea what was going on were two young, twenty-something Wunderkind academics that they brought down from one of the universities up north to oversee the math program. Both were seemingly unable to communicate coherently with the parents, teachers, or even the school's board. I even remember overhearing a teacher mention that with the two of them, it was like a robot talking to a robot. In today's terms, it sounds like they turned the keys over to two Aspies. They eventually sent them packing, or they left during the summer after 5th grade.
CCinNJ (Bayonne, NJ) | Thu, 02/2/12 12:59 PM
My Mother tried to teach me some strange counting system by tapping in the 70s. I don't know if it was new math. It did work for counting cards in the 80s.
Thanks Mom!!!!!
DawnT (South FL) | Thu, 02/2/12 2:41 PM
I remember an effort to get a Korean abacus method into the early grades during the early 70's that wasn't successful. There was also some strange card-like tool (the kids shuffled little windows with sliding blinds that had colored dots underneath them) that I remember seeing on some talk shows, with tiny tots doing big-number math much like they did earlier with the abacus. That wasn't new math, though, and its primary criticism was that the kids were doing a rote operation, like keying numbers into a calculator years later, without having any understanding of how they got the results.
CCinNJ (Bayonne, NJ) | Thu, 02/2/12 2:54 PM
I learned the abacus. Honestly...I was mistaken for being partly Asian when I was young. I get that still...but not as much. The abacus is fun!!!
jmckee (Batavia, OH) | Fri, 02/3/12 9:08 AM
FriedClamFanatic (West Chester, PA) | Fri, 02/3/12 6:22 PM
OMG....I had forgotten about that!......Thanks.....................Guess I was off poisoning pigeons in the park (another good song of his!)
MetroplexJim (McKinney, TX) | Sat, 02/4/12 10:23 AM
plb wrote (quoting the "communist plot" post above): "I keep reading that we need to have more computers in elementary classrooms because we are not turning out enough computer programmers and engineers and have to import them from China and India. No one seems to realize that schools in China and India do not have computers in their classrooms. Their students are studying academic subjects, not punching away on keyboards."
Yes. There is an unfortunate general misconception that the more we spend on education, the better the results will be. What good does it do a kid to give him a computer before he is able to read, or a calculator before he can multiply and divide?
bartl (New Milford, NJ) | Sun, 02/5/12 4:05 PM
DawnT wrote: "How many of you old timers had the pleasure of being indoctrinated in the New Math to spite the Russkies while practicing 'Duck and Cover'?"
Just want to point out that, while "Duck and Cover" would do nothing at ground zero of a nuclear attack, most of the deaths are due to the shockwave and being hit by flying debris. Therefore, "duck and cover" would have greatly increased your chances of surviving a nuclear attack that was 10-100 miles away.
Summary: Twisted Verma modules and their quantized analogues
Henning Haahr Andersen
1. Introduction
In [AL] we studied twisted Verma modules for a finite dimensional semisimple complex Lie algebra g. In fact, we gave three rather different constructions which we showed lead to the same modules. Here we shall briefly recall one of these approaches - the one based on Arkhipov's twisting functors [Ar]. We then demonstrate that this construction can also be used for the quantized enveloping algebra U_q(g).
In analogy with their classical counterparts, the quantized twisted Verma modules belong to the category O_q for U_q(g) and have the same composition factors as the ordinary Verma modules for U_q(g). They also possess Jantzen type filtrations with corresponding sum formulae.
I would like to thank Catharina Stroppel and Niels Lauritzen for some very helpful comments.
2. The classical case
2.1. Let h denote a Cartan subalgebra of g and choose a set R^+ of positive roots in the root system R attached to (g, h). Then we have the usual triangular decomposition g = n^- ⊕ h ⊕ n^+ of g, with n^+ (respectively n^-) denoting the nilpotent
Simple Projectile Motion 3 – Between roof and ground
Let’s try a much shorter range problem. The ship and the fort in the previous post were shooting at each other from 10 miles apart, and the ship could not return fire for about a 220 yard interval.
Reduce the muzzle velocity to 20 meter/second; change h to -10 meter for the high ground firing at the low ground. With these numbers, this situation is more like two guys throwing rocks at each
other; one of them is on the roof of a 2-story building and the other is at street level.
Call them “roof” and “ground” forces.
Here, then, are the two parameters for the roof firing upon the ground force.
Since pictures are quite powerful, let me show you the results up front:
I claim that the roof forces can hit the ground forces 50 m away, while the ground forces have to get within 30 m to return fire.
We have a formula for the angle of launch which leads to maximum range for the roof forces.
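For anyone reproducing the numbers without the screenshots, here is a minimal sketch of that formula and the resulting maximum range, assuming the standard closed forms (they match every figure quoted in this post, but they are my stand-in, not the notebook's own derivation):

(* Optimal launch angle and maximum range for launch speed vo, with the
   landing point at height h relative to the launch point (h < 0 when the
   target sits below the launcher). Assumed closed forms. *)
alphaMax[vo_, h_, g_] := ArcTan[vo/Sqrt[vo^2 - 2 g h]]
rangeMax[vo_, h_, g_] := (vo/g) Sqrt[vo^2 - 2 g h]

alphaMax[20, -10, 9.8]/Degree  (* ≈ 39.3 degrees for the roof force *)
rangeMax[20, -10, 9.8]         (* ≈ 49.8 m, the 50 m quoted below *)
alphaMax[20, 10, 9.8]/Degree   (* ≈ 54.5 degrees for the ground force *)
rangeMax[20, 10, 9.8]          (* ≈ 29.2 m, the 29 m quoted below *)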
The angle for maximum range is noticeably less than 45°. Let me remind you what those other rules are that I used:
The first rule is the solution for the angle of launch which leads to maximum range. We worked that out in the previous post, using calculus. The second rule merely sets the acceleration of gravity.
Now we put that angle into our equations for sx and sy at maximum range… solve the second equation for the time(s) T when the projectile is at h = -10 m… and then get the horizontal distance for each
of those times from the first equation…
That is, the projectile is at -10 m vertically at approximately –0.6 and +3.2 seconds, and the two distances from the roof are approximately –10 m and +50 m). Let’s look at that. I need the
trajectory, i.e. the position vector as a function of time. Here it is with units:
Let me remind you what all those pieces were. fe2 was the general equations for the x- and y-components of position and velocity:
BC and BC2 are boundary conditions, to set the initial position to (0,0) and to express the initial velocity in terms of initial speed vo and angle of launch α:
After applying the boundary conditions we have
The [[{1,2}]] in the following screenshot selects the first two equations – the curly brackets {} inside are crucial; it’s they that ask for elements of a list, in this case the first two.
Then s1, ps3, and cs, which we’ve seen already, set the angle, set the problem-specific parameters, and set the acceleration of gravity g.
Anyway, I want to plot the right hand sides of the final form of those two equations.
I have one last thing to do: get rid of the units (“strip” is a rule that sets Meter and Second to 1 – in fact, it also sets Feet and Foot to 1… in case I use English units, and because Mathematica®
recognizes both Foot and Feet):
As a quick check, one would want the maximum altitude — and don’t call it h, because we’re using h already! Oh, I’ll remind you that we can get the maximum altitude by equating specific potential
energy gh (i.e. mass m = 1) to the specific kinetic energy due to the initial vertical component of velocity. (We saw that in the first projectile post.)
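Spelled out, that balance gives (using the roof force's optimal angle from above):

$$g\,h_{max} = \tfrac{1}{2}(v_0 \sin\alpha)^2 \quad\Rightarrow\quad h_{max} = \frac{(v_0 \sin\alpha)^2}{2g} \approx \frac{(20 \sin 39.3^\circ)^2}{2 \times 9.8} \approx 8.2\ \text{m}$$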
So the maximum altitude is 8 m above the roof.
Rather than type in the values for times and distances, I really should let Mathematica compute them (called t3, t4, x3 for no particular reason):
The red dot marks the position of the negative-time solution. The point is that there is nothing special about time = 0 on the trajectory. Yes, we want it to be the time of launch from the roof – but
this projectile could have been launched from the red dot at the negative-time solution. And no, it would not have the same velocity or angle of launch at the red dot as at the roof – but it could be
launched to traverse the same path as if fired from the roof. If you couldn’t see the launch point, for example, you couldn’t tell whether the projectile came from the roof or from the red dot – or
any other out-of-sight point on the trajectory.
I’m curious. Is that trajectory maximum range for a launch at the red point? To answer that, I need the velocity at launch. (Equivalently, I need the tangent vector of the trajectory at launch.) Here
are the velocity equations… and I plug in α, g, and vo – and change t to T… and then plug in the negative time solution…
Now I have equations in t – but a solution in T:
and it would be cleaner to just work with the RHS, changing t to T (otherwise sx[t] becomes sx[–0.633682], which is OK but ugly):
Just the fact that the two components are different tells me that the tangent vector is not at 45°: this trajectory is not maximum range for a launch from the ground.
Let me get a clean picture of the trajectory. Note that I have changed the coordinate system simply by adding a constant vector {0,10} to the computed trajectory; I have thereby moved the launch
point to (0,10) – my origin is now at the base of the building. Note also that I have explicitly set the image size and the plot range – and I changed the aspect ratio to correspond better with the
chosen plot range.
That’s half the problem. Now set h = +10 m for the ground force firing upon the roof.
Our initial angle becomes:
The angle for maximum range is significantly more than 45°.
Now we put that angle into our equations for sx and sy at maximum range. As before, we solve the second equation for time T, getting two answers, and then we solve the first equation for the two
corresponding distances.
That is, the projectile is at +10 m vertically at two times (approx 0.8 and 2.5 seconds) at two distances from the launch (approx 9 m and 29 m). The roof can hit the ground 50 m away; the ground
force can return fire 29 m away. That’s not good: the ground force is vulnerable and unable to return fire for 21 m.
Let’s see that. I need the trajectory, i.e. the position as a function of time. Here it is with units:
so I want to plot the right hand sides.
Get rid of the units using “strip” to set Meter and Second to 1):
Just for the fun of it, again, get the maximum altitude:
Note that the maximum altitude isn’t all that much over 10 m.
Anyway, here’s what the ground force is doing:
As before, I use the red dot to mark the first time when the projectile is at the target altitude. In this case it’s part of the physical solution (“on the way up”) whereas in the other case it was
at a negative time.
So. Would it be fair to say that the only reason why the ranges are different is because one is in the air longer than the other? Maybe. Almost. Essentially.
Maybe it would be too pedantic to say that that’s a very, very good first approximation; there is another, less significant factor: we change the angle of launch, too.
Let me close by putting the two trajectories on the same picture. In addition to using the same plot range and image size and aspect ratio, I have reflected and shifted the trajectory. The expression
{-1,1} traj is a termwise product: it multiplies the x-component of traj by -1 and the y-component by +1. I also added a constant vector {x3,0} to the trajectory, so that the launch point is now
(x3,0). In other words, my origin is at the base of the building, just as it was for the other revised plot.
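(A one-line check of that termwise arithmetic: {-1, 1} {3, 4} + {10, 0} evaluates to {7, 4}; the x-component is reflected and shifted, the y-component untouched.)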
Now I can put the two images together, and end as I began:
In cryptography, an asymmetric key algorithm uses a pair of different, though related, cryptographic keys to encrypt and decrypt. The two keys are related mathematically; a message encrypted by the
algorithm using one key can be decrypted by the same algorithm using the other. In a sense, one key "locks" a lock (encrypts); but a different key is required to unlock it (decrypt).
Some, but not all, asymmetric key cyphers have the "public key" property, which means that there is no known effective method of finding the other key in a key pair, given knowledge of one of them.
This group of algorithms is very useful, as it entirely evades the key distribution problem inherent in all symmetric key cyphers and some of the asymmetric key cyphers. One may simply publish one
key while keeping the other secret. They form the basis of much of modern cryptographic practice.
A postal analogy
An analogy which can be used to understand the advantages of an asymmetric system is to imagine two people, Alice and Bob, sending a secret message through the public mail. In this example, Alice has
the secret message and wants to send it to Bob, after which Bob sends a secret reply.
With a symmetric key system, Alice first puts the secret message in a box, and then locks the box using a padlock to which she has a key. She then sends the box to Bob through regular mail. When Bob
receives the box, he uses an identical copy of Alice's key (which he has somehow obtained previously) to open the box, and reads the message. Bob can then use the same padlock to send his secret reply.
In an asymmetric key system, Bob and Alice have separate padlocks. Firstly, Alice asks Bob to send his open padlock to her through regular mail, keeping his key to himself. When Alice receives it she
uses it to lock a box containing her message, and sends the locked box to Bob. Bob can then unlock the box with his key and read the message from Alice. To reply, Bob must similarly get Alice's open
padlock to lock the box before sending it back to her. The critical advantage in an asymmetric key system is that Bob and Alice never need send a copy of their keys to each other. This substantially
reduces the chance that a third party (perhaps, in the example, a corrupt postal worker) will copy a key while it is in transit, allowing said third party to spy on all future messages sent between
Alice and Bob. In addition, if Bob were to be careless and allow someone else to copy his key, Alice's messages to Bob will be compromised, but Alice's messages to other people would remain secret,
since the other people would be providing different padlocks for Alice to use.
Actual algorithms - two linked keys
Fortunately cryptography is not concerned with actual padlocks, but with encryption algorithms which aren't vulnerable to hacksaws, bolt cutters, or liquid nitrogen attacks.
Not all asymmetric key algorithms operate in precisely this fashion. The most common have the property that Alice and Bob own two keys; neither of which is (so far as is known) deducible from the
other. This is known as public-key cryptography, since one key of the pair can be published without affecting message security. In the analogy above, Bob might publish instructions on how to make a
lock ("public key"), but the lock is such that it is impossible (so far as is known) to deduce from these instructions how to make a key which will open that lock ("private key"). Those wishing to
send messages to Bob use the public key to encrypt the message; Bob uses his private key to decrypt it.
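To make the two-key idea concrete, here is a toy example of RSA-style arithmetic with deliberately tiny primes (real keys use primes hundreds of digits long; this sketch is an illustration of the arithmetic only and would never be secure):

# Toy RSA key pair -- illustrative only, far too small to be secure.
# Python 3.8+; pow(e, -1, phi) computes the modular inverse.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: 2753, since e*d = 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt: 65^17 mod 3233 = 2790
recovered = pow(ciphertext, d, n)  # only the private-key holder decrypts: 65
assert recovered == message

Publishing (n, e) lets anyone encrypt, while recovering d from (n, e) requires factoring n; with tiny numbers like these that is trivial, which is exactly why real moduli are enormous.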
Of course, there is the possibility that someone could "pick" Bob's or Alice's lock. Unlike the case of the one-time pad or its equivalents, there is no currently known asymmetric key algorithm which
has been proven to be secure against a mathematical attack. That is, it is not known to be impossible that some relation between the keys in a key pair, or a weakness in an algorithm's operation,
might be found which would allow decryption without either key, or using only the encryption key. The security of asymmetric key algorithms is based on estimates of how difficult the underlying
mathematical problem is to solve. Such estimates have changed both with the decreasing cost of computer power, and with new mathematical discoveries.
Weaknesses have been found in promising asymmetric key algorithms in the past. The 'knapsack packing' algorithm was found to be insecure when an unsuspected attack came to light. Recently, some
attacks based on careful measurements of the exact amount of time it takes known hardware to encrypt plain text have been used to simplify the search for likely decryption keys. Thus, use of
asymmetric key algorithms does not ensure security; it is an area of active research to discover and protect against new and unexpected attacks.
Another potential weakness in the process of using asymmetric keys is the possibility of a 'Man in the Middle' attack, whereby the communication of public keys is intercepted by a third party and
modified to provide the third party's own public keys instead. The encrypted response also must be intercepted, decrypted and re-encrypted using the correct public key in all instances however to
avoid suspicion, making this attack difficult to implement in practice.
The first known asymmetric key algorithm was invented by Clifford Cocks of GCHQ in the UK. It was not made public at the time, and was reinvented by Rivest, Shamir, and Adleman at MIT in 1977. It is
usually referred to as RSA as a result. RSA relies for its security on the difficulty of factoring very large integers. A breakthrough in that field would cause considerable problems for RSA's
security. Currently, RSA is vulnerable to an attack by factoring the 'modulus' part of the public key, even when keys are properly chosen, for keys shorter than perhaps 700 bits. Most authorities
suggest that 1024 bit keys will be secure for some time, barring a fundamental breakthrough in factoring practice or practical quantum computers, but others favor longer keys.
At least two other asymmetric algorithms were invented after the GCHQ work, but before the RSA publication. These were the Ralph Merkle puzzle cryptographic system and the Diffie-Hellman system. Well
after RSA's publication, Taher Elgamal invented the Elgamal discrete log cryptosystem which relies on the difficulty of inverting logs in a finite field. It is used in the SSL, TLS, and DSA standards.
A relatively new addition to the class of asymmetric key algorithms is elliptic curve cryptography. While it is more complex computationally, many believe it to represent a more difficult
mathematical problem than either the factorisation or discrete logarithm problems.
Practical limitations and hybrid cryptosystems
One drawback of asymmetric key algorithms is that they are much slower (factors of 1000+ are typical) than 'comparably' secure symmetric key algorithms. In many quality crypto systems, both algorithm
types are used; they are termed 'hybrid systems'. PGP is an early and well-known hybrid system. The receiver's public key encrypts a symmetric algorithm key which is used to encrypt the main message.
This combines the virtues of both algorithm types when properly done.
We discuss asymmetric ciphers in much more detail later in the Public Key Overview and following sections of this book.
Complicated Maxwell-Boltzmann Distribution Integration
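For context, the algebra below follows from the standard mean-speed integral over the Maxwell-Boltzmann distribution (presumably the starting point here, which the thread does not show):

[tex] \langle v \rangle = 4\pi \left(\frac{m}{2\pi kT}\right)^{3/2} \int_0^\infty v^3 e^{-mv^2/2kT}\, dv, \qquad \int_0^\infty v^3 e^{-av^2}\, dv = \frac{1}{2a^2} [/tex]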
From this, I have since got:
[tex] (\frac{2 \pi}{\pi ^{3/2}})(\frac{m^{3/2}}{m^2})(\frac{(2kT)^2}{(2kT)^{3/2}}) [/tex]
Which I cancelled down to:
[tex] (\frac{2 \pi}{\pi ^{3/2}})(\frac{1}{\sqrt{m}})(\sqrt{2kT}) [/tex]
Does this look right?
Well, continuing the cancellation of pi in the last expression, one obtains
[tex] \left(\frac{2}{\pi^{1/2}}\right)\left(\frac{1}{\sqrt{m}}\right)(\sqrt{2kT}) [/tex],
Then one can bring pi and m under the square root, placing them in the denominator beneath the numerator 2kT,
[tex] 2 \sqrt{\frac{2kT}{\pi m}} [/tex]
which is the same as
[tex] \sqrt{\frac{8kT}{\pi m}} [/tex] which is correct.
And now that you've gone through this exercise, one should also try the relationship between mean kinetic energy and gas temperature, or <v
Re: Dan Tow's SQL formula
From: Yong Huang <yong321_at_yahoo.com> Date: Wed, 25 Feb 2009 21:49:58 -0800 (PST) Message-ID: <735246.85143.qm_at_web80601.mail.mud.yahoo.com>
[Resend. I'm forced to trim quoted text down when posting a message. Complete original message can be found at
> To better my understanding I did a test with more familiar table which is
> scott.emp table with following query:
> select sum(count(mgr)*count(mgr)) / (sum(count(mgr))*sum(count(*)))
> from emp
> group by mgr
> SUM(COUNT(MGR)*COUNT(MGR))/(SUM(COUNT(MGR))*SUM(COUNT(*)))
> ----------------------------------------------------------
> .225274725
> So I got a selectivity of roughly 0.225. Does not sound right to me because
> there are 6 managers and 13 employees (14 but one has no manager)
> My question is really, does anyone understand this formula and actually use
> it?
> If so please throw some lights :-)))
> Thank you all
> Alex
I don't know the answer. But Dan Tow's formula looks very much like Jonathan Lewis' calculation of density when he describes histograms (see his book on p.172), or Ari Mozes's patent 6732085 (http://
www.freepatentsonline.com/6732085.html) where he says "density can be calculated as the sum of the square of the repetition counts for non-popular values divided by the product of the number of rows
in the table and the number of non-popular values in the table", which is kind of beyond my knowledge.
Unfortunately, the number you get by applying Dan's formula to scott.emp, .225274725, is NOT the density for the mgr column, which is .038461538 on my database. But it's close to index selectivity
(ix_sel) shown in the 10053 trace when an index is created on the mgr column and a query with "where mgr = <some number>" is parsed.
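For what it's worth, the .225274725 can be reproduced by hand. Assuming the stock scott.emp demo rows, the per-manager counts are all the formula needs:

select mgr, count(*) from emp group by mgr;
-- stock demo data: mgr 7698 -> 5, 7839 -> 3, 7566 -> 2,
-- 7782 -> 1, 7788 -> 1, 7902 -> 1, plus KING's row with NULL mgr
-- Dan Tow's expression is then sum(c_i^2) / (sum(c_i) * total_rows)
--   = (25 + 9 + 4 + 1 + 1 + 1) / (13 * 14)
--   = 41 / 182 = .225274725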
Yong Huang
Received on Wed Feb 25 2009 - 23:49:58 CST
Discrete Math Tutors
Chicago, IL 60613
PhD Student Available to Tutor Middle School through Graduate Math
...I work well with students from middle school through college and I can tutor all K-12 math, including college Calculus, Probability, Statistics, Discrete Math, Linear Algebra, and other subjects. I have flexible days and afternoons, and I can get around Chicago...
Offering 10+ subjects including discrete math
Why most charity fundraisers cause harm | 80,000 Hours
Why most charity fundraisers cause harm
Doing harm when you think you’re doing good - part 2
If you become a fundraiser, you won't raise extra money for charity. Rather, you'll shift money from one charity to another.
Suppose Oxfam spends an extra £1 on fundraising. We’d expect this to raise them at least £4^1. So, it looks like they’ve generated an extra £3 for charity. Great!
But what would have happened if they hadn’t spent that £1? If this was an easy opportunity to raise money, then another charity would take it instead. Charities as a whole are raising about as much
money as they can from the public. This is what we’d expect. Charities are aiming to do as much for their cause as possible, so if there were easy money to raise, they’d take it. And intuitively,
people have a relatively stable 'charity budget'. They avoid future giving once they've 'done their part.'
This means that Oxfam wasn’t actually generating new money for charity by doing more fundraising, rather, it was shifting it from somewhere else. So the extra fundraiser wasn’t doing any good^2.
But it gets worse.
If you graph how cost-effective charities are, you’d expect to find a curve a bit like the one on my very basic graphic below^3:
The key point is that the median is significantly less than the mean. In other words, the effectiveness of the majority of charities is less than the effectiveness of the average charity. That sounds
a bit odd, but it’s just because the average effectiveness is pulled up a long way by the small number of really good charities at the top. (Similarly, the average wage at a company is normally
higher than what the majority of the staff are paid).
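(For a concrete sense of the gap: if effectiveness really is roughly log-normally distributed, as the DCP2 health data in the notes below suggests, then median = e^μ while mean = e^(μ + σ²/2). So a mean of 25 against a median of 5 corresponds to σ² = 2 ln 5 ≈ 3.2; the mean sits five times higher purely because of the spread.)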
This means that most fundraisers at the margin are shifting money to charities that are less effective than the average charity. So, they are reducing the overall effectiveness of the charity sector.
They are actually reducing how much good gets done!
It’s pretty likely that the effectiveness of the median charity^4 is significantly less than the mean. It could easily be as low as one fifth^5. If it were one fifth, then even if every £1 of
fundraising only caused 80p to move from the mean to the median, it would negate the good done by the £4 spent by the median. See below for the calculation^6. So, even if most of the money raised
wouldn’t have been given by the public otherwise, no good is being done.
In fact, we’ve just seen that extra charity fundraising doesn’t cause much extra money to be given to charity. Most or perhaps almost all of the money is just shifted from other charities. Assuming
the money comes from the mean charity, then the majority of fundraisers are shifting money from more effective causes to less, significantly reducing how much good is done by charity overall. The
majority of fundraisers look like they’re doing good, but taking into account the indirect effects could mean they’re actually making the world worse off^7.
Before we get carried away, it’s important to note that that fundraising as a whole is still doing good^8. Moreover, if you’re a fundraiser, and you’ve got no idea how effective your charity is, then
you should expect it to have the mean effectiveness (not median). That means that if you become a fundraiser you should expect to be doing good (or at worst be neutral). If you’re working for a
charity that’s more effective than the mean, then you’re doing a lot of good.
If, on the other hand, you’ve got reason to think that your charity is less effective than the mean, then please stop!
The reverse, however, is true for the minority of fundraisers who work for charities that are more cost-effective than the mean. They’re shifting money from less effective charities to more effective
ones, potentially doing a huge amount of good.
If you’d like to find out more about effective giving, check out our page.
Thanks to Will MacAskill for the inspiration
References and notes
• The nation's charity budget is equal to about 1.1% of GDP: http://www.philanthropyuk.org/resources/us-philanthropy
• Of course, there may still be ways to significantly increase how much is given to charity in total. It's just that it would require a new approach. In fact, we're very hopeful that 80,000 Hours and GWWC are one of these new approaches.
• This has direct empirical support. The DCP2 found that health interventions were log-normally distributed. The most effective intervention generated 300 DALYs per $1000, while the least generated only 0.02. The median intervention generated about 5. The mean generated about 25. There is a difference, however, between the distribution of interventions and the distribution of charity effectiveness. If we assume that charities choose interventions randomly (which might not be so bad an assumption), then the distributions would be the same. However, it seems that charities care at least a little about effectiveness. This will mean that the distribution of charities will move further to the right. We'd expect, however, for it to still have that kind of shape.
• The calculation referred to in the text:
• Let the median charity have an effectiveness of 1 unit/£, where ‘1 unit’ is just whatever is of value in the area being considered, for instance ‘healthy people.’
• The mean charity has an effectiveness of 5 unit/£
• The median spends £1 on fundraising, and raises £4
• £3.20 is money that wouldn’t have been given otherwise, 80p is moved from the mean charity to the median
• In reality, the median charity spends the (4-1) = £3 of ‘profit’, creating 3x1 = 3 units of value
• But if they hadn’t fundraised, then the mean charity would have had an extra 80p. It would have cost them 20p to raise this money, so they would have had an extra 60p to spend. This would have
produced 0.6x5=3 units of value
• So, no good has been done overall
More realistic figures:
• The median spends £1 on fundraising, and raises £4
• £0.50 is money that wouldn't have been given otherwise; £3.50 is moved from the mean charity to the median
• The £3 of 'profit' creates 3x1 = 3 units of value at the median charity
• But if they hadn't fundraised, then the mean charity would have had an extra £3.50. Subtracting the fundraising cost of 90p means it would have produced 2.6x5 = 13 units of value.
• So, the median charity is overall causing the loss of 13 - 3 = 10 units of value. Impressively, this loss is over three times as large as the good they apparently do directly.
1. According to the Institute of Fundraising's Fundratios survey: http://80000hours.org/blog/92-why-don-t-charities-spend-more-on-fundraising ↩
2. More accurately, additional fundraising seems to cause additional giving of about £1.9, rather than £4, so the extra ‘profit’ to the charity sector is only 90p per £1. It’s not clear that this
can go much lower, due to the risks of fundraising. So, fundraisers probably cause some extra money to be donated to charity, but it’s much less than it looks. ↩
3. We’d expect the distribution of interventions theoretically to be log-normal because most characteristics that make for an effective intervention will be distributed normally. The total
effectiveness will arise from the product of all these characteristics. Multiplying normal distributions gives you a log-normal distribution. ↩
4. Or more accurately, the average effectiveness of all charities that are less effective than the mean. By using the median, we're making an underestimate of the gap. ↩
5. For the health interventions in the DCP, the median effectiveness was about 5, while the mean was 25. We’d expect the mean in practice to be slightly less due to reversion to the mean. On the
other hand, if we’re considering all types of intervention (rather than just health), the overall dispersion will be much larger, making the gap between the mean and the median significantly
higher again. So, assuming the median effectiveness is about one fifth of mean effectiveness is likely to be an overestimate. ↩
6. Assuming the mean pound that goes to charity does more good than a pound spent on consumption ↩
Block and tackle breaking strength
I'm interested in how reeving a line through blocks affects the whole system's breaking strength.
Suppose I have a line with breaking strength B and I produce a block and tackle with mechanical advantage A (i.e. pull A feet to move the moving part 1 foot).
To simplify things, for now, imagine the blocks and attachment points have infinite breaking strength and zero friction, and that the curve in the rope at all points (including knots and splices)
does not decrease the breaking strength of the rope.
I would guess that in this idealized scenario, the breaking strength of the whole system is now A x B, since to achieve a tension of B in the line, I would have to load the moving part with a tension
of A x B.
The first thing that doesn't feel right is that, at the points where the line turns around the block, the line has tension in opposite directions of equal magnitude, which should add up. Imagine a
2:1 purchase, rove to disadvantage. The moving part has two bits of line sticking out, straight up. If I put a weight W on the moving part, each bit of line supports W/2, which initially support the
above reasoning for the breaking strength being A x B (since the line will break when W = 2 x B).
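Formalizing that first argument, on the idealized assumption that the A parts supporting the moving block share the load equally:

    T_per_part = W / A, so the first part reaches its breaking strength B when W = A x B.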
But the "two bits of line" are really just two ends of the same line being pulled away from another, each with tension W/2, so in between them, surely the tension is now W. At the very least, the
block in the moving part is pressing down (across) the line with force W. So this seems to imply that the breaking strength is now just B again.
An important thing that I don't know how to account for here is how strong a line is when a force is applied transversely, rather than longitudinally. I would imagine it depends on how the line is
assembled (single braid, double braid, three strand, etc.).
Finally, as an application: I put together a soft shackle last night from 3/16-in dyneema with a breaking strength of 5,400 lbs. The open shackle has the line doubled up, and the closed shackle is
doubled up again, so it seems the breaking strength of the whole thing (by the original A x B argument) should be 21,600 lbs, minus whatever loss in strength is created by the knot, and assuming that
all parts share the load equally.
s/v Essorant
1972 Catalina 27
Last edited by AdamLein; 01-29-2011 at 12:42 PM.
Patent US20030208343 - Design optimization of circuits by selecting design points on parameter functions to improve optimizing parameters of circuits within design constraints
[0022] The present invention is a method and computer program product for determining the optimal values of the design parameters of a circuit block. Parameter functions relating the design
parameters for circuits in the circuit block are created. Based on these parameter functions, the design parameters are optimized to satisfy the design constraints. In one embodiment, the design
parameters include power and delay and the parameter functions are power-delay curves. The power-delay curves are generated using a timing simulator, a power estimator, and transistor sizing tools.
The invention provides a technique to help designer to perform trade-off analysis for optimizing the design while meeting the design constraints.
[0023] In the following description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to
one skilled in the art that these specific details are not required in order to practice the present invention. In other instances, well known electrical structures and circuits are shown in block
diagram form in order not to obscure the present invention.
[0024] In a circuit design, the design engineer typically faces a number of design parameters and design constraints. The design constraints are usually dictated by the system requirements and
specifications. Examples of the design constraints include propagation delay, power consumption, packaging, number of input/output (I/O) lines, etc. The design constraints are typically imposed on
one or more design parameters, while leaving other parameters to be optimized to achieve high performance. The design parameters, therefore, are divided into two parameter sets: a constraint set and
an optimizing set. The “constraint set” includes constraint parameters which are the parameters that have to meet the design constraints. The “optimizing set” includes the optimizing parameters which
are the parameters that need to be optimized. In an exemplary scenario, a constraint parameter is the propagation delay and an optimizing parameter is the power consumption. In another scenario, the
propagation delay is the optimizing parameter and the power consumption is the constraint parameter.
[0025] The relationship between the constraint parameters and the optimizing parameters is described by a parameter function. A “parameter function” describes the variation of one parameter as a
function of another parameter. For example, a parameter function may describe the variation of the power consumption as a function of the delay. The variation of one parameter as a function of
another is typically caused by a configuration of the circuit such as the size of the transistors, the choice of circuit technology (e.g., domino versus static), etc. A configuration of the circuit
that gives rise to the particular values of the design parameters corresponds to a design point.
[0026] A system, a subsystem, a module or a functional block may consist of a number of circuits. Each circuit is characterized by a parameter function. Optimizing the design of a subsystem or
functional block involves a trade-off consideration of all the parameter functions of all the individual circuits of the subsystem or functional block. For a parameter function of a given circuit,
there are many design points corresponding to different circuit configurations. Therefore, optimizing a subsystem or functional block involves the selection of the design points on the parameter
functions that provide the optimal values of the optimizing parameters and acceptable values of the constraint parameters. The present invention provides a technique to optimize the overall design
using the parameter functions.
[0027]FIG. 1A is a diagram illustrating an engineering design cycle in accordance with the teachings of the invention. The engineering design cycle 100 includes a first logic synthesis phase 110, a
circuit design phase 120, a design optimization phase 130, and a second logic synthesis phase 140.
[0028] The first logic synthesis phase 110 provides the high level logic description and/or design of the circuits. In the first logic synthesis phase 110, the designer synthesizes the circuits
manually or using a number of tools, including Computer-Aided Design (CAD) tools. Examples of CAD tools include hardware description language (HDL) compilers and schematic entry tools. The result of
the first logic synthesis phase 110 includes the design in high level form such as a textual description of circuit at the behavioral level, register transfer language (RTL), or microarchitecture.
[0029] The circuit design phase 120 receives the generated logic synthesis files to generate the synthesized circuits. The synthesized circuits may be represented by circuit schematics, a netlist of
the circuits, or any other convenient form that can be further processed by additional CAD tools. Essentially, the circuit design phase 120 represents an unoptimized complete design that shows
subsystems or functional blocks at the detailed implementation level.
[0030] The design optimization phase 130 determines the optimal values for the design parameters to meet the design constraints. In the design optimization phase 130, the design engineer uses a
design workstation or a computer system 132. The computer system 132 is supported by a design environment which includes the operating system and many CAD tools, such as a timing analyzer, a power estimator, and a transistor sizing tool, to adjust the design parameters according to the allowable design budgets. The design optimization phase 130 typically produces a number of parameter functions that
relate the design parameters for the circuits. An example of such a parameter function is a power-delay curve 135. The power-delay curve 135 shows the relationship between the power consumption and
the propagation delay for a particular circuit in a functional block. The power-delay curve 135 has a number of design points corresponding to different implementations or configurations of the
circuit under consideration. The power-delay curve 135 provides the design engineer the basic information to optimize his or her circuit under the specified design constraints.
[0031] As shown in FIG. 1A, from the information provided by the power-delay curve 135, the design engineer modifies the circuit design according to the design points. The exemplary power-delay curve
135 has three design points A, B, and C. The design point A corresponds to a circuit implementation that has high power consumption and fast speed, representing an undesirable implementation because
of excessive power consumption. The design point B corresponds to the optimal power consumption and optimal speed, representing the best circuit implementation. The design point C corresponds to
low power consumption and acceptable speed, representing a desirable implementation. If the circuit implementation is at the design point A, the design engineer will have the option to go back to the
first logic synthesis phase 110 or the circuit design phase 120. If the circuit implementation is at the design point C, the design engineer will go to the second logic synthesis phase 140.
[0032] The second logic synthesis phase 140 is essentially the same as the first logic synthesis phase 110 with the exception that the design engineer now focuses more on giving the extra design
margin to other circuits in the subsystem or functional block. The low power consumption at the design point C provides more margin to the power budget for other circuits. In the second logic
synthesis phase 140, the design engineer modifies the circuit synthesis based on the extra margin, such as repartitioning, floor-plan editing, etc.
[0033]FIG. 1B is a diagram illustrating one embodiment of a computer system 132 in which one embodiment of the present invention may be utilized. The computer system 132 comprises a processor 150, a
host bus 155, a peripheral bridge 160, a storage device 165, an advanced graphics processor 175, a video monitor 177, and a peripheral bus 180.
[0034] The processor 150 represents a central processing unit of any type of architecture, such as complex instruction set computers (CISC), reduced instruction set computers (RISC), very long
instruction word (VLIW), or hybrid architecture. The processor 150 is coupled to the peripheral bridge 160 via the host bus 155. While this embodiment is described in relation to a single processor
computer system, the invention could be implemented in a multi-processor computer system.
[0035] The peripheral bridge 160 provides an interface between the host bus 155 and a peripheral bus 180. In one embodiment, the peripheral bus 180 is the Peripheral Components Interconnect (PCI) bus. The peripheral bridge 160 also provides the graphics port, e.g., Accelerated Graphics Port (AGP), or the graphics bus 172 for connecting to a graphics controller or advanced graphics processor 175. The advanced graphics processor 175 is coupled to a video monitor 177. The video monitor 177 displays graphics and images rendered or processed by the advanced graphics processor 175. The peripheral
bridge 160 also provides an interface to the storage device 165.
[0036] The storage device 165 represents one or more mechanisms for storing data. For example, the storage device 165 may include non-volatile or volatile memories. Examples of these memories include
flash memory, read only memory (ROM), or random access memory (RAM). FIG. 1B also illustrates that the storage device 165 has stored therein data 167 and program code 166. The data 167 stores
graphics data and temporary data. Program code 166 represents the necessary code for performing any and/or all of the techniques in the present invention. Of course, the storage device 165 preferably
contains additional software (not shown), which is not necessary to understanding the invention.
[0037] The peripheral bus 180 represents a bus that allows the processor 150 to communicate with a number of peripheral devices. The peripheral bus 180 provides an interface to a
peripheral-to-expansion bridge 185, peripheral devices 190-1 to 190-N, a mass storage controller 192, a mass storage device 193, and mass storage media 194. The peripheral devices 190-1 to 190-N represent any device that is interfaced to the peripheral bus 180. Examples of peripheral devices are a fax/modem controller, an audio card, a network controller, etc. The mass storage controller 192 provides control functions to the mass storage device 193. The mass storage device 193 is any device that stores information in a non-volatile manner. Examples of the mass storage device 193 include a hard disk, a floppy disk, and a compact disk (CD) drive. The mass storage device 193 receives the mass storage media 194 and reads their contents to configure the design environment for the design process.
[0038] The mass storage media 194 contain programs or software packages used in the environment. The mass storage media 194 represent a computer program product having program code or code segments
that are readable by the processor 150. A program code or a code segment includes a program, a routine, a function, a subroutine, or a software module that is written in any computer language (e.g.,
high level language, assembly language, machine language) that can be read, processed, compiled, assembled, edited, downloaded, transferred, or executed by the processor 150. The mass storage media
194 include any convenient media such as floppy diskettes, compact disk read only memory (CDROM), digital audio tape (DAT), optical laser disc, or communication media (e.g., internet, radio frequency
link, fiber optics link). For illustrative purposes, FIG. 1B shows floppy diskettes 195 and compact disk read only memory (CDROM) 196. The floppy diskettes 195 and/or CDROM 196 contain design
environment 198. Examples of the tools or computer readable program code in the design environment 198 include an operating system and computer-aided design (CAD) tools such as schematic capture, hardware description language (HDL) compiler, text editors, netlist generator, timing analyzer, power vector generator, timing simulator, power simulator, circuit configuration, component sizer, parameter function generator, parameter optimizer, and graphics design environment. These tools, together with the operating system of the computer system 132, form the design environment 198 on which the
design and optimization process can be carried out.
[0039] The peripheral-to-expansion bridge 185 represents an interface device between the peripheral bus 180 and an expansion bus 187. The expansion bus 187 represents a bus that interfaces to a number of expansion devices 188-1 to 188-K. Examples of expansion devices include a parallel input/output (I/O) device and a serial communication interface device. In one embodiment, the expansion bus 187 is an Industry Standard Architecture (ISA) or Extended Industry Standard Architecture (EISA) bus.
[0040] The computer system 132 can be used in all or part of the phases of the design process. The processor 150 executes instructions in the program code 166 to access data 167 and interact with the
design environment 198. In particular, the computer system 132 is used in the design optimization phase 130.
[0041]FIG. 2 is a diagram illustrating a design optimization phase according to one embodiment of the invention. The design optimization phase 130 includes a netlist generation module 210, a critical
path generation module 223, a power vector generation module 227, a delay calculation module 233, a power calculation module 237, a circuit configuration module 240, a parameter function generation
module 250, and an optimization module 260. Each of these modules may be a software module, a hardware module, or a combination of both. In one embodiment, these modules are implemented by program code that is read and executed by the processor 150.
[0042] The netlist generation module 210 generates the circuit netlist which provides the information on component identification and how the components of the circuit are interconnected. The circuit
netlist becomes the input to the critical path generation module 223 and the power vector generation module 227. The critical path generation module 223 generates timing delays of various paths in
the circuit based on circuit components and interconnection patterns. From these timing delays, the critical path(s) is (are) identified. The critical path represents the path through which the
overall propagation delay is the most critical, e.g., timing parameters (e.g., setup time, hold time) are difficult to satisfy. The timing files generated by the critical path generation module 223
become the input to the delay calculation module 233. The delay calculation module 233 calculates the delays of the critical paths and other paths using a timing simulator. In one embodiment, the
timing simulator is the PathMill tool, developed by Epic Technologies, now owned by Synopsys, of Mountain View, Calif. The timing values are then forwarded to the circuit configuration module 240. On
the power side, the power vector generation module 227 generates power vectors as input to the power calculation module 237. The power calculation module 237 calculates the power consumption of the
circuit using a power estimator tool. In one embodiment, the power estimator tool is the PowerMill tool, developed by Epic Technologies of Mountain View, Calif. The power values are then forwarded to
the circuit configuration module 240.
[0043] The circuit configuration module 240 configures the circuit to effectuate the power consumption and delay. One configuration is scaling the sizes (e.g., transistor size) of the circuit
components using a sizing tool. In one embodiment, the sizing tool is Amps developed by Epic Technologies of Mountain View, Calif. The sizing tool applies scale factors to scale down the circuit
elements either globally or locally. The resulting circuit is then simulated again for the next delay and power values. The circuit configuration module 240 generates new circuit information to be
fed back to the delay calculation module 233 and the power calculation module 237. The process continues until all the values within the range of the scaling have been used. Then the delay and power
values are forwarded to the parameter function generation module 250. The parameter function generation module 250 generates the parameter function (e.g., power-delay curves) showing the relationship
between the design parameters. The parameter function generation module 250 may also generate the design parameters in any other convenient forms for later processing.
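As a rough sketch of that sweep, the curve-generation loop could look like the following; apply_scale, run_timing_sim, and run_power_est are hypothetical placeholders standing in for the sizing tool, timing simulator, and power estimator (e.g., Amps, PathMill, and PowerMill), not their actual interfaces:

```python
def generate_power_delay_curve(netlist, scale_factors):
    """Sweep the scaling range and record one (delay, power) design point
    per circuit configuration, as in the loop described above."""
    curve = []
    for s in scale_factors:
        sized = apply_scale(netlist, s)   # resize circuit elements (hypothetical tool call)
        delay = run_timing_sim(sized)     # critical-path delay in nsec (hypothetical tool call)
        power = run_power_est(sized)      # average current in mA (hypothetical tool call)
        curve.append((delay, power))
    # Sorting by delay makes the list trace out the power-delay curve directly.
    return sorted(curve)
```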
[0044] The optimization module 260 receives the values of the design parameters either in the form of a parameter curve, or in any other convenient format. The optimization module 260 determines the
optimal values of the design parameters.
[0045]FIG. 3 is a diagram illustrating an environment for the power modules using tools according to one embodiment of the invention. The environment 300 includes a format converter module 310, the
power vector generation module 227, and the power calculation module 237.
[0046] The format converter module 310 receives a default file 312 and Intel SPice (ISP) files 314, and generates a netlist file (.ntl) 318. In one embodiment, the format converter module 310 uses a
program called ISPECE2 to map interconnect models and assign transistor sizes for transistors. The ISPECE2 program converts a netlist file into another netlist file which has a format compatible with
the program used in the power calculation module 237. The default file 312 is used by the ISPECE2 program and provides some default information based on the current circuit technology. The ISP files
are circuit description files or netlist files that describe the transistor sizes, cell names, and circuit connectivity for the designs. The .ntl file 318 is a file containing the circuit description
and/or netlist file in a format compatible with the power calculation module 237 (e.g., the Epic format which is a format compatible with tools by Epic Technologies).
[0047] The power vector generation module 227 receives a command text (.tcmd) file 322 and generates a vector (.vec) file 328. In one embodiment, the power vector generation module uses an Intel
Vector Generation (iVGEN) program. The .tcmd file is a small text file which has a list of input pins for a given circuit. The program iVGEN uses the .tcmd file to generate vectors corresponding to
the pins listed in the .tcmd file. A script program takes the circuit's ISP file 314 and generates the .tcmd file. The .vec file 328 is the file containing the vectors needed to run the power calculation module 237. The .vec file basically contains the time steps and a list of binary values (0's and 1's) for each input in the circuit.
[0048] The power calculation module 237 receives the .ntl file 318, the .vec file 328, a circuit technology (.tech) file 332, a configuration (.cfg) file 334, and a capacitance (.cap) file 336, and
generates a log (.log) file 342 and an error (.err) file 344. In one embodiment, the power calculation module 237 uses a power estimator tool, e.g., PowerMill. The .tech file is a circuit technology
dependent file that lists the process parameters for a given circuit technology. This file is used by the PowerMill tool to calculate the current consumption of each transistor in the circuit. The
.cfg file 334 is a command file for the PowerMill. The .cfg file 334 is a user generated text file used to instruct the PowerMill what kind of outputs to generate. The command line syntax is
according to the PowerMill format. The .cap file 336 is a file containing parasitic capacitance from the layout, if the layout exists. If the circuit is designed before the layout, there is no .cap file 336 and the PowerMill estimates the parasitic capacitance. The .log file 342 contains the output of the power calculation module 237: a list of average and peak current consumption for the entire simulated circuit. The .err file 344 is a file containing errors that may occur during the simulation. Examples of the errors include incomplete input vectors or syntax problems in the input files.
[0049]FIG. 4 is a diagram illustrating power-delay curves according to one embodiment of the invention. FIG. 4 shows two curves: a domino curve 410 and a static curve 420.
[0050] The power-delay curves in FIG. 4 show the parameter function for an arithmetic circuit. The arithmetic circuit can be designed using a domino circuit technology or a static circuit technology.
The domino curve 410 is the power-delay curve for the circuit using the domino circuit technology and the static curve 420 is the power-delay curve for the circuit using the static circuit technology.
[0051] The domino curve 410 has two design points A and B. The design point A corresponds to the current domino design. At this design point, the circuit has a delay of approximately 1.35 nsec and a
power consumption of approximately 14 mA. The design point B corresponds to another domino design with longer delay at approximately 1.62 nsec and a power consumption of approximately 6.1 mA.
Therefore the saving in power to go from design point A to design point B is 53% for a delay penalty of 23%.
[0052] The static curve 420 has a design point C. The static curve 420 has a delay limit at approximately 1.42 nsec. The design point C is at a delay of approximately 1.62 nsec and a power
consumption of approximately 4.5 mA. Therefore, the design point C has approximately the same delay as the design point B of the domino curve 410 but has an additional power saving of 16%.
[0053] The parameter curves therefore provide the design engineer an immediate visualization of the relationship between the design parameters, e.g., power and delay, so that optimization can be carried out.
[0054]FIG. 5 is a diagram illustrating an example of an arithmetic logic unit (ALU) datapath subsystem or functional block (FB) according to one embodiment of the invention. The ALU datapath FB 500
includes an input multiplexer (MUX) 510, a comparator 520, a static adder 530, and an output MUX 540. The ALU datapath FB 500 is a common design used in the processor 150 or the graphic processor 175
in FIG. 1B.
[0055] In this illustrative example, the design parameters include power and delay. The parameter function is the power-delay curve. The constraint parameter is the propagation delay through the ALU
FB 500 and the optimizing parameter is the power. The optimization is to minimize the overall power consumption while keeping the propagation delay within the specified design constraint.
[0056] The input MUX 510, the comparator 520, the static adder 530 and the output MUX 540 form a cascaded chain of circuit elements which has a critical path going from one end to the other end. The
composite delay is the sum of the individual delays through each of the circuit elements. In addition, it is assumed that these circuit elements are all active, i.e., the power consumption of the ALU FB 500 is the sum of the individual power consumptions.
[0057] The design optimization phase includes the generation of the power-delay curves for the individual circuit elements. Then a trade-off analysis is performed on these power-delay curves. Each
power-delay curve has several design points. Each design point corresponds to a design configuration of the circuit. In one embodiment, the design configuration is the transistor size characterized
by a scale factor. The design optimization phase begins with a set of initial design points on the power-delay curves. These initial design points correspond to a composite delay that meets the
specified timing constraint. The optimization process then proceeds to iteratively determine the new set of design points on the power-delay curves such that the specified timing constraint remains
met while the total power is reduced.
[0058]FIG. 6A is a diagram illustrating a power-delay curve 610A for the input multiplexer shown in FIG. 5 according to one embodiment of the invention. The power-delay curve 610A has two design
points, A and B. The design point A has a delay value of 0.25 nsec and a power value of 3.2 mA. The design point B has a delay value of 0.28 nsec and a power value of 1.6 mA. A and B are the initial
and new design points, respectively. The arrow shows the move from design point A to design point B during the design optimization phase.
[0059]FIG. 6B is a diagram illustrating a power-delay curve 610B for the comparator shown in FIG. 5 according to one embodiment of the invention. The power-delay curve 610B has two design points, C
and D. The design point C has a delay value of 1.12 nsec and a power value of 1.0 mA. The design point D has a delay value of 1.05 nsec and a power value of 1.9 mA. C and D are the initial and new
design points, respectively. The arrow shows the move from design point C to design point D during the design optimization phase.
[0060]FIG. 6C is a diagram illustrating a power-delay curve 610C for the static adder shown in FIG. 5 according to one embodiment of the invention. The power-delay curve 610C has two design points, E
and F. The design point E has a delay value of 1.23 nsec and a power value of 10.0 mA. The design point F has a delay value of 1.37 nsec and a power value of 4.0 mA. E and F are the initial and new
design points, respectively. The arrow shows the move from design point E to design point F during the design optimization phase.
[0061]FIG. 6D is a diagram illustrating a power-delay curve for the output multiplexer shown in FIG. 5 according to one embodiment of the invention. The power-delay curve 610D has two design points,
G and H. The design point G has a delay value of 1.75 nsec and a power value of 4.0 mA. The design point H has a delay value of 1.65 nsec and a power value of 6.0 mA. G and H are the initial and new
design points, respectively. The arrow shows the move from design point G to design point H during the design optimization phase.
[0062] The power and delay parameters obtained from the power-delay curves 610A, 610B, 610C, and 610D have the following values:

Circuit              Initial point  Delay (nsec)  Power (mA)   New point  Delay (nsec)  Power (mA)
Input MUX 510        A              0.25          3.2          B          0.28          1.6
Comparator 520       C              1.12          1.0          D          1.05          1.9
Static adder 530     E              1.23          10.0         F          1.37          4.0
Output MUX 540       G              1.75          4.0          H          1.65          6.0
Total                               4.35          18.2                    4.35          13.5
[0063] Therefore, it is seen that the new design points B, D, F, H result in the same composite delay of 4.35 nsec, but with a 25.8% saving in power.
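For illustration only, the selection of points B, D, F, and H can be reproduced by a brute-force search over one design point per circuit, using the values read off FIGS. 6A through 6D; this is a minimal sketch of the automatic optimization mode, not the patented tool itself:

```python
from itertools import product

# (delay_nsec, power_mA) design points per circuit, from FIGS. 6A-6D.
curves = {
    "input MUX 510":    [(0.25, 3.2), (0.28, 1.6)],   # points A, B
    "comparator 520":   [(1.12, 1.0), (1.05, 1.9)],   # points C, D
    "static adder 530": [(1.23, 10.0), (1.37, 4.0)],  # points E, F
    "output MUX 540":   [(1.75, 4.0), (1.65, 6.0)],   # points G, H
}

def optimize(curves, max_delay):
    """Pick one design point per circuit to minimize total power while the
    composite delay (the sum of the individual delays) meets the constraint."""
    best = None
    for combo in product(*curves.values()):
        delay = sum(d for d, _ in combo)
        power = sum(p for _, p in combo)
        # Small tolerance guards against floating-point rounding of the sums.
        if delay <= max_delay + 1e-9 and (best is None or power < best[0]):
            best = (power, delay, combo)
    return best

power, delay, combo = optimize(curves, max_delay=4.35)
print(delay, power)   # 4.35 nsec at 13.5 mA: the B, D, F, H selection
```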
[0064] The power-delay curves in FIGS. 6A, 6B, 6C, and 6D illustrate the optimization process by varying the variable design parameter and selecting the best overall values. The variable design
parameter is common to all the curves. In this example, the variable design parameter is the transistor size, or the power of the block.
[0065] The optimization process can be applied for different circuit configurations. For example, a circuit block can be designed using a static circuit technology or a dynamic (e.g., domino) circuit
technology as illustrated in FIG. 4. In another example, a circuit block may be designed using a multiplexer or a decoder. In these cases, the optimization process can be carried out based on the
parameter function, e.g., power-delay curve.
[0066]FIG. 7 is a diagram illustrating a comparison of the power-delay curves for three different implementations of an example circuit according to one embodiment of the invention. The power-delay curves 710, 720, and 730 correspond to the initial, better, and worse designs, respectively.
[0067] The power-delay curve 710 has high power consumption but fast speed. The power-delay curve 720 has a wider delay range and reasonable power consumption. The power-delay curve 730 is similar to
720 but the delay covers a slower range.
[0068] Suppose the design constraint is a delay of approximately 1.5 nsec. Under this timing constraint, it is seen that the design depicted by the power-delay curve 730 is not acceptable. Both
designs depicted by the power-delay curves 710 and 720 are acceptable because they cover the specified timing constraint. However, the power-delay curve 720 shows a better design because at 1.5 nsec,
it results in a 50% power reduction.
[0069]FIG. 8 is a diagram illustrating a design process 800 according to one embodiment of the invention. The design process 800 includes an initial phase 810, an optimization phase 820, and a final
phase 830.
[0070] In the initial phase 810, the design engineer generates the initial design parameter files for all circuits in the circuit block. The initial design parameters are selected according to some
predetermined criteria. In most cases, they are selected based on the experience of the design engineer. The selected design parameters may or may not meet the design constraints.
[0071] In the optimization phase 820, the design engineer optimizes the optimizing parameters according to the design constraints imposed on the constraint parameters. As part of the optimization
phase 820, parameter functions are generated to facilitate the trade-off analysis. In one embodiment, the parameter functions are shown as parameter data files which contain values of the parameters
at various design points. The optimization process can be done manually, automatically, or semiautomatically. In manual mode, the design engineer examines the parameter function and adjusts the
design points in an iterative manner to improve the optimizing parameter(s) while keeping the constraint parameters within the specified range. In automatic mode, an optimization program reads the parameter data files and processes the data according to an optimization algorithm. Numerous optimization algorithms for numerical data exist. Examples include traditional search algorithms,
branch-and-bound, scheduling techniques, and genetic algorithms. In semiautomatic mode, part of the optimization can be done manually and part is done automatically. The optimization is usually done
iteratively. The optimizing parameters are selected to provide overall optimal values while keeping the constraint parameters within the specified design constraints. The iterative process is
terminated when the optimal values are within a predetermined range.
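A minimal sketch of the iterative adjustment described above is a greedy loop that repeatedly takes the single-circuit move with the largest power saving that keeps the composite delay within the constraint; this is an illustrative heuristic consistent with the description, not the specific algorithm of the patent:

```python
def greedy_step(curves, picks, max_delay):
    """One pass of iterative improvement: move one circuit to a lower-power
    design point if the composite delay stays within the constraint.
    curves: dict name -> list of (delay, power) points
    picks:  dict name -> index of the currently chosen point"""
    delay = sum(curves[n][i][0] for n, i in picks.items())
    best = None
    for name, points in curves.items():
        cur_d, cur_p = points[picks[name]]
        for j, (d, p) in enumerate(points):
            if p < cur_p and delay - cur_d + d <= max_delay + 1e-9:
                saving = cur_p - p
                if best is None or saving > best[0]:
                    best = (saving, name, j)
    if best is None:
        return False          # no feasible single-circuit improvement remains
    picks[best[1]] = best[2]
    return True

def greedy_optimize(curves, picks, max_delay):
    # e.g., picks = {name: 0 for name in curves} starts from the initial points.
    while greedy_step(curves, picks, max_delay):
        pass
    return picks
```

Note that on the FIG. 6 data this loop stalls at the initial points A, C, E, G: the adder's large saving (E to F) only becomes feasible after first spending extra power on the comparator and output MUX to buy delay slack, which a one-move-at-a-time descent never does. That local-optimum behavior is presumably one reason the description also lists global methods such as branch-and-bound and genetic algorithms.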
[0072] The final phase 830 generates the new design parameter files from the values determined by the optimization phase 820. The new design parameters are used for all the circuits in the subsystem
or functional block.
[0073] The product as a result of the above three phases may include a design environment, a computer program product, a software library, a script file, a program, or a function stored in any
computer readable media. The elements of the product include a code or program segment to create the parameter functions for the circuits, a code or program segment to optimize the design parameters based on the generated parameter functions, a code or program segment to configure each circuit in the subsystem or functional block, a code or program segment to generate the values of
the design parameters to be used in creating the parameter functions, a code or program segment to select the values of the constraint parameters to meet the design constraints, a code or program
segment to determine the values of the optimizing parameters, a code or program segment to iterate the determination and selection of the values of the parameters, a code or program segment to size
the circuit components, a code or program segment to select the circuit technology (e.g., domino versus static), and a code or program segment to perform CAD functions (e.g., netlist generation,
timing analysis, power vector generation, timing and power calculations). As is known by one skilled in the art, these code or program segments may link, call, or invoke other programs or modules
including the CAD tools.
[0074] The present invention therefore is a technique to optimize the design of a subsystem or functional block having a number of circuits. The subsystem or functional block has a set of design
parameters which are divided into two groups: optimizing parameters and constraint parameters. The technique includes the generation of parameter functions or data files which show the relationship
between the design parameters. An optimization process is then carried out to select the optimal values for the optimizing parameters while keeping the constraint parameters within the
specified range. The technique provides the design engineer a global picture of the overall design so that global optimization can be performed.
[0075] While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the
illustrative embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains are deemed to lie within the spirit and scope
of the invention.
[0009] The features and advantages of the present invention will become apparent from the following detailed description of the present invention in which:
[0010]FIG. 1A is a diagram illustrating an engineering design cycle in accordance with the teachings of the invention.
[0011]FIG. 1B is a diagram illustrating a computer system in which one embodiment of the present invention may be utilized.
[0012]FIG. 2 is a diagram illustrating a design optimization phase according to one embodiment of the invention.
[0013]FIG. 3 is a diagram illustrating an environment for the power modules using tools according to one embodiment of the invention.
[0014]FIG. 4 is a diagram illustrating power-delay curves according to one embodiment of the invention.
[0015]FIG. 5 is a diagram illustrating an example of an arithmetic logic unit datapath functional block according to one embodiment of the invention.
[0016]FIG. 6A is a diagram illustrating a power-delay curve for the input multiplexer shown in FIG. 5 according to one embodiment of the invention.
[0017]FIG. 6B is a diagram illustrating a power-delay curve for the comparator shown in FIG. 5 according to one embodiment of the invention.
[0018]FIG. 6C is a diagram illustrating a power-delay curve for the static adder shown in FIG. 5 according to one embodiment of the invention.
[0019]FIG. 6D is a diagram illustrating a power-delay curve for the output multiplexer shown in FIG. 5 according to one embodiment of the invention.
[0020]FIG. 7 is a diagram illustrating a comparison of the power-delay curves for three different implementations of an example circuit according to one embodiment of the invention.
[0021]FIG. 8 is a diagram illustrating a design process according to one embodiment of the invention.
[0001] 1. Field of the Invention
[0002] This invention relates to computer systems. In particular, the invention relates to circuit design techniques and computer-aided design (CAD) software tools.
[0003] 2. Description of Related Art
[0004] In the design of circuits of a system, the design engineer is typically faced with a number of decisions based on system requirements and criteria. The system requirements and criteria usually
dictate the specifications of the circuits in the system in terms of design constraints. The design constraints limit the choice of design techniques, component selection, design parameters, etc. The
design engineer has to satisfy the design constraints and, at the same time, optimize the design in terms of other design parameters. This task becomes increasingly difficult as the complexity of the
system increases. The design engineer has to perform complex trade-off analysis to optimize the design from a number of design parameters. Examples of the design parameters include propagation delay
and the power consumption of the circuit.
[0005] Propagation delay and power consumption are two important design parameters. As microprocessor technology is becoming more advanced in speed and complexity, device power consumption increases
significantly. Even with processor operating voltage reduction, device power consumption still grows by several orders of magnitude. This is largely due to an increased use of on-chip
hardware to get parallelism and improve microprocessor performance. In addition, to get extra performance on certain critical timing paths, device sizes are being optimized to provide faster delays
at the circuit level. However, size optimization in a given design is a very time-consuming process. Often, the penalty of upsizing transistors to get performance boosts comes at the expense of a much
larger increase in circuit power consumption.
[0006] When a system or a functional block consists of many circuits, it is difficult to optimize the design while still meeting the design constraints. Currently, there is no known technique to
allow the design engineer to optimize the overall design in a systematic and efficient manner. The design engineer usually works on each circuit separately and performs incremental optimization. This
process is tedious and does not give the design engineer a global picture of the entire design.
[0007] Therefore there is a need in the technology to provide a simple and efficient method to optimize the design.
[0008] The present invention is a method and computer program product for determining optimal values of design parameters of a subsystem to meet design constraints. The subsystem comprises a
plurality of circuits. Parameter functions are created for the corresponding circuits. The parameter functions represent a relationship among the design parameters. The design parameters are
optimized based on the parameter functions to satisfy the design constraints.
Proofs that require fundamentally new ways of thinking
I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is
that a powerful and unexpected technique is introduced that comes to seem very natural once you are used to it.
Example 1. Euler's proof that there are infinitely many primes.
If you haven't seen anything like it before, the idea that you could use analysis to prove that there are infinitely many primes is completely unexpected. Once you've seen how it works, that's a
different matter, and you are ready to contemplate trying to do all sorts of other things by developing the method.
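(As a reminder, the standard identity at the heart of the argument is $$\prod_{p\ \mathrm{prime}} \left(1-\frac{1}{p}\right)^{-1} = \sum_{n=1}^{\infty}\frac{1}{n};$$ if there were only finitely many primes, the left-hand side would be finite, while the harmonic series on the right diverges.)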
Example 2. The use of complex analysis to establish the prime number theorem.
Even when you've seen Euler's argument, it still takes a leap to look at the complex numbers. (I'm not saying it can't be made to seem natural: with the help of Fourier analysis it can. Nevertheless,
it is a good example of the introduction of a whole new way of thinking about certain questions.)
Example 3. Variational methods.
You can pick your favourite problem here: one good one is determining the shape of a heavy chain in equilibrium.
Example 4. Erdős's lower bound for Ramsey numbers.
One of the very first results (Shannon's bound for the size of a separated subset of the discrete cube being another very early one) in probabilistic combinatorics.
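(Concretely: a uniformly random 2-colouring of the edges of $K_n$ has expected number $\binom{n}{k}2^{1-\binom{k}{2}}$ of monochromatic copies of $K_k$, so $$\binom{n}{k}\,2^{1-\binom{k}{2}} < 1 \implies R(k,k) > n.$$)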
Example 5. Roth's proof that a dense set of integers contains an arithmetic progression of length 3.
Historically this was by no means the first use of Fourier analysis in number theory. But it was the first application of Fourier analysis to number theory that I personally properly understood, and
that completely changed my outlook on mathematics. So I count it as an example (because there exists a plausible fictional history of mathematics where it was the first use of Fourier analysis in
number theory).
Example 6. Use of homotopy/homology to prove fixed-point theorems.
Once again, if you mount a direct attack on, say, the Brouwer fixed point theorem, you probably won't invent homology or homotopy (though you might do if you then spent a long time reflecting on your proof).
The reason these proofs interest me is that they are the kinds of arguments where it is tempting to say that human intelligence was necessary for them to have been discovered. It would probably be
possible in principle, if technically difficult, to teach a computer how to apply standard techniques, the familiar argument goes, but it takes a human to invent those techniques in the first place.
Now I don't buy that argument. I think that it is possible in principle, though technically difficult, for a computer to come up with radically new techniques. Indeed, I think I can give reasonably
good Just So Stories for some of the examples above. So I'm looking for more examples. The best examples would be ones where a technique just seems to spring from nowhere -- ones where you're tempted
to say, "A computer could never have come up with that."
Edit: I agree with the first two comments below, and was slightly worried about that when I posted the question. Let me have a go at it though. The difficulty with, say, proving Fermat's last theorem
was of course partly that a new insight was needed. But that wasn't the only difficulty at all. Indeed, in that case a succession of new insights was needed, and not just that but a knowledge of all
the different already existing ingredients that had to be put together. So I suppose what I'm after is problems where essentially the only difficulty is the need for the clever and unexpected idea.
I.e., I'm looking for problems that are very good challenge problems for working out how a computer might do mathematics. In particular, I want the main difficulty to be fundamental (coming up with a
new idea) and not technical (having to know a lot, having to do difficult but not radically new calculations, etc.). Also, it's not quite fair to say that the solution of an arbitrary hard problem
fits the bill. For example, my impression (which could be wrong, but that doesn't affect the general point I'm making) is that the recent breakthrough by Nets Katz and Larry Guth in which they solved
the Erdős distinct distances problem was a very clever realization that techniques that were already out there could be combined to solve the problem. One could imagine a computer finding the proof
by being patient enough to look at lots of different combinations of techniques until it found one that worked. Now their realization itself was amazing and probably opens up new possibilities, but
there is a sense in which their breakthrough was not a good example of what I am asking for.
While I'm at it, here's another attempt to make the question more precise. Many many new proofs are variants of old proofs. These variants are often hard to come by, but at least one starts out with
the feeling that there is something out there that's worth searching for. So that doesn't really constitute an entirely new way of thinking. (An example close to my heart: the Polymath proof of the
density Hales-Jewett theorem was a bit like that. It was a new and surprising argument, but one could see exactly how it was found since it was modelled on a proof of a related theorem. So that is a
counterexample to Kevin's assertion that any solution of a hard problem fits the bill.) I am looking for proofs that seem to come out of nowhere and seem not to be modelled on anything.
Further edit. I'm not so keen on random massive breakthroughs. So perhaps I should narrow it down further -- to proofs that are easy to understand and remember once seen, but seemingly hard to come
up with in the first place.
ho.history-overview big-list proofs
2 Perhaps you could make the requirements a bit more precise. The most obvious examples that come to mind from number theory are proofs that are ingenious but also very involved, arising from a
rather elaborate tradition, like Wiles' proof of Fermat's last theorem, Faltings' proof of the Mordell conjecture, or Ngo's proof of the fundamental lemma. But somehow, I'm guessing that such
complicated replies are not what you have in mind. – Minhyong Kim Dec 9 '10 at 15:18
9 Of course, there was apparently a surprising and simple insight involved in the proof of FLT, namely Frey's idea that a solution triple would give rise to a rather exotic elliptic curve. It seems
to have been this insight that brought a previously eccentric seeming problem at least potentially within the reach of the powerful and elaborate tradition referred to. So perhaps that was a new
way of thinking at least about what ideas were involved in FLT. – roy smith Dec 9 '10 at 16:21
10 Never mind the application of Fourier analysis to number theory -- how about the invention of Fourier analysis itself, to study the heat equation! More recently, if you count the application of
complex analysis to prove the prime number theorem, then you might also count the application of model theory to prove results in arithmetic geometry (e.g. Hrushovski's proof of Mordell-Lang for
function fields). – D. Savitt Dec 9 '10 at 16:42
7 I agree that they are difficult, but in a sense what I am looking for is problems that isolate as well as possible whatever it is that humans are supposedly better at than computers. Those big
problems are too large and multifaceted to serve that purpose. You could say that I am looking for "first non-trivial examples" rather than just massively hard examples. – gowers Dec 9 '10 at
4 It seems to me that this question has been around a long time and is unlikely garner new answers of high quality. It also seems unlikely most would even read new answers. Furthermore, nowadays I
imagine a question like this would be closed as too broad, and if we close this then we'll discourage questions like it in the future. So I'm voting to close. – David White Oct 13 '13 at 18:52
closed as off-topic by David White, Suvrit, Chris Godsil, Vidit Nanda, Fernando Muro Oct 13 '13 at 23:05
• This question does not appear to be about research level mathematics within the scope defined in the help center.
64 Answers
Grothendieck's insight into how to deal with the problem that, whatever topology you define on varieties over finite fields, you never seem to get enough open sets. You simply have to re-define what is meant by a topology, allowing open sets not to be subsets of your space but to be covers.
I think this fits the bill of "seem very natural once you are used to it", but it was an amazing insight, and totally fundamental in the proof of the Weil conjectures.
2 Obviously, I agree that this was fundamental. But since we're speaking only about Grothendieck topologies and not the eventual proof of the Weil conjectures, there could be a curious
sense in which this idea might be particularly natural to computers. Imagine encoding a category as objects and morphisms, which I'm told is quite a reasonable procedure in computer
science. You'll recall then that it's somewhat hard to define a subobject. – Minhyong Kim Dec 10 '10 at 1:20
19 That is, it's easier to refer directly to arrows $A\rightarrow B$ between two of the symbols rather than equivalence classes of them. In this framework, a computer might easily ask itself why any reasonable collection of arrows might not do for a topology. Grothendieck topologies seem to embody exactly the kind of combinatorial and symbolic thinking about open sets that's natural to computers, but hard for humans. We are quite attached to the internal 'physical' characteristics of the open sets, good for some insights, bad for others. – Minhyong Kim Dec 10 '10 at 1:26
1 According to Grothendieck-Serre correspondence, I think it is more appropriate to say it is an insight due to both of them. – temp Jun 3 '12 at 19:49
Although this has already been said elsewhere on MathOverflow, I think it's worth repeating that Gromov is someone who has arguably introduced more radical thoughts into mathematics than
anyone else. Examples involving groups with polynomial growth and holomorphic curves have already been cited in other answers to this question. I have two other obvious ones but there are
many more.
I don't remember where I first learned about convergence of Riemannian manifolds, but I had to laugh because there's no way I would have ever conceived of such a notion. To be fair, all of the groundwork for this was laid out in Cheeger's thesis, but it was Gromov who reformulated everything as a convergence theorem and recognized its power.
Another time Gromov made me laugh was when I was reading what little I could understand of his book Partial Differential Relations. This book is probably full of radical ideas that I don't understand. The one I did was his approach to solving the linearized isometric embedding equation. His radical, absurd, but elementary idea was that if the system is sufficiently
underdetermined, then the linear partial differential operator could be inverted by another linear partial differential operator. Both the statement and proof are for me the funniest in
mathematics. Most of us view solving PDE's as something that requires hard work, involving analysis and estimates, and Gromov manages to do it using only elementary linear algebra. This then
allows him to establish the existence of isometric embedding of Riemannian manifolds in a wide variety of settings.
1 Partial Differential Relations? I don't believe the h-principle requires much analysis. In the sections on isometric embeddings, he does state and give a complete analytic proof of the version of the Nash-Moser implicit function theorem he needs. – Deane Yang Dec 9 '13 at 21:59
Do Cantor's diagonal arguments fit here? (Never mind whether someone did some of them before Cantor; that's a separate question.)
4 I vote yes. Look how hard Liouville had to work to find the first examples of transcendental numbers, and how easy Cantor made it to show that there are scads of them. – Gerry Myerson
Dec 9 '10 at 22:07
4 While Cantor's argument is amazing, and certainly produces scads, Liouville didn't have to work that hard; his approach is also very natural, and doesn't rely on much more than the
pigeon-hole principle. – Emerton Dec 10 '10 at 3:15
7 Cantor's whole idea of casting mathematics in the language of set theory is now so pervasive we don't even think about it. It dominated our subject until the category point of view. So to me these are the two most basic insights, viewing mathematics in terms of sets, and then in terms of maps. Etale topologies are just one example of viewing maps as the basic concept. – roy smith Dec 13 '10 at 17:55
Generating functions seem old hat to those who have worked with them, but I think their early use could be another example. If you did not have that tool handy, could you create it?
Similarly, any technique that has been developed and is now widely used is made to look natural after years of refining and changing the collected perspective, but might it not have seemed
quite revolutionary when first introduced? Perhaps the question should also be about such techniques.

Gerhard "Old Wheels Made New Again" Paseman, 2010.12.09
9 I'd like to add to generating functions the idea that you can use singularity analysis to determine the coefficient growth. But I don't know how unexpected this was when first used... –
Martin Rubey Dec 9 '10 at 17:24
"any technique that has been developed and is now widely used is made to look natural after years of refining and changing the collected perspective, but might it not have seemed quite
2 revolutionary when first introduced?" It surely was, and it is exactly why it is widely used now: it allowed a lot of things that were impossible previously and we are still trying to
figure out how much is "a lot". Also note, that shaping an idea and recognizing its power is a long process, so "unexpected" means that 20 years ago nobody would have thought of that,
not that it shocked everyone on one day. – fedja Dec 17 '10 at 12:53
The method of forcing certainly fits here. Before, set theorists expected that independence results would be obtained by building non-standard, ill-founded models, and model theoretic
methods would be key to achieve this. Cohen's method begins with a transitive model and builds another transitive one, and the construction is very different from all the techniques being
tried before.
This was completely unexpected. Of course, in hindsight, we see that there are similar approaches in recursion theory and elsewhere happening before or at the same time.
But it was the fact that nobody could imagine you would be able to obtain transitive models that mostly had us stuck.
10 I took the last set theory course that Cohen taught, and this isn't how he presented his insight at all (though his book takes this approach). The central problem is "how do I prove that non-constructible [sub]sets [of N] are possible without access to one?", and his solution is "don't use a set; use an adaptive oracle". Once that idea is present, the general method falls right into place. The oracle's set of states can be any partial order, generic filters fall right out, names are clearly necessary, everything else is technical. The hardest part is believing it will actually work. – Chad Groft Dec 14 '10 at 2:07
1 @Chad : Very interesting! Curious that his description is so "recursion-theoretic." Do you remember when was this course? – Andres Caicedo Dec 14 '10 at 2:48
My favorite example from algebraic topology is Rene Thom's work on cobordism theory. The problem of classifying manifolds up to cobordism looks totally intractable at first glance. In low
dimensions ($0,1,2$), it is easy, because manifolds of these dimensions are completely known. With hard manual labor, one can maybe treat dimensions 3 and 4. But in higher dimensions,
there is no chance to proceed by geometric methods.
Thom came up with a geometric construction (generalizing earlier work by Pontrjagin), which is at the same time easy to understand and ingenious. Embed the manifold into a sphere,
collapse everything outside a tubular neighborhood to a point and use the Gauss map of the normal bundle... What this construction does is to translate the geometric problem into a
homotopy problem, which looks totally unrelated at first sight.
The homotopy problem is still difficult, but thanks to work by Serre, Cartan, Steenrod, Borel, Eilenberg and others, Thom had enough heavy guns at hand to get fairly complete results.
Thom's work led to an explosion of differential topology, leading to Hirzebruch's signature theorem, the Hirzebruch-Riemann-Roch theorem, Atiyah-Singer, Milnor-Kervaire classification of
exotic spheres.....until Madsen-Weiss' work on mapping class groups.
I don't know who deserves credit for this, but I was stunned by the concept of viewing complicated objects like functions simply as points in a vector space. With that view one solves and analyzes PDEs or integral equations in Lebesgue or Sobolev spaces.
5 I think one can credit this point of view to Fréchet, who introduced metric spaces for applications to functional analysis. – Qiaochu Yuan Jan 18 '11 at 16:26
What about Euler's solution to the Konigsberg bridge problem? It's certainly not difficult, but I think (not that I really know anything about the history) it was quite novel at the
time.
9 @Kimball: It was so novel that Euler didn't even think the problem or its solution were mathematical. See the extract from a letter of Euler on the page en.wikipedia.org/wiki/Carl_Gottlieb_Ehler. – KConrad Jan 2 '12 at 15:46
Technically, the following are not proofs, or even theorems, but I think they count as insights that have the quality that it's hard to imagine computers coming up with them. First,
Mathematics can be formalized.
Along the same lines, there's:
Computability can be formalized.
If you insist on examples of proofs then maybe I'd be forced to cite the proof of Goedel's incompleteness theorem or of the undecidability of the halting problem, but to me the most
difficult step in these achievements was the initial daring idea that one could even formulate a mathematically satisfactory definition of something as amorphous as "mathematics" or
up vote "computability." For example, one might argue that the key step in Turing's proof was diagonalization, but in fact diagonalization was a major reason that Goedel thought one couldn't come
31 down up with an "absolute" definition of computability.
Nowadays we are so used to thinking of mathematics as something that can be put on a uniform axiomatic foundation, and of computers as a part of the landscape, that we can forget how
radical these insights were. In fact, I might argue that your entire question presupposes them. Would computers have come up with these insights if humans had not imagined that computers
were possible and built them in the first place? Less facetiously, the idea that mathematics is a formally defined space in which a machine can search systematically clearly presupposes
that mathematics can be formalized.
More generally, I'm wondering if you should expand your question to include concepts (or definitions) and not just proofs?
Edit. Just in case it wasn't clear, I believe that the above insights have fundamentally changed mathematicians' conception of what mathematics is, and as such I would argue that they are
stronger examples of what you asked for than any specific proof of a specific theorem can be.
5 There's something amusing about the idea of a computer coming up with the idea that computability can be formalized. – ndkrempel Dec 10 '10 at 13:58
3 This is an example that has bothered me in the past, and I have to admit that I don't have a good answer to it. The ability to introspect seems to be very important to mathematicians, and it's far from clear how a computer would do it. One could perhaps imagine a separate part of the program that looks at what the main part does, but it too would need to introspect. Perhaps this infinite regress is necessary for Godelian reasons but perhaps in practice mathematicians just use a bounded number of levels of navel contemplation. – gowers Dec 10 '10 at
7 Conversely, this type of introspection and formalisation is much less effective outside of mathematics (Weinberg has called this the "unreasonable ineffectiveness of philosophy".) Attempts to axiomatise science, the humanities, etc., for instance, usually end up collapsing under the weight of their own artificiality (with some key exceptions in physics, notably relativity and quantum mechanics). The fact that mathematics is almost the sole discipline that actually benefits from formalisation is indeed an interesting insight in my opinion. – Terry Tao Dec 11 '10 at 16:59
4 But if you axiomatize some portion of some other science, doesn't that axiomatization constitute mathematics? So it seems almost tautologous to say that only mathematics "benefits" from
formalization. – Michael Hardy Dec 11 '10 at 17:11
The use of spectral sequences to prove theorems about homotopy groups. For instance, until Serre's mod C theory, nobody knew that the homotopy groups of spheres were even finitely
generated.
Not sure whether to credit Abel or Galois with the "fundamental new way of thinking" here, but the proof that certain polynomial equations are not solvable in radicals required quite
the reformulation of thinking. (I'm leaning towards crediting Galois with the brain rewiring reward.)
P.S. Is it really the case that no one else posted this, or is my "find" bar not working properly?
3 "Use of group theory to prove insolvability of 5th degree equation" is part of an earlier answer. – Gerry Myerson Dec 12 '10 at 11:14
It seems that certain problems seem to induce this sort of new thinking (cf. my article "What is good mathematics?"). You mentioned the Fourier-analytic proof of Roth's theorem; but in fact
many of the proofs of Roth's theorem (or Szemeredi's theorem) seem to qualify, starting with Furstenberg's amazing realisation that this problem in combinatorial number theory was equivalent
to one in ergodic theory, and that the structural theory of the latter could then be used to attack the former. Or the Ruzsa-Szemeredi observation (made somewhat implicitly at the time) that
Roth's theorem follows from a result in graph theory (the triangle removal lemma) which, in some ways, was "easier" to prove than the result that it implied despite (or perhaps, because of)
the fact that it "forgot" most of the structure of the problem. And in this regard, I can't resist mentioning Ben Green's brilliant observation (inspired, I believe, by some earlier work of
Ramare and Ruzsa) that for the purposes of finding arithmetic progressions, that the primes should not be studied directly, but instead should be viewed primarily [pun not intended] as a
generic dense subset of a larger set of almost primes, for which much more is known, thanks to sieve theory...
Another problem that seems to generate radically new thinking every few years is the Kakeya problem. Originally a problem in geometric measure theory, the work of Bourgain and Wolff in the
early 90s showed that the combinatorial incidence geometry viewpoint could lead to substantial progress. When this stalled, Bourgain (inspired by your own work) introduced the additive
combinatorics viewpoint, re-interpreting line segments as arithmetic progressions. Meanwhile, Wolff created the finite field model of the Kakeya problem, which among other things led to the
sum-product theorem and many further developments that would not have been possible without this viewpoint. In particular, this finite field version enabled Dvir to introduce the polynomial
method which had been applied to some other combinatorial problems, but whose application to the finite field Kakeya problem was hugely shocking. (Actually, Dvir's argument is a great example
up of "new thinking" being the key stumbling block. Five years earlier, Gerd Mockenhaupt and I managed to stumble upon half of Dvir's argument, showing that a Kakeya set in finite fields could
vote not be contained in a low-degree algebraic variety. If we had known enough about the polynomial method to make the realisation that the exact same argument also showed that a Kakeya set could
23 not have been contained in a high-degree algebraic variety either, we would have come extremely close to recovering Dvir's result; but our thinking was not primed in this direction.)
down Meanwhile, Carbery, Bennet, and I discovered that heat flow methods, of all things, could be applied to solve a variant of the Euclidean Kakeya problem (though this method did appear in
vote literature on other analytic problems, and we viewed it as the continuous version of the discrete induction-on-scales strategy of Bourgain and Wolff.) Most recently is the work of Guth, who
broke through the conventional wisdom that Dvir's polynomial argument was not generalisable to the Euclidean case by making the crucial observation that algebraic topology (such as the ham
sandwich theorem) served as the continuous generalisation of the discrete polynomial method, leading among other things to the recent result of Guth and Katz you mentioned earlier.
EDIT: Another example is the recent establishment of universality for eigenvalue spacings for Wigner matrices. Prior to this work, most of the rigorous literature on eigenvalue spacings relied
crucially on explicit formulae for the joint eigenvalue distribution, which were only tractable in the case of highly invariant ensembles such as GUE, although there was a key paper of
Johansson extending this analysis to a significantly wider class of ensembles, namely the sum of GUE with an arbitrary independent random (or deterministic) matrix. To make progress, one had
to go beyond the explicit formula paradigm and find some way to compare the distribution of a general ensemble with that of a special ensemble such as GUE. We now have two basic ways to do
this, the local relaxation flow method of Erdos, Schlein, Yau, and the four moment theorem method of Van Vu and myself, both based on deforming a general ensemble into a special ensemble and
controlling the effect on the spectral statistics via this deformation (though the two deformations we use are very different, and in fact complement each other nicely). Again, both arguments
have precedents in earlier literature (for instance, our argument was heavily inspired by Lindeberg's classic proof of the central limit theorem) but as far as I know it had not been thought
to apply them to the universality problem before.
5 But, Terry, are the adjectives "radical" or "fundamentally new" really justified in the description of any of these examples? and of our business as a whole? – Gil Kalai Dec 10 '10 at 13:30
3 In other words, his contribution was that D:A=C:B (algebraic topology is to continuous incidence geometry as algebraic geometry is to discrete incidence geometry), which was definitely a
very different way of thinking about these four concepts that was totally absent in previous work. (After Guth's work, it is now "obvious" in retrospect, of course.) – Terry Tao Dec 11 '10
at 16:48
2 Perhaps what this example shows is that a computer trying to generate mathematical progress has to look at more than just the 1-skeleton of mathematics (B is solved by C; A is close to B; hence A might be solved by C) but also at the 2-skeleton (B is solved by C; D is to A as C is to B; hence A might be solved by D) or possibly even higher order skeletons. It seems unlikely though that these possibilities can be searched through systematically in polynomial time, without the speedups afforded by human insight... – Terry Tao Dec 11 '10 at 17:12
Another example from logic is Gentzen's consistency proof for Peano arithmetic by transfinite induction up to $\varepsilon_0$, which I think was completely unexpected, and
unprecedented.
I think that Eichler and Shimura's proof of the Ramanujan--Petersson conjecture for weight two modular forms provides an example. Recall that this conjecture is a purely analytic statement:
namely that if $f$ is a weight two cuspform on some congruence subgroup of $SL_2(\mathbb Z)$, which is an eigenform for the Hecke operator $T_p$ ($p$ a prime not dividing the level of the
congruence subgroup in question) with eigenvalue $\lambda_p$, then $| \lambda_p | \leq 2 p^{1/2}.$ Unfortunately, no purely analytic proof of this result is known. (Indeed, if one shifts
one's focus from holomorphic modular forms to Maass forms, then the corresponding conjecture remains open.)
What Eichler and Shimura realized is that, somewhat miraculously, $\lambda_p$ admits an alternative characterization in terms of counting solutions to certain congruences modulo $p$, and
that estimates there due to Hasse and Weil (generalizing earlier estimates of Gauss and others) can be applied to show the desired inequality.
This argument was pushed much further by Deligne, who handled the general case of weight $k$ modular forms (for which the analogous inequality is $| \lambda_p | \leq 2 p^{(k-1)/2}$), using
etale cohomology of varieties in characteristic $p$ (which is something of a subtle and more technically refined analogue of the notion of a congruence mod $p$). (Ramanujan's original conjecture was for the unique cuspform of weight 12 and level 1.)
The idea that there are relationships (some known, others conjectural) between automorphic forms and algebraic geometry over finite fields and number fields has now become part of the
received wisdom of algebraic number theorists, and lies at the heart of the Langlands program. (And, of course, at the heart of the proof of FLT.) Thus the striking idea of Eichler and
Shimura has now become a basic tenet of a whole field of mathematics.
Note: Tim in his question, and in some comments, has said that he wants "first non-trivial instances" rather than difficult arguments that involve a whole range of ideas and techniques. In
his comment to Terry Tao's answer regarding Perelman, he notes that long, difficult proofs might well include within them instances of such examples. Thus I am offering this example as
perhaps a "first non-trivial instance" of the kind of insights that are involved in proving results like Sato--Tate, FLT, and so on.
I'm a little surprised no one has cited Thurston's impact on low-dimensional topology and geometry. I'm far from an expert, so I'm reluctant to say much about this. But I have the
impression that Thurston revolutionized the whole enterprise by taking known results and expressing them from a completely new perspective that led naturally to both new theorems and a lot of new conjectures. Perhaps Thurston himself or someone else could say something, preferably in a separate answer so I can delete mine.
Gromov's use of J-holomorphic curves in symplectic topology (he reinterpreted holomorphic functions in the sense of Vekua) as well as the invention of Floer homology (in order to
deal with the Arnol'd conjecture).
Donaldson's idea of using global analysis to get more insight about the topology of manifolds. Nowadays it is clear to us that (non-linear) moduli spaces give something new, and more than
linear (abelian) Hodge theory, for example, but I think at that time this was really new.
3 I fully agree with this. I was a graduate student at Harvard, when Atiyah came and described Donaldson's thesis (Donaldson got his Ph.D. the same year as me). Before that, we all thought we were trying to understand Yang-Mills, because it connected geometric analysis to physics and not because we thought it would prove topological theorems. As I recall it, Atiyah said that when Donaldson first proposed what he wanted to do, Atiyah was skeptical and tried to convince Donaldson to work on something less risky. – Deane Yang Dec 11 '10 at 5:10
Gromov's proof that finitely generated groups with polynomial growth are virtually nilpotent. The ingenious step is to consider a scaling limit of the usual metric on the Cayley
graph of the finitely generated group.
Of course the details are messy and to get the final conclusion one has to rely on a lot of deep results on the structure of topological groups. However, already the initial idea is breathtaking.
Quillen's construction of the cotangent complex used homotopical algebra to find the correct higher-categorical object without explicitly building a higher category. This may sound
newfangled and modern, but if you read Grothendieck's book on the cotangent complex, his explicit higher-categorical construction was only able to build a cotangent complex that had its (co)
homology truncated to degree 2. Strangely enough, by the time Grothendieck's book was published, it was already obsolete, as he notes in the preface (he says something about how new work of Quillen (and independently André) had made his construction (which is substantially more complicated) essentially obsolete).
1 Which book of Grothendieck's are you referring to? Do you, perhaps, mean Illusie's book? – Dylan Wilson Jun 27 '13 at 17:19
Topological methods in combinatorics (started by Lovasz' proof of the Kneser conjecture, I guess).
I don't know how good an example this is. The Lefschetz fixed point theorem tells you that you can count (appropriately weighted) fixed points of a continuous function $f : X \to X$ from a
compact triangulable space to itself by looking at the traces of the induced action of $f$ on cohomology. This is a powerful tool (for example it more-or-less has the Poincare-Hopf theorem
as a special case).
Weil noticed that the number of points of a variety $V$ over $\mathbb{F}_{q^n}$ is the number of fixed points of the $n^{th}$ power of the Frobenius map $f$ acting on the points of $V$ over $\overline{\mathbb{F}_q}$ and, consequently, that it might be possible to describe the local zeta function of $V$ if one could write down the induced action of $f$ on some cohomology theory for varieties over finite fields. This led to the Weil conjectures, the discovery of $\ell$-adic cohomology, etc. I think this is a pretty good candidate for a powerful but unexpected
Emil Artin's solution of Hilbert's 17th problem which asked whether every positive polynomial in any number of variables is a sum of squares of rational functions.
Artin's proof goes roughly as follows. If $p \in \mathbb R[x_1,\dots,x_n]$ is not a sum of squares of rational functions, then there is some real-algebraically closed extension $L$ of the field of rational functions in which $p$ is negative with respect to some total ordering (compatible with the field operations), i.e. there exists an $L$-point of $\mathbb R[x_1,\dots,x_n]$ at which $p$ is negative. However, using a model theoretic argument, since $\mathbb R$ is also a real-closed field with a total ordering, there also has to be a real point such that $p<0$, i.e. there exists $x \in \mathbb R^n$ such that $p(x)< 0$. Hence, if $p$ is everywhere positive, then it is a sum of squares of rational functions.
The ingenious part is the use of a model theoretic argument and the bravery to consider a totally ordered real-algebraically closed extension of the field of rational functions.
1 I'd generalize this answer to include the observation that transfinite induction (or the axiom of choice) can simplify proofs of statements that don't actually require them. This is similar to how probabilistic arguments can sometimes be simpler than constructions. Here's an example statement for which all three kinds of proof exist: there exists a set $A \subseteq [0,1]^2$ that is dense everywhere on the unit square $[0,1]^2$, but for every $x$, $A$ contains only finitely many points of the form $(x, y)$ or $(y, x)$. – Zsbán Ambrus Dec 19 '10 at 16:57
Heegner's solution to the Gauss class number 1 problem for imaginary quadratic fields, by noting that when the class number is 1 then a certain elliptic curve is defined over Q and certain
modular functions take integer values at certain quadratic irrationalities, and then finding all the solutions to Diophantine equations that result, seems to me equally beautiful and
unexpected. Maybe its unexpectedness kept people from believing it for a long time.
Sometimes mathematics is not only about the methods of the proof, it is about the statement of the proof. E.g., it is hard to imagine a theorem-searching algorithm ever finding a proof of the results in Shannon's 1948 Mathematical Theory of Communication, without that algorithm first "imagining" (by some unspecified process) that there could BE a theory of communication.
Even so celebrated a mathematician as J. L. Doob at first had trouble grasping that Shannon's reasoning was mathematical in nature, writing in his AMS review (MR0026286):
[Shannon's] discussion is suggestive throughout, rather than mathematical, and it is not always clear that the author's mathematical intentions are honorable.
The decision of which mathematical intentions are to be accepted as "honorable" (in Doob's phrase) is perhaps very difficult to formalize.
[added reference]
One finds this same idea expressed in von Neumann's 1948 essay The Mathematician:
Some of the best inspirations of modern mathematics (I believe, the best ones) clearly originated in the natural sciences. ... As any mathematical discipline travels far from its empirical source, or still more, if it is a second or third generation only indirectly inspired by ideas coming from "reality", it is beset by very grave dangers. It becomes more and more purely aestheticizing, more and more l'art pour l'art. ... Whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.
One encounters this theme of inspiration from reality over-and-over in von Neumann's own work. How could a computer conceive theorems in game theory ... without having empirically played
games? How could a computer conceive the theory of shock waves ... without having empirically encountered the intimate union of dynamics and thermodynamics that makes shock wave theory
possible? How could a computer conceive theorems relating to computational complexity ... without having empirically grappled with complex computations?
The point is straight from Wittgenstein and E. O. Wilson: in order to conceive mathematical theorems that are interesting to humans, a computer would have to live a life similar to an
ordinary human life, as a source of inspiration.
And how about Perelman's proof of Poincare's conjecture?
6 It would be a better example if the proof were easier to understand ... – gowers Dec 9 '10 at 16:38
17 I think there are at least two aspects of the Perelman-Hamilton theory that fit the bill. One is Hamilton's original realisation that Ricci flow could be used to at least partially resolve the Poincare conjecture (in the case of 3-manifolds that admit a metric with positive Ricci curvature). There was some precedent for using PDE flow methods to attack geometric problems, but I think this was the first serious attempt to attack the manifestly topological Poincare conjecture in that fashion, and was somewhat contrary to the conventional wisdom towards Poincare at the time. [cont.] – Terry Tao Dec 9 '10 at 17:30
21 The other example is when Perelman needed a monotone quantity in order to analyse singularities of the Ricci flow. Here he had this amazing idea to interpret the parabolic Ricci flow as an infinite-dimensional limit of the elliptic Einstein equation, so that monotone quantities from the elliptic theory (specifically, the Bishop-Gromov inequality) could be transported to the parabolic setting. This is a profoundly different perspective on Ricci flow (though there was some precedent in the earlier work of Chow) and it seems unlikely that this quantity would have been discovered otherwise. – Terry Tao Dec 9 '10 at 17:32
14 Terry's answer illustrates a principle relevant to this question: even if a proof as a whole is too complex to count as a good example, there are quite likely to be steps of the proof
that are excellent examples. – gowers Dec 9 '10 at 20:54
Shigefumi Mori's proof of Hartshorne's conjecture (the projective spaces are the only smooth projective varieties with ample tangent bundles). In his proof, Mori developed many new
techniques (e.g. the bend-and-break lemma), which later became fundamental in birational geometry.
Lobachevsky and Bolyai certainly introduced a fundamentally new way of thinking, though I'm not sure it fits the criterion of being a proof of something - perhaps a proof that a lot
of effort had been wasted in trying to prove the parallel postulate.
Morse theory is another good example. Indeed it is the inspiration for Floer theory, which has already been mentioned.
Atiyah-Bott's paper "Yang-Mills equations on a Riemann surface" and Hitchin's paper "Self-duality equations on a Riemann surface" both contain rather striking applications of Morse theory.
The former paper contains for example many computations about cohomology rings of moduli spaces of holomorphic vector bundles over Riemann surfaces; the latter paper proves for instance
that moduli spaces of Higgs bundles over Riemann surfaces are hyperkähler.
Note that these moduli spaces are algebraic varieties and can be (and are) studied purely from the viewpoint of algebraic geometry. But if we look at things from an analytic point of view,
and we realize these moduli spaces as quotients of infinite dimensional spaces by infinite dimensional groups, and we use the tools of analysis and Morse theory, as well as ideas from
physics(!!!), then we can discover perhaps more about these spaces than if we viewed them just algebraically, as simply being algebraic varieties.
The use of ideals in rings, rather than elements (in terms of factorization, etc...).
This was followed by another revolutionary idea: using radical (Jacobson radical, etc...) instead of simple properties on elements.
I find Shannon's use of random codes to understand channel capacity very striking. It seems to be very difficult to explicitly construct a code which achieves the channel capacity - but
picking one at random works very well, provided one chooses the right underlying measure. Furthermore, this technique works very well for many related problems. I don't know the details of
your Example 4 (Erdos and Ramsey numbers), but I expect this is probably closely related.
{"url":"http://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking/73973","timestamp":"2014-04-20T05:58:07Z","content_type":null,"content_length":"205862","record_id":"<urn:uuid:7e225087-8efc-4282-b716-2d87a63428c5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
GRE Quantitative Comparison
In a company of 140 employees, there are three departments, A, B and C. The average number of safety errors committed by each employee in department A in 2005 was 10, and the average number of safety
errors committed by each employee in department B was 12.25. Was the average number of safety errors committed by each employee in the company greater than 11?
(1) The 45 employees in department C each committed an average of 13 safety errors in 2005. (2) There are 44 employees in department B. | {"url":"http://gre.practice-tests.learnhub.com/tests/gre-quantitative-comparison","timestamp":"2014-04-18T15:39:13Z","content_type":null,"content_length":"57823","record_id":"<urn:uuid:018f5f3f-2f38-4db4-9609-ad4fbfca9b83>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
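For reference, one way to work the problem above (a solution sketch appended here; the standard data-sufficiency reading of the two statements is assumed): Using both statements, department C has 45 employees and department B has 44, so department A has 140 - 45 - 44 = 51. The total number of errors is 51(10) + 44(12.25) + 45(13) = 510 + 539 + 585 = 1634, and 1634/140 is about 11.67, which is greater than 11, so the answer is yes. Statement (1) alone is not sufficient: the remaining 95 employees split between A and B in an unknown way, so the company average can range from (95(10) + 585)/140, about 10.96, up to (95(12.25) + 585)/140, about 12.49, falling on either side of 11. Statement (2) alone says nothing about department C, so it is not sufficient either. Both statements together are therefore required.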
3.10 Random Numbers
A deterministic computer program cannot generate true random numbers. For most purposes, pseudo-random numbers suffice. A series of pseudo-random numbers is generated in a deterministic fashion. The
numbers are not truly random, but they have certain properties that mimic a random series. For example, all possible values occur equally often in a pseudo-random series.
Pseudo-random numbers are generated from a “seed”. Starting from any given seed, the random function always generates the same sequence of numbers. By default, Emacs initializes the random seed at
startup, in such a way that the sequence of values of random (with overwhelming likelihood) differs in each Emacs run.
Sometimes you want the random number sequence to be repeatable. For example, when debugging a program whose behavior depends on the random number sequence, it is helpful to get the same behavior in
each program run. To make the sequence repeat, execute (random ""). This sets the seed to a constant value for your particular Emacs executable (though it may differ for other Emacs builds). You can
use other strings to choose various seed values.
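For example (an illustrative sketch, not an excerpt from the manual itself): after evaluating (random "my-seed"), a fixed series of calls such as (random 100) returns the same integers in every run of a given Emacs build, whereas evaluating (random t) chooses a new seed from the time of day and process ID, so the values differ from run to run.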
— Function: random &optional limit
This function returns a pseudo-random integer. Repeated calls return a series of pseudo-random integers.
If limit is a positive integer, the value is chosen to be nonnegative and less than limit. Otherwise, the value might be any integer representable in Lisp, i.e., an integer between
most-negative-fixnum and most-positive-fixnum (see Integer Basics).
If limit is t, it means to choose a new seed based on the current time of day and on Emacs's process ID number.
If limit is a string, it means to choose a new seed based on the string's contents. | {"url":"https://www.gnu.org/software/emacs/manual/html_node/elisp/Random-Numbers.html","timestamp":"2014-04-16T14:26:59Z","content_type":null,"content_length":"4814","record_id":"<urn:uuid:e66cf9dc-b1a3-4d46-a6cf-a05be79b1ef3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Power series for arctan(x/3)???
April 12th 2009, 08:42 PM #1
How does one go about finding the power series for this? I know that the derivative of arctan(x) = 1/(1+x^2), so I think I can find the power series for arctan(x), which would just be integrating each term of the power series for 1/(1+x^2), but that (x/3) with the "/3" is throwing me off. How do I do it? Thanks.
$f(x)=\tan^{-1}\left( \frac{x}{3}\right)$
Now we can take a derivative to get
$\frac{df}{dx}=\frac{1}{3}\cdot\frac{1}{1+\left( \frac{x}{3}\right)^2}=\frac{1}{3}\sum_{n=0}^{\infty}\left( -\left(\frac{x}{3}\right)^{2}\right)^{n}=\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{3^{2n+1}}$
Now we can integrate to find f (the constant of integration is 0, since $\tan^{-1}(0)=0$)
$f(x)=\int f'(x)\,dx=\int \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n}}{3^{2n+1}}\,dx=\sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{3^{2n+1}(2n+1)}$
Last edited by TheEmptySet; April 13th 2009 at 07:51 AM. Reason: I made a mistake
Thanks, but what happened to (x/3)^2? When you put it into the series it became (-x/3)^n, but why isn't it [(-x^2)/9]^n since (-x/3) was squared?
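To spell out the algebra behind this follow-up (a short check added here, with the first terms worked out): the form $\left(\frac{-x^2}{9}\right)^{n}$ is the right one, since $\left(\frac{-x^2}{9}\right)^{n}=(-1)^n\left(\frac{x}{3}\right)^{2n}$, whereas $\left(-\frac{x}{3}\right)^{2n}=\left(\frac{x}{3}\right)^{2n}$ silently drops the alternating sign; that is the mistake the edited post above corrects. Written out, $\tan^{-1}\left(\frac{x}{3}\right)=\frac{x}{3}-\frac{x^3}{81}+\frac{x^5}{1215}-\cdots$, valid for $|x| \le 3$.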
{"url":"http://mathhelpforum.com/calculus/83446-power-series-arctan-x-3-a.html","timestamp":"2014-04-19T21:55:39Z","content_type":null,"content_length":"41808","record_id":"<urn:uuid:12d7a979-3664-4ca8-a4f4-426cdfa63f3d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
zeta function
February 27th 2010, 11:50 AM
why is this correct?
If $\zeta(s)= \Sigma ^\infty _{n=1} {1 \over {n^s}}$ then $\zeta(s)= \Sigma ^\infty _{N=1} {\tau(N) \over {N^s}}$ where $\tau(N)$ the number of divisors of N...
$\zeta(s)\zeta(s-1) = \Sigma {\sigma(N) \over N^s}$ where $\sigma(N)$ is the sum of the divisors of N....
February 27th 2010, 01:56 PM
the LHS of the first identity is wrong. it should be $\zeta^2(s).$ (Nod)
February 27th 2010, 02:11 PM
ok, for the integers $k \geq 0, \ n \geq 1,$ let $f_k(n)=\sum_{d \mid n} d^k.$ then, assuming that $\zeta(s-k)$ is defined we have: $\sum_{n=1}^{\infty} \frac{f_k(n)}{n^s}=\sum_{n=1}^{\infty} \sum_{d \mid n} \frac{d^k}{n^s}=\sum_{d=1}^{\infty} d^k \sum_{m=1}^{\infty} \frac{1}{(md)^s}=\sum_{d=1}^{\infty} \frac{1}{d^{s-k}} \sum_{m=1}^{\infty} \frac{1}{m^s}=\zeta(s-k) \zeta(s).$
now use the fact that $f_0(n)=\tau(n)$ and $f_1(n)=\sigma(n)$: with $k=0$ this gives $\sum_{n=1}^{\infty} \frac{\tau(n)}{n^s}=\zeta^2(s),$ and with $k=1$ it gives $\sum_{n=1}^{\infty} \frac{\sigma(n)}{n^s}=\zeta(s)\zeta(s-1),$ as claimed. | {"url":"http://mathhelpforum.com/number-theory/131049-zeta-function-print.html","timestamp":"2014-04-17T07:13:42Z","content_type":null,"content_length":"8417","record_id":"<urn:uuid:cebe5c6d-9dba-4cda-a9aa-c068f7d7d0ce>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
The k-ε turbulence model is one of several two-equation models that have been developed over the years. It is probably the most widely and thoroughly tested of them all (Nallasamy, 1987). Based on simple dimensional arguments concerning the relationship between the size and the energetics of individual eddies in fully developed, isotropic turbulence, the model employs the following diagnostic equation for the turbulent viscosity (Launder and Spalding, 1974).
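(The displayed equation is missing from this copy of the page; the standard Launder-Spalding form, assumed here from the symbols defined just below, is:)

$$\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}$$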
where $C_\mu$ is a dimensionless model constant, $\rho$ is the local fluid density, and $k$ and $\varepsilon$ are the specific turbulent kinetic energy (SI units: m^2/s^2) and turbulent kinetic energy dissipation rate (SI units: m^2/s^3), respectively. These quantities are in turn computed using a pair of auxiliary transport equations of the form
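(These equations are likewise missing here; the generic forms below are a sketch of the standard k-ε transport equations, written to match the symbols defined next, with $\mathbf{u}$ the mean velocity and $G$ the turbulent production rate. The exact STORM formulation may differ.)

$$\frac{\partial (\rho k)}{\partial t} + \nabla \cdot (\rho \mathbf{u}\, k) = \nabla \cdot \left(\frac{\mu_t}{\mathrm{Pr}_k}\,\nabla k\right) + G - \rho \varepsilon + S_{k,p}$$

$$\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla \cdot (\rho \mathbf{u}\, \varepsilon) = \nabla \cdot \left(\frac{\mu_t}{\mathrm{Pr}_\varepsilon}\,\nabla \varepsilon\right) + \frac{\varepsilon}{k}\left(C_1 G - C_2\, \rho \varepsilon\right) + S_{\varepsilon,p}$$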
where $C_1$ and $C_2$ are additional dimensionless model constants; $\mathrm{Pr}_k$ and $\mathrm{Pr}_\varepsilon$ are the turbulent Prandtl numbers for kinetic energy and dissipation, respectively; $S_{k,p}$ and $S_{\varepsilon,p}$ are source terms for the kinetic energy and turbulent dissipation; and the turbulent production rate is (in the usual form, supplied here as an assumption because the original equation did not survive) $G = \mu_t \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) \frac{\partial u_i}{\partial x_j}$. | {"url":"http://www.adaptive-research.com/storm_page/storm_turb.htm","timestamp":"2014-04-20T08:15:44Z","content_type":null,"content_length":"11816","record_id":"<urn:uuid:6410b3c6-8e81-4f3a-ae15-1c0fe9d4a1b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Excel Functions: AVERAGE, MEDIAN, MODE
When we created a Box Plot recently, one of the measures was the MEDIAN.
• For an odd set of numbers, the MEDIAN is the number in the middle of the set.
• For an even set of numbers, the MEDIAN is the average of the two numbers in the middle.
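For instance (a made-up illustration): the MEDIAN of 1, 2, 3, 4, 5 is 3, and the MEDIAN of 1, 2, 3, 4 is the average of 2 and 3, which is 2.5.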
MEDIAN is one of the central tendency functions, along with AVERAGE and MODE.
AVERAGE Function
If, like me, you’re not a statistician, you probably use the AVERAGE function, more often than you use MEDIAN or MODE.
• The AVERAGE is the SUM of the numbers, divided by the COUNT of the numbers
MODE Function
Do you ever use the MODE function? I used it in my statistics class at university, but not much since then, and we won’t talk about how long ago that was!
• The MODE function returns the most frequently occurring number in the set
• If there aren’t any duplicate numbers, the result is an #N/A error
• If there is a tie, the most frequent number that occurs first is the result
Comparing AVERAGE, MEDIAN and MODE
Below, you can see a couple of very simple examples of measuring a small set of numbers.
In the first example, the numbers are symmetrically distributed, as you can see in the COUNT chart.
The second chart shows that the AVERAGE, MEDIAN and MODE are the same.
In the next example, the numbers are NOT symmetrically distributed, as you can see in the COUNT chart.
The second chart shows that the AVERAGE, MEDIAN and MODE are different.
Use the Interactive Workbook
If you would like to play with the sample workbook, you can change the numbers in the interactive Excel workbook, shown below. There are two worksheets with number sets – one is symmetrically
distributed, and the other is not.
To get the score counts, I used the FREQUENCY function.
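A quick note on the formulas, added for readers building their own sheet (the cell ranges below are hypothetical, not taken from the sample workbook): the three measures are one-liners, e.g. =AVERAGE(B2:B21), =MEDIAN(B2:B21) and =MODE(B2:B21). FREQUENCY takes the data and a column of bin values, e.g. =FREQUENCY(B2:B21, D2:D6), and in the Excel versions current when this post was written it is entered as an array formula by confirming with Ctrl+Shift+Enter.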
I'm looking for some excel help - and I've come across your page. I couldn't really google this problem so I'm hoping I can explain it and you might be able to help?
I have a total number of items (lets say, 600 dollars). I then have a weekly number subtracted from that total, that is manually entered by the user (user spent 50 dollars that week). after 3 weeks
of this, I then take an average of those numbers, 4 weeks, new average, 5 weeks, new average - and so on. I then add/subtract a set amount to that average. this becomes a "high", "average", "low". so
if the output of the average is "50", the numbers calculated would be 65.00,50.00,45.00(high, average, low). I then use those numbers to forecast ahead So that I can answer a question like "in 3
weeks from now where will we be, how close will we be to having used up our original amount of money?" This gives me then a "high" "normal" "Low" number to forecast with. so I can say things like
"well, if more than average is spent, we'll be out of money in week 5, if the average amount is spent, week 7, and if we spend lower than what we have been, week 9".
The difficulty for me is this: I need that forecast to always be moving forward - only showing a forecast, never the past weeks forecasted amount. so in week 3 - it forecasts from week 4 - 7, in week
4, it forecasts 5 - 8, and so on. It needs to be a "rolling forecast". It needs to be always moving forward because this is used in a graph where the past figures are a bar chart, and this is all
done on a secondary axis.
This essentially a burndown chart with a "cone of uncertainty" attached to it. I have it working but the forecasting part I'm asking about is highly manual - I'm trying to formula this if possible.
Here's a link to a visual example where this is entirely done with a line graph. Mine is more complicated in the data it gathers(different kinds of spending is also shown by using a filled bar
chart), but the idea and intent is the exact same.
http://alistair.cockburn.us/get/1851 - as you can see from the image, the high/average/low is always moving forward and is separate from "actual"
can anyone help or point me in a certain direction? | {"url":"http://blog.contextures.com/archives/2013/06/13/excel-functions-average-median-mode/","timestamp":"2014-04-18T05:55:56Z","content_type":null,"content_length":"95988","record_id":"<urn:uuid:09a588ab-e1d2-4ac2-9177-d537702b646f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kindergarten to 8th Grade math videos by topics
Math videos for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, 5th grade, 6th grade, 7th grade and 8th grade to practice math skills in: addition, geometry, number theory, subtraction, ratios, algebra, multiplication, division, number lines, relations, telling time, money, comparing, decimals, percentages, polynomials, linear equations, graphs, expressions, coordinates and more.
Addition videos
Division videos
Division with Fraction Remainders
Practice Division with Fraction Remainders with this video on MathFox.
Decimals, Fractions and Percentage videos
Adding Fractions – Common Denominator
Practice Adding Fractions Common Denominator with this video on MathFox.
Mixed Numbers to Improper Fractions
Practice Mixed Numbers to Improper Fractions with this video on MathFox.
Fractions, Decimals – Mixed Numbers Line
Practice Fractions Decimals Mixed Numbers Line with this video on MathFox.
Adding Fractions with Unlike Denominators
Practice Adding Fractions with Unlike Denominators with this video on MathFox.
Adding Mixed Numbers with Unlike Denominators
Practice Adding Mixed Numbers with Unlike Denominators with this video on MathFox.
Dividing Decimals by Whole Numbers
Practice Dividing Decimals by Whole Numbers with this video on MathFox.
Division with Decimal Remainders
Practice Division with Decimal Remainders with this video on MathFox.
Subtracting Fractions with Unlike Denominators
Practice Subtracting Fractions with Unlike Denominators with this video on MathFox.
Subtracting Mixed Numbers with Unlike Denominators
Practice Subtracting Mixed Numbers with Unlike Denominators with this video on MathFox.
Terminating and Repeating Decimals
Practice Terminating and Repeating Decimals with this video on MathFox.
Finding the Percent of a Number
Practice Finding the Percent of a Number with this video on MathFox.
Finding Wholes from Percentages
Practice Finding Wholes from Percentages with this video on MathFox.
Fraction to Decimal using Division
Practice Fraction to Decimal using Division with this video on MathFox.
Proportions using equivalent fractions
Practice Solving proportions equivalent fractions with this video on MathFox.
Algebra videos
Correct Order of Operations PEMDAS
Practice Correct Order of Operations PEMDAS with this video on MathFox.
Multiplying / Dividing integers using rules
Practice Multiplying Dividing integers using rules with this video on MathFox.
Order of Operations in Equation
Practice Order of Operations in Equation with this video on MathFox.
Solving proportions using cross products
Practice Solving proportions using cross products with this video on MathFox.
Subtracting integers using rules
Practice Subtracting integers using rules with this video on MathFox.
Subtracting integers using T- chart
Practice Subtracting integers using T- chart with this video on MathFox.
Graph Linear Equations 3 Points
Practice Graph Linear Equations 3 Points with this video on MathFox.
Graph Linear Equations 3 Points Lesson 2
Practice Graph Linear Equations 3 Points Lesson 2 with this video on MathFox.
Scientific Notation – Negative Exponents
Practice Scientific Notation with Negative Exponents with this video on MathFox.
Scientific Notation – Positive Exponents
Practice Scientific Notation Positive Exponents with this video on MathFox.
Solving Equations Using Addition
Practice Solving Equations Using Addition with this video on MathFox.
Solving Equations Using Division
Practice Solving Equations Using Division with this video on MathFox.
Solving Equations Using Multiplication
Practice Solving Equations Using Multiplication with this video on MathFox.
Solving Equations Using Subtraction
Practice Solving Equations Using Subtraction with this video on MathFox.
Solving Expressions with Substitution
Practice Solving Expressions with Substitution with this video on MathFox.
Geometry videos
Congruent / not Congruent Figures
Practice Congruent or not Congruent Figures with this video on MathFox.
Perimeter of Irregular Polygons
Practice Perimeter of Irregular Polygons with this video on MathFox.
Finding Measures of Missing Angles
Practice Finding Measures of Missing Angles with this video on MathFox.
Pythagorean Theory Solving for a Side
Practice Pythagorean Theory Solving for a Side with this video on MathFox.
Graphs, Data and measurements videos
Mixed operations videos
Identity Property- Multiplication, Division
Practice Identity Property of Multiplication & Division with this video on MathFox.
Basic Concept of Multiplication
Practice Basic Concept of Multiplication with this video on MathFox.
Associative Property Addition Multiplication
Practice Associative Property Addition Multiplication with this video on MathFox.
Commutative Property Addition Multiplication
Practice Commutative Property Addition Multiplication with this video on MathFox.
Zero Property of Multiplication
Practice Zero Property of Multiplication with this video on MathFox.
Multiplying Double Digit Numbers
Practice Multiplying Double Digit Numbers with this video on MathFox.
Number Theory videos
Ordering Numbers
Practice Ordering Numbers with this video on MathFox.
Representing Relationship with Equation
Practice Representing Relationship with Equation with this video on MathFox.
Representing Relationships with inequalities
Practice Representing Relationships with inequalities with this video on MathFox.
Standard to Exponent Forms of Numbers
Practice Standard to Exponent Forms of Numbers with this video on MathFox.
Pre-Algebra videos
Multiplication / Division Vocabulary
Practice Multiplication & Division Vocabulary with this video on MathFox.
Patterns and Spatial Sense videos
Time videos
Unit Cost videos | {"url":"http://www.mathfox.com/math-videos/","timestamp":"2014-04-17T19:24:04Z","content_type":null,"content_length":"162709","record_id":"<urn:uuid:89b71964-f7aa-4923-bde9-d463fbde8d6e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Dirichlet's theorem; boiling down proofs
Wayne Aitken waitken at csusm.edu
Wed Aug 22 15:35:39 EDT 2007
Jacques Carette wrote:
> My undergraduate degree (in pure mathematics, completed a mere 17 years
> ago) covered the last 5 topics in decent depth, and half of the first
> topic, but no projective geometry. And in fact, the proofs were based
> on exactly the tools you mention
Also a talented, ambitious undergraduate in a good program can and should
take graduate courses, read on his/her own, and learn informally from
faculty and graduate students. So I will agree that all 19th century
mathematics, and a large amount of 20th century mathematics is within
reach of such an undergraduate.
Furthermore, as Jacques Carette mentions, undergraduates can (and should)
get a decent introduction to several areas of 19th century mathematics.
For example, introductory galois theory is a staple of undergraduate
algebra, Körner's Fourier analysis is written for undergraduates, and
several undergraduate level geometry texts introduce projective geometry.
Realistically, however, outside of core areas such as analysis and
algebra, an undergraduate curriculum has room for only a one semester
course in any given area. My point is that there are 19th century results
whose proof would be inappropriate for such a course. For example, it
would not be a good idea to give a full proof of the Kronecker-Weber
theorem in an introductory one semester undergraduate course in algebraic
number theory. The results of the Italian school of algebraic geometry
give even stronger examples. Given the large amount of 19th century
research in projective geometry, invariant theory, elliptic and abelian
integrals, and Fourier analysis, I suspect that there are results in these
areas whose proofs would similarly be out of place in a one semester
introductory undergraduate course even at a top 25 program.
Now, Dirichlet's theorem is different. Its proof could be covered in an
introductory analytic number theory course. After all, it is the start of
an extremely fruitful tradition (culminating in the Langlands program).
--- Wayne Aitken
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2007-August/011872.html","timestamp":"2014-04-21T03:07:42Z","content_type":null,"content_length":"4545","record_id":"<urn:uuid:01c840b8-1aa0-4127-af04-accfb4b6e0cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Legal Theory Lexicon: The Prisoners' Dilemma
One of the most useful tools in analyzing legal rules and the policy problems to which they apply is game theory. The basic idea of game theory is simple. Many human interactions can be modeled as
games. To use game theory, we build a simple model of a real-world situation as a game. Thus, we might model civil litigation as a game played by plaintiffs against defendants. Or we might model the
confirmation of federal judges by the Senate as a game played by Democrats and Republicans. This week's installment of the Legal Theory Lexicon discusses one important example of game theory, the
prisoner's dilemma. This introduction is very basic--aimed at a first year law student with an interest in legal theory.
An Example
Ben and Alice have been arrested for robbing Fort Knox and placed in separate cells. The police make the following offer to each of them. "You may choose to confess or remain silent. If you confess
and your accomplice remains silent I will drop all charges against you and use your testimony to ensure that your accomplice gets a heavy sentence. Likewise, if your accomplice confesses while you
remain silent, he or she will go free while you get the heavy sentence. If you both confess I get two convictions, but I'll see to it that you both get light sentences. If you both remain silent,
I'll have to settle for token sentences on firearms possession charges. If you wish to confess, you must leave a note with the jailer before my return tomorrow morning." This is illustrated by Table
One. Ben's moves are read horizontally; Alice's moves read vertically. Each numbered pair (e.g. 5, 0) represents the payoffs for the two players. Alice's payoff is the first number in the pair, and
Ben's payoff is the second number. Larger numbers represent more utility (a better payoff); so 5 is best, then 3, then 1, then 0 (the worst).
Table One: Example of the Prisoner's Dilemma.
Suppose that you are Ben. You might reason as follows. If Alice confesses, then I have two choices. If I confess, I get a light sentence (to which we assign a numerical value of 1). If Alice
confesses and I do not confess, then I get the heavy sentence and a payoff of 0. So if Alice confesses, I should confess (1 is better than 0). If Alice does not confess, I again have two choices. If
I confess, then I get off completely and a payoff of 5. If I do not confess, we both get light sentences and a payoff of 3. So if Alice does not confess, I should confess (because 5 is better than
3). So, no matter what Alice does, I should confess.
Alice will reason the same way, and so both Ben and Alice will confess. In other words, one move in the game (confess) dominates the other move (do not confess) for both players. But both Ben and
Alice would be better off if neither confessed. That is, the dominant move (confess) will yield a lower payoff to Ben and Alice (1, 1) than would the alternative move (do not confess), which yields
(3, 3). By acting rationally and confessing, both Ben and Alice are worse off than they would be if they both had acted irrationally.
The Real World
The prisoner's dilemma is not just a theoretical model. Here is an example from Judge Frank Easterbrook's opinion in United States v. Herrera, 70 F.3d 444 (7th Cir. 1995):
Cynthia LaBoy Herrera survived a nightmare. She and her husband Geraldo Herrera were arrested after a drug transaction. The couple, separated by the agents, then played and lost a game of
Prisoner's Dilemma. See Page v. United States, 884 F.2d 300 (7th Cir.1989); Douglas G. Baird, Robert H. Gertner & Randal C. Picker, Game Theory and the Law 312-13 (1994). Cynthia told agents who
their suppliers were. Learning of this, Geraldo talked too. When both were out on bond, Geraldo decided that Cynthia should pay for initiating the revelations. Geraldo clobbered Cynthia on the
back of her head with a hammer; while she tried to defend herself, Geraldo declared that she talked too much to the DEA. As Cynthia grappled with the hand holding the hammer, Geraldo used his
free hand to punch her in the face. Geraldo got the other hand free and hit Cynthia repeatedly with the hammer; she lapsed into unconsciousness.
Communication and Bargains
How can we overcome a prisoner's dilemma? You have probably noticed that the prisoner's dilemma assumed that the two prisoner's were isolated from each other. This was not an accident. If the two
prisoner's can communicate with each other, then they might reach an agreement. Alice might say to Ben, "I won't confess if you won't," and Ben might say, "I agree." Of course, this might not solve
the prisoner's dilemma. Why not? Suppose they do agree not to confess, but each is then taken to a separate room and given a confession to sign. Ben might reason as follows, "If I keep the bargain,
and Alice does not, then she will get off while I get a heavy sentence." So Ben may be tempted to defect from their agreement. And Alice may reason in exactly the same way. On the other hand, it may
be that Ben and Alice have a reason to trust one another. For example, they may have had prior dealings in which each proved trustworthy to the other. Of course, trust can be established in another
way. If each party can make a credible threat of retaliation against the other, then those threats may change the payoff structure in such a way as to make the cooperative strategy dominant. One
situation in which the threat of retaliation is built into the model is the iterative (repeated) prisoner's dilemma.
Iterated Game
As described above, the prisoner's dilemma is a one-shot game. But in the real world, many prisoner's dilemmas involve repeated plays. You can imagine a series of moves, for example:
Round One--Alice Confesses, Ben Does Not Confess
Round Two--Alice Confesses, Ben Confesses
Round Three--Alice Does Not Confess, Ben Does Not Confess
We can imagine various strategies of play for Ben and Alice. One of the most important strategies is called tit for tat. Alice might say to herself, "If Ben Confesses, then I will retaliate and
confess, but if Ben does not confess, then neither will I." Add one more element to this strategy. Suppose both Ben and Alice say to themselves, on the first round of play, I will cooperate and not
confess. Then we would get the following pattern:
Round One--Alice Does Not Confess, Ben Does Not Confess
Round Two--Alice Does Not Confess, Ben Does Not Confess
Round Three--Alice Does Not Confess, Ben Does Not Confess
Thus, if both Ben and Alice play tit for tat, the result might be a stable pattern of cooperation, which benefits both Ben and Alice.
If you want to get a really good feel for the iterative prisoner's dilemma, go to this website, where you can actually try out various strategies.
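To make the iterated game concrete, here is a minimal simulation sketch in Python. The payoff numbers come straight from Table One; the function names, the ten-round default, and the two sample strategies are illustrative inventions for this sketch, not anything taken from the Lexicon or from a particular game-theory library:

# Payoffs from Table One: "C" = remain silent (cooperate), "D" = confess (defect).
# Keys are (my_move, their_move); values are my payoff.
PAYOFFS = {
    ("C", "C"): 3,  # both remain silent: token sentences
    ("C", "D"): 0,  # I stay silent, my accomplice confesses: heavy sentence
    ("D", "C"): 5,  # I confess, my accomplice stays silent: I go free
    ("D", "D"): 1,  # both confess: light sentences
}

def tit_for_tat(opponent_history):
    """Stay silent in round one, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_confess(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play the iterated game; each strategy sees only the other's past moves."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (30, 30): stable mutual silence
print(play(tit_for_tat, always_confess))  # (9, 14): one betrayal, then retaliation

Two tit-for-tat players lock into cooperation and each collect the best sustainable payoff, while tit for tat against an unconditional confessor loses only the opening round before retaliating in every later round; that built-in, credible threat of retaliation is exactly what can change the payoff structure in cooperation's favor, as discussed above.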
One more twist. Suppose that this game is finite, i.e. it has a fixed number of moves, e.g. ten. How will Ben and Alice play in the "end game"? Ben might reason as follows. If I defect and confess on
the tenth move, Alice cannot retaliate on the eleventh move (because there is no eleventh round of play). And Alice might reason the same way, leading both Ben and Alice to confess in the final round
of play. But now Ben might think, since it is rational for both of us to defect in the tenth round, I need to rethink my strategy in the ninth round. Since I know that Alice will confess anyway in
the tenth round, I might as well confess in the ninth round. But once again, Alice might reason in exactly this same way. Before we know it, both Alice and Ben have decided to defect in the very
first round.
This has been a very basic introduction to the prisoner's dilemma, but I hope that it has been sufficient to get the basic concept across. As a first year law student, you are likely to run into the
prisoner's dilemma sooner or later. If you have an interest in this kind of approach to legal theory, I've provided some references to much more sophisticated accounts. Happy modeling!
(Last revised on April 7, 2013.) | {"url":"http://lsolum.typepad.com/legaltheory/2013/04/legal-theory-lexicon-the-prisoners-dilemma.html","timestamp":"2014-04-18T18:36:52Z","content_type":null,"content_length":"51971","record_id":"<urn:uuid:7cc2a92d-d239-4f1c-b693-dab09c149f30>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
Designing graph drawings by layout graph grammars
Results 1 - 10 of 16
- ACM Transactions on Computer-Human Interaction , 2006
"... In a graphical user interface, physical layout and abstract structure are two important aspects of a graph. This article proposes a new graph grammar formalism which integrates both the spatial
and structural specification mechanisms in a single framework. This formalism is equipped with a parser th ..."
Cited by 19 (9 self)
In a graphical user interface, physical layout and abstract structure are two important aspects of a graph. This article proposes a new graph grammar formalism which integrates both the spatial and
structural specification mechanisms in a single framework. This formalism is equipped with a parser that performs in polynomial time with an improved parsing complexity over its nonspatial
predecessor, that is, the Reserved Graph Grammar. With the extended expressive power, the formalism is suitable for many user interface applications. The article presents its application in adaptive
Web design and presentation.
- Constraints , 1998
"... Abstract. Graphs are widely used for information visualization purposes, since they provide a natural and intuitive representation of complex abstract structures. The automatic generation of
drawings of graphs has applications a variety of fields such as software engineering, database systems, and g ..."
Cited by 15 (0 self)
Abstract. Graphs are widely used for information visualization purposes, since they provide a natural and intuitive representation of complex abstract structures. The automatic generation of drawings
of graphs has applications in a variety of fields such as software engineering, database systems, and graphical user interfaces. In this paper, we survey algorithmic techniques for graph drawing that
support the expression and satisfaction of user-defined constraints. 1.
- Lecture Notes in Computer Science , 1997
"... INTRODUCTION Graph drawing addresses the problem of constructing geometric representations of graphs, and has important applications to key computer technologies such as software engineering,
database systems, visual interfaces, and computer-aided-design. Research on graph drawing has been conducte ..."
Cited by 14 (3 self)
INTRODUCTION Graph drawing addresses the problem of constructing geometric representations of graphs, and has important applications to key computer technologies such as software engineering,
database systems, visual interfaces, and computer-aided-design. Research on graph drawing has been conducted within several diverse areas, including discrete mathematics (topological graph theory,
geometric graph theory, order theory), algorithmics (graph algorithms, data structures, computational geometry, vlsi), and human-computer interaction (visual languages, graphical user interfaces,
software visualization). This chapter overviews aspects of graph drawing that are especially relevant to computational geometry. Basic definitions on drawings and their properties are given in
Section 1.1. Bounds on geometric and topological properties of drawings (e.g., area and crossings) are presented in Section 1.2. Section 1.3 deals with the time complexity of fundamental graph drawin
- In Proceedings of ACM Workshop on Effective Abstractions in Multimedia, Layout and Interaction , 1996
"... this paper, we describe the constraintbased layout framework LayLab that has been developed in the context of the IntelliMedia presentation and design system WIP [ Andr'e et al., 1993; Wahlster
Cited by 10 (2 self)
this paper, we describe the constraint-based layout framework LayLab that has been developed in the context of the IntelliMedia presentation and design system WIP [André et al., 1993; Wahlster
- In Neural Computers, 393–406, ECKMILLER , 1998
"... The paper presents self-organizing graphs, a novel approach to graph layout based on a competitive learning algorithm. This method is an extension of selforganization strategies known from
unsupervised neural networks, namely from Kohonen's self-organizing map. Its main advantage is that it is very ..."
Cited by 9 (0 self)
The paper presents self-organizing graphs, a novel approach to graph layout based on a competitive learning algorithm. This method is an extension of self-organization strategies known from
unsupervised neural networks, namely from Kohonen's self-organizing map. Its main advantage is that it is very flexibly adaptable to arbitrary types of visualization spaces, for it is explicitly
parameterized by a metric model of the layout space. Yet the method consumes comparatively little computational resources and does not need any heavy-duty preprocessing. Unlike with other stochastic
layout algorithms, not even the costly repeated evaluation of an objective function is required. To our knowledge this is the first connectionist approach to graph layout. The paper presents
applications to 2D-layout as well as to 3D-layout and to layout in arbitrary metric spaces, such as networks on spherical surfaces. 1 Introduction Automatic layout techniques are a crucial component
for any application which...
- In Graph Drawing (GD'96), Berkeley/CA , 1997
"... This paper and the accompanying demo describe a strategy and a software architecture for integrating several declarative approaches. This architecture allows for the interactive specification of
local criteria for each vertex and edge. The Gold ..."
Cited by 7 (1 self)
This paper and the accompanying demo describe a strategy and a software architecture for integrating several declarative approaches. This architecture allows for the interactive specification of
local criteria for each vertex and edge. The Gold
- Graph Drawing. Volume LNCS 3843 , 2005
"... In this paper we present an algorithm for drawing an undirected graph G that takes advantage of the structure of the modular decomposition tree of G. Specifically, our algorithm works by
traversing the modular decomposition tree of the input graph G on n vertices and m edges in a bottom-up fashion u ..."
Cited by 7 (1 self)
In this paper we present an algorithm for drawing an undirected graph G that takes advantage of the structure of the modular decomposition tree of G. Specifically, our algorithm works by traversing
the modular decomposition tree of the input graph G on n vertices and m edges in a bottom-up fashion until it reaches the root of the tree, while at the same time intermediate drawings are computed.
In order to achieve aesthetically pleasing results, we use grid and circular placement techniques, and utilize an appropriate modification of a well-known spring embedder algorithm. It turns out,
that for some classes of graphs, our algorithm runs in O(n + m) time, while in general, the running time is bounded in terms of the processing time of the spring embedder algorithm. The result is a
drawing that reveals the structure of the graph G and preserves certain aesthetic criteria.
, 1996
"... . We introduce graph automata as devices for the recognition of linear graph languages. A graph automaton is the canonical extension of a finite state automaton recognizing a set of connected
labeled graphs. It consists of a finite state control and a collection of heads, which search the input ..."
Cited by 4 (1 self)
We introduce graph automata as devices for the recognition of linear graph languages. A graph automaton is the canonical extension of a finite state automaton recognizing a set of connected labeled
graphs. It consists of a finite state control and a collection of heads, which search the input graph. In a move the graph automaton reads a new subgraph, checks some consistency conditions, changes
states and moves some of its heads beyond the read subgraph. It proceeds such that the set of currently visited edges is an edge-separator between the visited and the yet undiscovered part of the
input graph. Hence, the graph automaton realizes a graph searching strategy. Our main result states that finite graph automata recognize exactly the set of graph languages generated by connected
linear NCE graph grammars. 1 Introduction The theory of graph languages is based on generative devices, i.e., on graph grammars. A graph grammar consists of a finite set of productions, which are
- Theor. Comp. Sci , 1996
"... The visualization of conceptual structures is a key component of support tools for complex applications in science and engineering. Foremost among the visual representations used are drawings of
graphs and ordered sets. In this talk, we survey recent advances in the theory and practice of graph d ..."
Cited by 4 (0 self)
The visualization of conceptual structures is a key component of support tools for complex applications in science and engineering. Foremost among the visual representations used are drawings of
graphs and ordered sets. In this talk, we survey recent advances in the theory and practice of graph drawing. Specific topics include bounds and tradeoffs for drawing properties, three-dimensional
representations, methods for constraint satisfaction, and experimental studies. 1 Introduction In this paper, we survey selected research trends in graph drawing, and overview some recent results of
the author and his collaborators. Graph drawing addresses the problem of constructing geometric representations of graphs, a key component of support tools for complex applications in science and
engineering. Graph drawing is a young research field that has grown very rapidly in the last decade. One of its distinctive characteristics is to have furthered collaborative efforts between
computer scien... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2058631","timestamp":"2014-04-17T05:23:16Z","content_type":null,"content_length":"36557","record_id":"<urn:uuid:6048c083-bbb6-4b3b-ad22-3fbb739bf589>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Renting versus Buying a Home
Subject: Real Estate - Renting versus Buying a Home
Last-Revised: 21 Nov 1995
Contributed-By: Jeff Mincy (mincy at rcn.com), Chris Lott (contact me)
This note will explain one way to compare the monetary costs of renting vs. buying a home. It is strongly oriented towards the US system, taking into account tax code issues.
1. Abstract
• If you are guaranteed an appreciation rate that is a few points above inflation, buy.
• If the monthly costs of buying are basically the same as renting, buy.
• The shorter the term, the more advantageous it is to rent.
• Tax consequences in the US are fairly minor in the long term.
2. Introduction
The three important factors that affect the analysis the most are the following:
1. Relative cash flows; e.g., rent compared to monthly ownership expenses
2. Length of term
3. Rate of appreciation
The approach used here is to determine the present value of the money you will pay over the term for the home. In the case of buying, the appreciation rate and thereby the future value of the home are
estimated. For home appreciation rates, find something like the tables published by Case-Shiller that show changes in house prices for your region. The real estate section in your local newspaper may
print it periodically. This analysis neglects utility costs because they can easily be the same whether you rent or buy. However, adding them to the analysis is simple; treat them the same as the
costs for insurance in both cases.
Opportunity costs of buying are effectively captured by the present value. For example, pretend that you are able to buy a house without having to have a mortgage. Now the question is, is it better
to buy the house with your hoard of cash or is it better to invest the cash and continue to rent? To answer this question you have to have estimates for rental costs and house costs (see below), and
you have a projected growth rate for the cash investment and projected growth rate for the house. If you project a 4% growth rate for the house and a 15% growth rate for the cash then holding the
cash would be a much better investment.
First the analysis for renting a home is presented, then the analysis for buying. Examples of analyses over a long term and a short term are given for both scenarios.
3. Renting a Home.
Step 1: Gather data
You will need:
□ monthly rent
□ renter's insurance (usually inexpensive)
□ term (period of time over which you will rent)
□ estimated inflation rate to compute present value (historically 4.5%)
□ estimated annual rate of increase in the rent (can use inflation rate)
Step 2: Compute present value of payments
You will compute the present value of the cash stream that you will pay over the term, which is the cost of renting over that term. This analysis assumes that there are no tax consequences
(benefits) associated with paying rent.
3.1 A long-term example of renting
Rent = 990 / month
Insurance = 10 / month
Term = 30 years
Rent increases = 4.5% annually
Inflation = 4.5% annually
For this cash stream, present value = 348,137.17.
3.2 A short-term example of renting
Same numbers, but just 2 years.
Present value = 23,502.38
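For readers who want to check or adapt these figures, here is a minimal C sketch in the spirit of the small C programs mentioned in the Resources section below (it is not the actual code used for this article). The compounding conventions are inferred from the numbers above, namely that the rent is stepped up once a year while payments are discounted monthly, so treat them as assumptions; with them the program reproduces the 2-year figure almost exactly and the 30-year figure to within about 0.1%.

    #include <math.h>
    #include <stdio.h>

    /* Present value of a monthly payment stream over `years` years.
       `monthly` is the starting payment (rent plus insurance); it is
       raised by the annual rate `raise` at each anniversary, and every
       payment is discounted at the annual inflation rate `infl`,
       compounded monthly. */
    double pv_rent(double monthly, double raise, double infl, int years)
    {
        double pv = 0.0;
        for (int m = 0; m < 12 * years; m++) {
            double pay = monthly * pow(1.0 + raise, m / 12); /* annual step-up */
            pv += pay / pow(1.0 + infl / 12.0, m);           /* discount to today */
        }
        return pv;
    }

    int main(void)
    {
        printf("PV of renting, 30 years: %.2f\n", pv_rent(1000.0, 0.045, 0.045, 30));
        printf("PV of renting,  2 years: %.2f\n", pv_rent(1000.0, 0.045, 0.045, 2));
        return 0;
    }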
4. Buying a Home
Step 1: Gather data.
You need a lot to do a fairly thorough analysis:
□ purchase price
□ down payment and closing costs
□ other regular expenses, such as condominium fees
□ amount of mortgage
□ mortgage rate
□ mortgage term
□ mortgage payments (this is tricky for a variable-rate mortgage; the fixed-rate formula is sketched right after these lists)
□ property taxes
□ homeowner's insurance (Note: this analysis neglects extraordinary risks such as earthquakes or floods that may cause the homeowner to incur a large loss due to a high deductible in your
policy. All of you people in California know what I'm talking about.)
□ your marginal tax bracket (at what rate is the last dollar taxed)
□ the current standard deduction which the IRS allows
Other values have to be estimated, and they affect the analysis critically:
□ continuing maintenance costs (I estimate 1/2 of PP over 30 years.)
□ estimated inflation rate to compute present value (historically 4.5%)
□ rate of increase of property taxes, maintenance costs, etc. (infl. rate)
□ appreciation rate of the home (THE most important number of all)
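The fixed monthly mortgage payment can be computed with the standard amortization formula, payment = P*r / (1 - (1+r)^-n), where r is the monthly rate and n the number of payments. A small C sketch (the function and variable names are mine, not from the original programs):

    #include <math.h>
    #include <stdio.h>

    /* Level monthly payment on `principal` at annual rate `rate`
       (e.g. 0.08 for 8%) amortized over `years` years. */
    double payment(double principal, double rate, int years)
    {
        double r = rate / 12.0;   /* monthly rate */
        double n = 12.0 * years;  /* number of payments */
        return principal * r / (1.0 - pow(1.0 + r, -n));
    }

    int main(void)
    {
        /* 140,000 at 8% over 30 years -> 1027.27, as in the examples below */
        printf("monthly payment: %.2f\n", payment(140000.0, 0.08, 30));
        return 0;
    }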
Step 2: Compute the monthly expense
This includes the mortgage payment, fees, property tax, insurance, and maintenance. The mortgage payment is fixed, but you have to figure inflation into the rest. Then compute the present value
of the cash stream.
Step 3: Compute your tax savings
This is different in every case, but roughly you multiply your tax bracket times the amount by which your interest plus other deductible expenses (e.g., property tax, state income tax) exceeds
your standard deduction. No fair using the whole amount because everyone, even a renter, gets the standard deduction for free. Must be summed over the term because the standard deduction will
increase annually, as will your expenses. Note that late in the mortgage your interest payments will be well below the standard deduction. I compute savings of about 5% for the 33% tax bracket.
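In formula form, the per-year computation just described is roughly:

    Tax-savings(year) = bracket * max(0, deductible-expenses(year) - standard-deduction(year))

summed over the term and then discounted to present value.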
Step 4: Compute the present value
First you compute the future value of the home based on the purchase price, the estimated appreciation rate, and the term. Once you have the future value, compute the present value of that sum
based on the inflation rate you estimated earlier and the term you used to compute the future value. If appreciation is greater than inflation, you win. Else you break even or even lose.
Actually, the math of this step can be simplified as follows:
    prs-value = cur-value * ((periods + appr_rate/100) / (periods + infl_rate/100)) ^ (periods * yrs)
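The same shortcut in C, as a sketch. Setting periods = 12 (monthly compounding) is an inference rather than something stated here, but with it the function reproduces the Step 4 figures in the examples below, e.g. roughly 226,960 for the first long-term example:

    #include <math.h>
    #include <stdio.h>

    /* Present value, after `yrs` years, of an asset worth `cur` today,
       appreciating at appr_pct percent per year against infl_pct percent
       inflation, with `periods` compounding periods per year. */
    double prs_value(double cur, double appr_pct, double infl_pct,
                     int periods, int yrs)
    {
        double ratio = (periods + appr_pct / 100.0)
                     / (periods + infl_pct / 100.0);
        return cur * pow(ratio, (double)periods * yrs);
    }

    int main(void)
    {
        /* 145,000 home, 6% appreciation, 4.5% inflation, 30 years */
        printf("PV of home: %.2f\n", prs_value(145000.0, 6.0, 4.5, 12, 30));
        return 0;
    }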
Step 5: Compute final cost
All numbers must be in present value.
Final-cost = Down-payment + S2 (expenses) - S3 (tax sav) - S4 (prop value)
4.1 Long-term example Nr. 1 of buying: 6% appreciation
Step 1 - the data
□ Purchase price = 145,000
□ Down payment etc = 10,000
□ Mortgage amount = 140,000
□ Mortgage rate = 8.00%
□ Mortgage term = 30 years
□ Mortgage payment = 1027.27 / mo
□ Property taxes = about 1% of valuation; I'll use 1200/yr = 100/mo (which increases same as inflation, we'll say. This number is ridiculously low for some areas, but hey, it's just an example.)
□ Homeowner's ins. = 50 / mo
□ Condo. fees etc. = 0
□ Tax bracket = 33%
□ Standard ded. = 5600 (Needs to be updated)
□ Maintenance = 1/2 PP is 72,500, or 201/mo; I'll use 200/mo
□ Inflation rate = 4.5% annually
□ Prop. taxes incr = 4.5% annually
□ Home appreciates = 6% annually (the NUMBER ONE critical factor)
Step 2 - the monthly expense
The monthly expense, both fixed and changing components:
Fixed component is the mortgage at 1027.27 monthly: present value = 203,503.48.
Changing component is the rest at 350.00 monthly: present value = 121,848.01.
Total from Step 2: 325,351.49.
Step 3 - the tax savings
I use my loan program to compute this. Based on the data given above, I compute the savings: Present value = 14,686.22. Not much when compared to the other numbers.
Step 4 - the future and present value of the home
See data above. Future value = 873,273.41 and present value = 226,959.96 (which is larger than 145k since appreciation is larger than inflation). Before you compute present value, you should
subtract reasonable closing costs for the sale; for example, a real estate broker's fee.
Step 5 - the final analysis for 6% appreciation
Final = 10,000 + 325,351.49 - 14,686.22 - 226,959.96
= 93,705.31
So over the 30 years, assuming that you sell the house in the 30th year for the estimated future value, the present value of your total cost is 93k. (You're 93k in the hole after 30 years, which
means you only paid 260.23/month.)
4.2 Long-term example Nr. 2 of buying: 7% appreciation
All numbers are the same as in the previous example, however the home appreciates 7%/year.
Step 4 now comes out FV=1,176,892.13 and PV=305,869.15
Final = 10,000 + 325,351.49 - 14,686.22 - 305,869.15
= 14796.12
So in this example, 7% was an approximate break-even point in the absolute sense; i.e., you lived for 30 years at near zero cost in today's dollars.
4.3 Long-term example Nr. 3 of buying: 8% appreciation
All numbers are the same as in the previous example, however the home appreciates 8%/year.
Step 4 now comes out FV=1,585,680.80 and PV=412,111.55
Final = 10,000 + 325,351.49 - 14,686.22 - 412,111.55
= -91,446.28
The negative number means you lived in the home for 30 years and left it in the 30th year with a profit; i.e., you were paid to live there.
4.4 Long-term example Nr. 4 of buying: 2% appreciation
All numbers are the same as in the previous example, however the home appreciates 2%/year.
Step 4 now comes out FV=264,075.30 and PV=68,632.02
Final = 10,000 + 325,351.49 - 14,686.22 - 68,632.02
= 252,033.25
In this case of poor appreciation, home ownership cost 252k in today's money, or about 700/month. If you could have rented for that, you'd be even.
4.5 Short-term example Nr. 1 of buying: 6% appreciation
All numbers are the same as long-term example Nr. 1, but you sell the home after 2 years. Future home value in 2 years is 163,438.17
Cost = down+cc + all-pymts - tax-savgs - pv(fut-home-value - remaining debt)
= 10,000 + 31,849.52 - 4,156.81 - pv(163,438.17 - 137,563.91)
= 10,000 + 31,849.52 - 4,156.81 - 23,651.27
= 14,041.44
4.6 Short-term example Nr. 2 of buying: 2% appreciation
All numbers are the same as long-term example Nr. 4, but you sell the home after 2 years. Future home value in 2 years is 150,912.54
Cost = down+cc + all-pymts - tax-savgs - pv(fut-home-value - remaining debt)
= 10,000 + 31,849.52 - 4,156.81 - pv(150912.54 - 137,563.91)
= 10,000 + 31,849.52 - 4,156.81 - 12,201.78
= 25,490.93
5. A Question
Q: Is it true that you can usually rent for less than buying?
Answer 1: It depends. It isn't a binary state. It is a fairly complex set of relationships.
In large metropolitan areas, where real estate is generally much more expensive than elsewhere, then it is usually better to rent, unless you get a good appreciation rate or if you are going to own
for a long period of time. It depends on what you can rent and what you can buy. In other areas, where real estate is relatively cheap, I would say it is probably better to own.
On the other hand, if you are currently at a market peak and the country is about to go into a recession it is better to rent and let property values and rent fall. If you are currently at the bottom
of the market and the economy is getting better then it is better to own.
Answer 2: When you rent from somebody, you are paying that person to assume the risk of homeownership. Landlords are renting out property with the long term goal of making money. They aren't renting
out property because they want to do their renters any special favors. This suggests to me that it is generally better to own.
6. Conclusion
Once again, the three important factors that affect the analysis the most are cash flows, term, and appreciation. If the relative cash flows are basically the same, then the other two factors affect
the analysis the most.
The longer you hold the house, the less appreciation you need to beat renting. This relationship always holds; however, the scale changes. For shorter holding periods you also face a risk of market
downturn. If there is a substantial risk of a market downturn you shouldn't buy a house unless you are willing to hold the house for a long period.
If you have a nice cheap rent controlled apartment, then you should probably not buy.
There are other variables that affect the analysis, for example, the inflation rate. If the inflation rate increases, the rental scenario tends to get much worse, while the ownership scenario tends
to look better.
7. Resources
Here are some resources to help you run your own analyses.
• The New York Times offers an interactive feature "Is it Better to Buy or Rent" where you can enter monthly rent, home price, down payment, mortgage rate, annual property taxes, and annual home
price change. The graph updates itself to show how the numbers work over time.
• For those who prefer command-line programs over GUIs, a few small C programs for computing future value, present value, and loan amortization schedules (used to write this article) are available.
See the article "Software - Investment-Related Programs" elsewhere in this FAQ for information about obtaining them.
Proof by pointing
Results 1 - 10 of 33
- Journal of Automated Reasoning, Special , 2006
"... Abstract. Matita is a new, document-centric, tactic-based interactive theorem prover. This paper focuses on some of the distinctive features of the user interaction with Matita, mostly
characterized by the organization of the library as a searchable knowledge base, the emphasis on a high-quality not ..."
Cited by 47 (15 self)
Abstract. Matita is a new, document-centric, tactic-based interactive theorem prover. This paper focuses on some of the distinctive features of the user interaction with Matita, mostly characterized
by the organization of the library as a searchable knowledge base, the emphasis on a high-quality notational rendering, and the complex interplay between syntax, presentation, and semantics.
- JOURNAL OF SYMBOLIC COMPUTATION , 1995
"... In this paper, we present the results of an ongoing effort in building user interfaces for proof systems. Our approach is generic: we are not constructiong a user interface for a particular
proof system, rather we have developed techniques and tools... ..."
Cited by 28 (8 self)
In this paper, we present the results of an ongoing effort in building user interfaces for proof systems. Our approach is generic: we are not constructing a user interface for a particular proof
system, rather we have developed techniques and tools...
- CIENCIAS EXACTAS, FÍSICAS Y NATURALES, SERIE A: MATEMÁTICAS, 98(1), 2004. SPECIAL ISSUE ON SYMBOLIC COMPUTATION IN LOGIC AND ARTIFICIAL INTELLIGENCE , 2004
"... Frameworks for interactive theorem proving give the user explicit control over the construction of proofs based on meta languages that contain dedicated control structures for describing proof
construction. Such languages are not easy to master and thus contribute to the already long list of skill ..."
Cited by 22 (8 self)
Frameworks for interactive theorem proving give the user explicit control over the construction of proofs based on meta languages that contain dedicated control structures for describing proof
construction. Such languages are not easy to master and thus contribute to the already long list of skills required by prospective users of interactive theorem provers. Most users, however, only need
a convenient formalism that allows to introduce new rules with minimal overhead. On the the other hand, rules of calculi have not only purely logical content, but contain restrictions on the expected
context of rule applications and heuristic information. We suggest a new and minimalist concept for implementing interactive theorem provers called taclet. Their usage can be mastered in a matter of
hours, and they are efficiently compiled into the GUI of a prover. We implemented the KeY system, an interactive theorem prover for the full JAVA CARD language based on taclets.
- Journal of Symbolic Computation , 1995
"... In this paper the interaction between users and the interactive theorem prover HOL is investigated from a human-computer interaction perspective. First, we outline three possible views of
interaction, and give a brief survey of some current interfaces and how they may be described in terms of the ..."
Cited by 12 (3 self)
In this paper the interaction between users and the interactive theorem prover HOL is investigated from a human-computer interaction perspective. First, we outline three possible views of
interaction, and give a brief survey of some current interfaces and how they may be described in terms of these views. Second, we describe and present the results of an empirical study of
intermediate and expert HOL users. The results are analysed for evidence in support of the proposed view of proof activity in HOL. We believe that this approach provides a principled basis for the
assessment and design of interfaces to theorem provers.
- Mathematical Knowledge Management MKM 2005, LNAI 3863 , 2006
"... Abstract. Recently, significant advances have been made in formalised mathematical texts for large, demanding proofs. But although such large developments are possible, they still take an
inordinate amount of effort and time, and there is a significant gap between the resulting formalised machine-ch ..."
Cited by 10 (3 self)
Abstract. Recently, significant advances have been made in formalised mathematical texts for large, demanding proofs. But although such large developments are possible, they still take an inordinate
amount of effort and time, and there is a significant gap between the resulting formalised machine-checkable proof scripts and the corresponding human-readable mathematical texts. We present an
authoring system for formal proof which addresses these concerns. It is based on a central document format which, in the tradition of literate programming, allows one to extract either a formal proof
script or a human-readable document; the two may have differing structure and detail levels, but are developed together in a synchronised way. Additionally, we introduce ways to assist production of
the central document, by allowing tools to contribute backflow to update and extend it. Our authoring system builds on the new PG Kit architecture for Proof General, bringing the extra advantage that
it works in a uniform interface, generically across various interactive theorem provers. 1
- Fakultät für Informatik, Universität Karlsruhe, 2000b. URL http://www.key-project.org/doc/2000/stsr.ps.gz , 2000
"... . This paper presents a framework to make interactive proving over abstract data types (rst order logic plus induction) more comfortable. A language of schematic rules is introduced, yielding
the ability to write, to use, and even to verify these rules for any abstract data type and its theory. ..."
Cited by 8 (2 self)
This paper presents a framework to make interactive proving over abstract data types (first order logic plus induction) more comfortable. A language of schematic rules is introduced, yielding the
ability to write, to use, and even to verify these rules for any abstract data type and its theory. The language allows to express the functionality of a rule easily and clearly. Nearly all potential
rule applications are coupled with the occurrence of certain terms or formulas. One can prove with these rules simply by mouse clicks on these terms and formulas. The rule language is expressive
enough to describe even complex induction rules. Nevertheless, the correctness of a rule can be verified within the same theory without use of explicit higher order logic or of a translation to some
kind of meta level. So, in each state of a proof, new rules can be introduced, whenever required, and proven. 1 Introduction An abstract data type can have a rich signature and a complex theory. ...
, 1997
"... A formalisation of pi-calculus in the Coq system is presented. Based on a de Bruijn notation for names, our... ..."
- IN: PROCEEDINGS OF PROOF TRANSFORMATION AND PRESENTATION AND PROOF COMPLEXITIES (PTP’01 , 2001
"... PCOQ is the latest product in a decade-long effort to produce graphical user-interfaces for proof systems. It inherits many characteristics from the previous CTCOQ system... ..."
Cited by 7 (1 self)
PCOQ is the latest product in a decade-long effort to produce graphical user-interfaces for proof systems. It inherits many characteristics from the previous CTCOQ system...
- FORMAL ASPECTS OF COMPUTING , 1998
"... We present issues that arose in the design of the CtCoq user-interface for proof development. Covered issues include multi-processing, data display, mouse interaction, and script management.
Cited by 7 (1 self)
We present issues that arose in the design of the CtCoq user-interface for proof development. Covered issues include multi-processing, data display, mouse interaction, and script management.
- In Proc. AMAST, LNCS 936 , 1995
"... . We address here the problem of automatically translating the Natural Semantics of programming languages to Coq, in order to prove formally general properties of languages. Natural Semantics
[18] is a formalism for specifying semantics of programming languages inspired by Plotkin's Structural Opera ..."
Cited by 7 (0 self)
We address here the problem of automatically translating the Natural Semantics of programming languages to Coq, in order to prove formally general properties of languages. Natural Semantics [18] is
a formalism for specifying semantics of programming languages inspired by Plotkin's Structural Operational Semantics [22]. The Coq proof development system [12], based on the Calculus of
Constructions extended with inductive types (CCind), provides mechanized support including tactics for building goal-directed proofs. Our representation of a language in Coq is influenced by the
encoding of logics used by Church [6] and in the Edinburgh Logical Framework (ELF) [15, 3]. 1 Introduction The motivation for our work is the need for an environment to help develop proofs in Natural
Semantics. The interactive programming environment generator Centaur [17] allows us to compile a Natural Semantics specification of a given language into executable code (type-checkers, evaluators,
compilers, program t... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=99246","timestamp":"2014-04-21T01:05:04Z","content_type":null,"content_length":"36663","record_id":"<urn:uuid:230b8c08-5f35-4f60-9b59-0e250123a51f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maurer-Cartan form
I suppose given a Lie Group (G) and its corresponding Lie Algebra (g) every element in its dual defines a Maurer-Cartan form on the whole Lie Group?
Let $\omega \in g^*$ be a Maurer-Cartan form and let $X$ and $Y$ be two elements of $g$; then in what sense are $\omega(X)$ and $\omega(Y)$ "constant functions" on $G$? (such that we can write $X\omega(Y)=Y\omega(X)=0$)
Assuming the above one can immediately write the Maurer-Cartan equation, $d\omega(X,Y)= -\frac{1}{2}\omega([X,Y])$
Thinking of $\omega$ as a Lie-algebra-valued 1-form on the Lie group and using the fact from linear algebra that $V^* \otimes W = Hom(V,W)$, one can write it as $\omega = \sum_i \omega_i \otimes B_i$ where the $\omega_i$ are ordinary 1-forms on $G$ and the $B_i$ are a basis of $g$. (Should there be some arbitrary coefficients in front of every term in the above sum?)
Say $c^i_{jk}$ are the structure constants of the Lie Algebra then I do not understand how the Maurer-Cartan equations can be recast as,
$$d\omega_i = -\frac{1}{2}\sum_{j,k} c^i_{jk} \omega_j \wedge \omega_k$$
which apparently can be further recast as the equation,
$$d\omega = -\frac{1}{2} [\omega,\omega]$$
I would be happy if someone can explain how the above two forms of the Maurer-Cartan equation can be obtained knowing the first form which is more familiar form to me.
Also finding the structure constants of a Lie Algebra is not so hard for at least the common ones. Knowing that one fully "knows" the Maurer-Cartan Equation. Now is there any sense in which one can
"solve" this to find out the Maurer-Cartan forms? (I would guess a basis might be obtainable)
lie-groups lie-algebras dg.differential-geometry
I did not understand what the Maurer-Cartan form was until I worked it out for a matrix group where it is always just the one for GL(n) restricted to the group. After that the abstract version is a
lot easier to deal with. – Deane Yang Aug 3 '10 at 22:57
This was one of the things I was looking for. To understand if there is any sense in which once can compute an explicit solution to the Maurer-Cartan equation. I haven't seen anything like that
even hinted towards in any book. If you could enlighten about that. – Anirbit Aug 5 '10 at 15:20
2 Answers
Note: As explained below, there is a clash of nomenclature between what Morita calls a Maurer--Cartan form and what Cartan introduced (which is described in the wikipedia page, say).
First of all there are two Maurer-Cartan forms: left-invariant and right-invariant. They are one-forms with values in the Lie algebra. If we identify the Lie algebra (=left-invariant vector
fields) with the tangent space at the identity, then the left-invariant MC form $\omega$ is such that acting on a vector field $\xi$ on $G$ gives for all $g \in G$, $$\omega(\xi)_g = (L_g)_*^{-1} \xi_g,$$ where $L_g$ means left multiplication by $g\in G$. There is also a right-invariant one-form defined similarly but using right multiplication.
Now suppose that $\xi$ is a left-invariant vector field on $G$. This means that $$\xi_g = (L_g)_* \xi_e,$$ where $\xi_e$ is the value of $\xi$ at the identity $e\in G$. In that case, $$\omega(\xi)_g = (L_g)_*^{-1} (L_g)_* \xi_e = \xi_e,$$ which is constant, since it does not depend on $g$.
Now, as you point out, if $X$ and $Y$ are left-invariant vector fields, then it is immediate that $\omega$ satisfies the structure equation: $$d\omega(X,Y) = -\omega([X,Y]).$$
Now choose a basis $(e_i)$ for the Lie algebra, so that we can write $\omega = \sum_i \omega^i e_i$, where the $\omega^i$ are one-forms on $G$. Notice that $\omega(e_i)=e_i$, whence $\omega^j(e_i) = \delta^j_i$.
Applying the structure equation to $X=e_i$ and $Y=e_j$ you see that, on the one hand, $$d\omega(e_i,e_j)=-\omega([e_i,e_j]) = - [e_i,e_j] = - f_{ij}{}^k e_k,$$ whence $$d\omega^k(e_i,e_j) =
f_{ij}{}^k.$$ But this is precisely the result of applying $$-\tfrac12 \sum_{i,j} f_{ij}{}^k \omega^i \wedge\omega^j$$ on $e_i$ and $e_j$, hence the identity $$d\omega^k = -\tfrac12 \sum_
{i,j} f_{ij}{}^k \omega^i \wedge\omega^j.$$
To write down explicitly the Maurer-Cartan forms, it is not hard. You have to compute the derivative of $L_g$ in your chosen coordinates. It is particularly easy if the group $G$ is a matrix group, in which case you have $\omega_g = g^{-1}dg$ and again you have to compute this in your favourite coordinates for $G$.
I just realised that I forgot to answer the bit about the second form of the structure equation. That equation is usually confusing at first because the notation hides the fact that $[\omega,\omega]$ also involves the wedge product of one-forms. By definition, $[\omega,\omega]$ is the Lie-algebra valued 2-form on $G$ whose value on vector fields $X,Y$ is given by $$[\omega,\omega](X,Y) = [\omega(X),\omega(Y)] - [\omega(Y),\omega(X)] = 2 [\omega(X),\omega(Y)].$$ If you now take $X=e_i$ and $Y=e_j$, left-invariant vector fields, you see that $$-\tfrac12 [\omega,\omega](e_i,e_j) = -[e_i,e_j] = -\sum_k f_{ij}{}^k e_k,$$ agreeing again with $d\omega(e_i,e_j)$.
Further addition
This is in response to one of Anirbit's comments. In Morita's book Geometry of Differential Forms, he calls any left-invariant form on $G$ a Maurer--Cartan form. I don't think that this is
standard. For me, as in my answer above, the Maurer--Cartan form is Lie algebra valued. The two notions of Maurer--Cartan forms can of course be reconciled. Choose a basis $(e_i)$ for $\mathfrak{g}$ and a canonical dual basis $e^i$ for $\mathfrak{g}^*$. Let $\omega^i$ be the left-invariant one-form which agrees with $e^i$ at the identity. Then $\omega = \sum_i \omega^i e_i$
is what I have been calling the (left-invariant) Maurer--Cartan form.
While I'm at it, let me explain the nature of my factors of $2$, since that seems also to be in dispute. For me the wedge product is defined as follows $\alpha \wedge \beta := \alpha \otimes
\beta - \beta \otimes \alpha$, without a factor of $\frac12$.
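A quick sanity check of the matrix-group recipe $\omega_g = g^{-1}dg$ mentioned above (a standard worked example, included here purely for illustration): take $G = SO(2)$, parametrised by $g(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. A direct computation gives $$g^{-1}dg = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} d\theta,$$ a left-invariant one-form with values in $\mathfrak{so}(2)$. Since $\mathfrak{so}(2)$ is abelian, $d\omega = 0 = -\tfrac12[\omega,\omega]$, consistent with the structure equation.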
Hmm... perhaps "particularly easy" is not an accurate assessment of the situation, since this can be quite a lengthy calculation. In exponential coordinates there are a couple of tricks
one could use, but it is usually a pain, especially when the Lie algebra has a large semisimple part. For nilpotent and solvable Lie algebras it's not so bad. – José Figueroa-O'Farrill
Aug 3 '10 at 18:40
Thanks for this beautiful explicit explanation. There is one point that looks a bit unfamiliar to me. Could you clarify as to which calculation are you alluding to in your comment as a
"lengthy calculation" ? Do you mean the proof of the formula that $(L_g)_*^{−1}=g^{−1}dg$? I am new to this equation. – Anirbit Aug 5 '10 at 15:25
Thanks -- the lengthy calculation I referred to is simply the one one needs to do in order to write down explicitly what the MC forms look like in your chosen coordinate system. Once you
parametrise a group, in the sense that group elements are given in terms of some local coordinates by $g(x,y,...,z)$, say, then $g^{-1}dg$, which is the pullback to the coordinate space
of the MC forms, can be quite a complicated object. – José Figueroa-O'Farrill Aug 5 '10 at 16:11
I strongly recommend just working in $GL(n)$, where you have natural global co-ordinates. You get the same solutions as you would if you restrict to a subgroup. There is no point in using
local co-ordinates for the subgroup itself. That is, as José says, messy, complicated, and painful. – Deane Yang Aug 5 '10 at 19:30
1 I also recommend unwinding all the abstract notation (push-forwards, pull-backs, differential forms, etc.) and figuring out what everything is on $GL(n)$ in terms of plain old matrices. I
found this exercise to be extremely helpful when I struggled with this stuff. – Deane Yang Aug 5 '10 at 19:33
The Maurer-Cartan form for matrix groups is very well explained in Santalo's book about Integral Geometry.
Greenville, TX Math Tutor
Find a Greenville, TX Math Tutor
...I received an A in both semesters of chemistry from the University of Texas at Dallas and have become exceedingly familiar with its basic concepts, from valence electrons to the
Henderson–Hasselbalch equation in the semesters of biochemistry and organic chemistry that followed. I am confident in...
15 Subjects: including algebra 1, algebra 2, chemistry, physics
...I love algebra and have tutored over 30 hours of Algebra 2. Algebra 2 is a key foundation for college-level mathematics and science courses, particularly for students interested in science or
engineering majors. I used Algebra 2 concepts in many of my own undergraduate and graduate engineering courses.
15 Subjects: including algebra 1, algebra 2, grammar, geometry
...My focus is always on understanding concepts rather than simply knowing how to get the answers. I can help struggling students figure out the gaps in their skills and prevent future
difficulties. I am currently teaching high school geometry in Plano, but I can tutor evenings and Saturdays in Plano, Allen and McKinney areas.
10 Subjects: including SAT math, algebra 1, geometry, prealgebra
...I missed school so much I returned 2 years after graduation to complete a certificate in Human Resource Management. My husband is taking courses at Collin college and I have assisted him over
the past few years with all core college courses. I have also tutored middle school aged children in math and other subjects.
17 Subjects: including algebra 1, prealgebra, reading, English
...I have worked with a wide range of students who speak another language other than English for their first language. These students range in age from children to adult learners. Also, I have
helped individuals whose first language was Russian, Korean, Mandarin, French and Spanish.
25 Subjects: including algebra 1, ACT Math, ESL/ESOL, geometry
Related Greenville, TX Tutors
Greenville, TX Accounting Tutors
Greenville, TX ACT Tutors
Greenville, TX Algebra Tutors
Greenville, TX Algebra 2 Tutors
Greenville, TX Calculus Tutors
Greenville, TX Geometry Tutors
Greenville, TX Math Tutors
Greenville, TX Prealgebra Tutors
Greenville, TX Precalculus Tutors
Greenville, TX SAT Tutors
Greenville, TX SAT Math Tutors
Greenville, TX Science Tutors
Greenville, TX Statistics Tutors
Greenville, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/Greenville_TX_Math_tutors.php","timestamp":"2014-04-17T00:53:13Z","content_type":null,"content_length":"23876","record_id":"<urn:uuid:6951e69c-3fab-4344-ad4c-743a7ad9314f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
retrieving percentages
i am trying to get the percentage of votes for each candidate for my program.
votes is an array (with # of votes for each candidate, total is the total number of votes, index shows me how many candidates there are (so i only run the loop "i<index" times), and percents
is the initialized array.
Java Code:
percents = percentages(votes, total, index, percents);
this is my method to get each of the percents:
Java Code:
public static int[] percentages(int votes[], int total, int index, int percents[]) {
int i = 0;
for (i=0; i<index; i++) {
percents[i] = (votes[i]/total);
return percents;
after printing out the percents, i get 0 for each one.
Dividing an int by an int returns an int. I assume that votes[i]/total gets you a number between 0 and 1. If you store that as an int, it's going to truncate and store zero.
You probably want to use a double instead.
Integer division seems to be the culprit here. Since the calculated percent will be a number >=0 && < 1, the decimal will be truncated. To get around integer division you will have to cast
one, or both values to a double. You will also have to return a double array.
Java Code:
int i = 10;
int j = 15;
j / i == 1;
(double)j / i == 1.5;
(double)j / (double)i == 1.5;
edit: too slow :(
Integer division seems to be the culprit here. Since the calculated percent will be a number >=0 && < 1, the decimal will be truncated. To get around integer division you will have to cast
one, or both values to a double. You will also have to return a double array.
Java Code:
int i = 10;
int j = 15;
j / i == 1;
(double)j / i == 1.5;
(double)j / (double)i == 1.5;
edit: too slow :(
Also, if you're going to be checking for equality, I'd give this a read-through: What Every Computer Scientist Should Know About Floating-Point Arithmetic
That looks like a lengthy read. I'll be sure to be checking that out throughout the next few days.
When explaining stuff to people I usually use == to convey equality instead of =, I know = would do fine, but I figure, in case they copy/paste I already used the correct version.
Well, here's the gist:
Floating point arithmetic is rarely exact. While some numbers, such as 0.5, can be exactly represented as a binary (base 2) decimal (since 0.5 equals 2^-1), other numbers, such as 0.1, cannot
be. As a result, floating point operations may result in rounding errors, yielding a result that is close to -- but not equal to -- the result you might expect. For example, the simple
calculation below results in 2.600000000000001, rather than 2.6:
Java Code:
double s=0;
for (int i=0; i<26; i++)
s += 0.1;
Source: Java performance tuning tips
I'm not sure I follow. Single '=' is assignment. Double "==" checks for equality.
Sorry about the confusion, it just seems natural to use a single = when generally typing, but to programmers you use == so when I'm saying something equals another I just tend to stick to ==
always now instead. I'm just being confusing. Thanks for the summary of the thread. I hope the op finds all this helpful as well.
Sorry about the confusion, it just seems natural to use a single = when generally typing, but to programmers you use == so when I'm saying something equals another I just tend to stick to ==
always now instead. I'm just being confusing. Thanks for the summary of the thread. I hope the op finds all this helpful as well.
Better read that article; chances are higher that you'll understand that, say, the following doesn't do what you naively might expect it to do:
Java Code:
public class T {
    public static void main(String[] args) {
        for (double x = 0.1; x != 1; x += 0.1) // x never equals exactly 1.0
            System.out.println(x);
    }
}
here is the data file that i read:
Java Code:
Smith 80,000
Jones 100,000
Scott 75,000
Washington 110,000
Duffy 125,000
Jacobs 67,000
i get the total of all the numbers, then take each number, and divide by the total to get my percentage. i am getting 0 for some reason. i guess my array is empty?
A percentage will result in a number between 0 and 1. It's been explained that you should use a double instead of an int. Also don't forget that the division must involve a double value (a cast or a 1.0) to produce the expected decimal result.
Also the article posted is worth the read so your code does not have miscalculations.
so i updated my method to doubles, and here is what i have:
Java Code:
public static double[] percentages(int votes[], int total, int index, double percents[]) {
int i = 0;
for (i=0; i<index; i++) {
percents[i] = (votes[i]/total);
return percents;
i still get 0's though.
votes[ i ] and total are still int typed values so the division will be an int division. You did read your text book, did you?
My first post explains the problem, and shows you the change to make. Just in case, you should be casting to double.
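For reference, a minimal sketch of that fix inside the loop (using the same variable names as the posts above):

Java Code:
// cast one operand so the division happens in floating point
percents[i] = (double) votes[i] / total;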
ok. so i have a bunch of 0's after the decimal when i print out the values now. how do i get rid of these? can i use printf?
got it: System.out.printf("%,.0f", total); | {"url":"http://www.java-forums.org/new-java/42677-retrieving-percentages.html","timestamp":"2014-04-18T17:20:03Z","content_type":null,"content_length":"126322","record_id":"<urn:uuid:76295553-7963-4e07-a78c-f1385b971a02>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00075-ip-10-147-4-33.ec2.internal.warc.gz"} |
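For the percentages themselves, one option along the same lines (an illustrative sketch, not something posted in the thread) is to scale by 100 and drop the decimals in the format string:

Java Code:
// 0.1437 prints as "14%"
System.out.printf("%.0f%%%n", percents[i] * 100);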
Morphological analysis
General Morphological Analysis is a method developed by Fritz Zwicky (1967, 1969) for exploring all the possible solutions to a multi-dimensional, non-quantified problem complex.
As a problem-structuring and problem-solving technique, morphological analysis was designed for multi-dimensional, non-quantifiable problems where causal modeling and simulation do not function well
or at all. Zwicky developed this approach to address seemingly non-reducible complexity. Using the technique of cross consistency assessment (CCA) (Ritchey, 1998), the system however does allow for
reduction, not by reducing the number of variables involved, but by reducing the number of possible solutions through the elimination of the illogical solution combinations in a grid box. A detailed
introduction to morphological modeling is given in Ritchey (2002, 2006).
Morphology comes from the classical Greek, meaning shape or form. Morphological Analysis concerns the arrangement of objects and how they conform to create a whole or Gestalt. The objects in question can be a physical system (e.g. anatomy), a social system (e.g. an organisation) or a logical system (e.g. word forms or a system of ideas).
General morphology was developed by Fritz Zwicky, the Swiss astrophysicist based at the California Institute of Technology. Zwicky applied MA inter alia to astronomical studies and the development of
jet and rocket propulsion systems.
Illustration of the need for Morphological Analysis
Consider a complex real world problem, like those of marketing or making policies for a nation, where there are many governing factors, and most of them cannot be expressed as numerical time series
data, as one would like to have for building mathematical models. The conventional approach here would be to break the system down into parts, isolate the vital parts (dropping the trivial
components) for their contributions to the output and solve the simplified system for creating desired models or scenarios. The disadvantage of this approach is that real world scenarios do not
behave rationally and more often than not a simplified model will breakdown when the contribution of trivial components becomes significant. Also significantly, the behaviour of many components will
be governed by states of, and relations with other components, perhaps minor.
Morphological Analysis, on the other hand, does not drop any of the components of the system itself, but works backwards from the output towards the system internals. Again, the interactions and relations get to play their parts in MA and their effects are accounted for in the analysis.
Further reading
• Ritchey, T. (1998). General Morphological Analysis: A general method for non-quantified modeling
• Ritchey, T. (2006). "Problem Structuring using Computer-Aided Morphological Analysis". Journal of the Operational Research Society (JORS), Vol. 57, No. 7.
• Zwicky, F. (1969). Discovery, Invention, Research - Through the Morphological Approach. Toronto: The Macmillian Company.
• Zwicky, F. & Wilson A. (eds.) (1967). New Methods of Thought and Procedure: Contributions to the Symposium on Methodologies. Berlin: Springer. Reprint available at www.swemorph.com/ma.html
• Levin, Mark Sh. (1998). Combinatorial Engineering of Decomposable Systems, Dordrecht: Kluwer Academic Publishers.
• Levin, Mark Sh. (2006). Composite Systems Decisions. New York: Springer.
2006 - Workshop Abstracts
W1 – Increasing students’ success in college preparatory chemistry and in college general chemistry by remediation of requisite basic math skills
William Cary Kilner
University of New Hampshire
Much valuable research has been done to determine why students have trouble solving chemistry and physics problems, why they resort to algorithmic techniques without understanding the concepts
involved, and why they cannot transfer acquired problem-solving skills to novel situations.
Such research has shown that many students come to high school and college chemistry with serious gaps in their mathematical understanding. This jeopardizes all attempts to teach them problem
solving. A manifestation of this is the unorganized way in which many students set up and execute chemistry problems, despite repeated instruction in good practices. Sloppy work may also indicate
sloppy thinking and weak conceptual understanding. Even high-achieving math students can have serious missing pieces of mathematical understanding, pieces that we assume, without verification, they have previously acquired.
Knowledge and skill in abstract mathematics will not transfer to chemistry and physics problem solving unless students are assisted in learning explicit procedures that clearly connect to the science
involved. And consistently emphasizing these procedures can shore up a fragmented understanding of basic mathematics.
In my current work with college freshmen in general chemistry, and in my previous work in high school college-preparatory chemistry, I have identified 17 sequential mathematical skill areas that must
be addressed in order to provide this support for student learning in chemistry. I have also developed concomitant techniques for instruction, including context -rich practice problems and worksheets
that provide the necessary drill and reinforcement in the most efficient manner. Student feedback has validated the usefulness of this approach.
In this workshop I will summarize my list of “chem-math” skills, and provide examples of intervention, illustrated with student examples whenever possible. Upon showing each of my slides in turn, we
will collectively discuss the validity and effectiveness of my claims. Bring your own list of skills, concerns, and examples to see where they fit into my organizational scheme.
W2 – Exploring ways to visualize mathematics
David Meel
Bowling Green State University
One of the more difficult aspects of teaching mathematics is engaging students in problem solving and one of the most difficult elements of this endeavor is visualizing the mathematics. This
workshop will explore with participants a variety of potential ways to help students visualize mathematics and make them active participants in constructing their understanding. In particular,
participants will be exposed to various tools such as web applets, concept mapping, story telling, and other innovative ways to get students to experience mathematics in new and multisensory ways.
Participants will be expected to actively participate in various activities, some of which may necessitate the use of a graphing calculator.
W3 – Two eyes seeing and two eyes hearing
Ed Galindo
University of Idaho
This workshop will be aimed at sharing with folks different points of view on not only science and math instruction, but on life as well. An Elder once said “Teachers don’t teach subjects, they teach
people. To teach people, they must make an honest effort to get to know them, spend time with, care about them, and believe in them”. That is what my workshop will be about. Several team building
activities that work with students to get to know them.
W4 – Playing cards and thinking about race, class and culture in the classroom
Eric Hsu
San Francisco State University
We will play a silent card game together, which will lead us into a conversation about issues of power and communication. This will be followed by a brief reading of some research and another
conversation of the ways issues of race, class and culture enrich and sabotage the classroom learning experience. The intention is for strategies and questions to be shared in a safe workshop
W5 – Science fiction in the science classroom
Kelly McCullough
Are you interested in using science fiction as a tool in the classroom? Just interested in science fiction? Kelly’s an author who has written science fiction short stories specifically for
educational purposes as well as novels for Penguin/Putnam. Both reading and writing science fiction can help build student excitement about science and teach valuable lessons. In this workshop Kelly
will talk about his experiences working with an NSF-funded middle school science curriculum with a science fiction context and help you explore some of the ways you can use science fiction in your
own classrooms. Please bring a topic or unit that you think might be fun to explore with the tools of science fiction.
W6 – Symmetry and patterns in contemporary Native American art
Michelle Zandieh
Arizona State University
This hands-on workshop is intended for anyone who likes patterns. No formal knowledge of symmetry will be assumed. We will begin by considering the symmetries of simple shapes (e.g. letters) and use
this to lead into a discussion of the symmetries possible for a belt or rug border (strip patterns). Both types of symmetries will be used to characterize and classify pictures of Native American artistry of the southwest (e.g. Hopi pottery, Navajo rugs and Navajo sand paintings).
W7 – Inquiry-based, hands -on in-class Astronomy activities
Rebecca Lindell
Department of Physics and Office of Science and Mathematics Education
Southern Illinois University, Edwardsville
At Southern Illinois University Edwardsville, we have recently restructured our introductory astronomy course to include hands-on inquiry-based in-class group activities. These activities utilize a
learning cycle approach to cover specific astronomical concepts that traditionally resist conceptual change, such as phases of the moon and seasons, or that students have difficulty mastering, such
as Hubble’s law and the Hertzsprung-Russell diagram. Each group activity is designed to be completed during one 50-minute class period and utilize hands-on equipment whenever possible. In this
workshop, we will discuss the design and implementation of these group activities into our introductory astronomy course, as well as allow participants a chance to explore some of the activities.
W8 – Using the Conceptual Change Model (CCM) of learning in the science classroom: Implications for engendering robust nature of science (NOS) understandings
Scott Sowell
Cleveland State University
The nature of science (NOS) is, essentially, a set of underlying principles determining what makes science “science.” Instead of seeing science as merely a body of knowledge to be memorized (i.e., the
products), students should also acquire an understanding of how that knowledge was produced (i.e., the process). In order to make informed personal and societal judgments, one must understand how
science works and how those processes shape the nature of scientific knowledge. This workshop is designed to provide science teachers with instructional strategies, each influenced by the conceptual
change model of learning, that are specifically designed to improve students’ understandings of the nature of science (or more accurately, to change students’ prior alternative NOS concepts). In
addition to an overview of the conceptual change theory, we will be working with a variety of specific teaching tools (e.g., hand-held technology) as well as related pedagogies (e.g., journaling) which
promote students’ reflection on their changing ideas.
Target audience: pre-service and in-service middle/secondary science teachers
W9 – Teaching physics and mathematics using critical agency: A hands on workshop for teachers
Apriel K. Hodari
The CNA Corporation
Many students find learning mathematics and physics challenging. For students from groups whose participation in physics and mathematics courses is significantly lower than their representation in
the population – minorities, low-SES students, girls, and students with disabilities – these challenges are even greater. Critical agency is one way both to create learning opportunities for all
students (equity in education) and to empower students to improve the conditions in which they live (equity through education). In this workshop, we will provide an opportunity for the participants
to experience sample exercises involving critical physics agency and critical mathematics agency.
W10: Science in Native American communities
Eric Riggs
Purdue University
In this workshop we will discuss issues in Native American teaching and learning and how school curricula can be adapted to suit local needs and learning styles in indigenous communities.
W11: Experiencing math in a cultural context: From everyday activities to videotape analysis
Jerry Lipka
University of Alaska, Fairbanks
To provide participants with an overview of this complex project, the workshop will begin with videotape footage (different from the talk) showing Yup’ik elders creating patterns used in clothing.
The two-way process of the collaboration will be highlighted, as well as the interdisciplinary nature of this project. After the context of the project is established, participants will engage in a few activities. Debriefing the activities will include a few components: where/what is the math; how can it be extended; and bringing the pedagogy to light.
Additional classroom data will be shared with the participants as well as classroom videotape that highlights how and why we believe that MCC has improved students’ math performance.
W12: A constructive approach to teaching trigonometric functions
Keith Weber
Rutgers University
In a talk that I am giving at this conference, I argue that students need to understand trigonometric operations, such as sine and cosine, as functions that map angles to real numbers if they are to
fully understand trigonometry. However, most students only understand sine and cosine as ratios of lengths of right triangles. In this workshop, we will examine and complete a series of activities
that can be used to help students understand trigonometric operations as functions. These activities are hands-on constructive activities involving the construction of geometric figures and taking
measurements of figures that are constructed. For each activity, we will discuss what it is that children are expected to learn from completing it, how the activity will enable them to learn
effectively, and how the activity can be implemented in the classroom.
W13: Creating gender neutral problems
Laura McCullough
University of Wisconsin, Stout
Want to find out how to change your homework problems to be more gender-neutral? In this workshop we will discuss what makes a problem more inviting to everyone, I will show examples of traditional
textbook problems that have been edited to be more inviting, and workshop participants will make changes to their own problems with guidance from the workshop leader. If possible, participants should
bring some example problems to the workshop.
W14: A modified approach to lesson study for secondary science and math teachers
Nicole Gillespie
Knowles Science Teaching Foundation
The 1999 Trends in International Mathematics and Science Study (TIMSS) revealed striking differences between math and science education in Japan and in the United States. One of the key differences found
was that Japanese teachers routinely spend time working together to improve their teaching through a process of cooperatively planning, observing, analyzing, and refining actual classroom lessons.
This process, known as “lesson study,” is widely credited for the steady improvement of Japanese elementary mathematics and science instruction. Since the release of the 1999 TIMSS results, the
practice of lesson study has emerged in many schools in the U.S. Typically the practice of lesson study involves commitment at the school level in order to give teachers time away from teaching
requirements in order to plan, observe and revise the research lesson. Additionally, lesson study is most often practiced in elementary schools, where multiple teachers in the same school teach the
same content at the same grade level. Thus, secondary math and science teachers who want to practice lesson study face challenges at multiple levels. The Knowles Science Teaching Foundation Teaching
Fellows are secondary math and science teachers in schools across the U.S. who practice a modified lesson study process during the five-year tenure of their fellowship. Their process is adaptable to
small groups of teachers who want to begin a lesson study process on their own but don’t yet have the buy-in from their school, or may want to work with colleagues teaching the same subject in
different schools. This workshop will introduce participants to the practice of lesson study and what research shows are the effects on teacher practice and student learning. Participants will have
the opportunity to engage in a lesson study sequence, with lessons and classroom evidence from KSTF Teaching Fellows, and will leave with strategies and resources for beginning their own lesson
study process.
W15: Project Lead The Way: A solution to increasing student interest in math and science
Patrick Leaveck
Project Lead the Way
Project Lead The Way (PLTW) is The National Alliance for Pre-engineering Programs. As a non-profit organization, it helps public schools across the country implement a high school pre-engineering
program and a middle school technology program. This workshop will explain the PLTW program, and show why over 1800 schools in 46 states have joined. PLTW helps students learn about technology
careers by providing a project-based learning curriculum that integrates math, science, language arts, and technology standards. The PLTW program has invested over 8 million dollars in a fully
developed curriculum at both the high school and middle school level that is free to schools. A complete teacher and counselor training program is also provided to participating schools at a leading
national college of engineering/engineering technology. The PLTW program provides support to bring together community leaders from schools, colleges & universities, and industry, to help students
achieve higher academic success while, at the same time, addressing the nation’s need for a technology workforce. Find out why the National Academy of Sciences designated PLTW as a “World Class Curricula” and
said that PLTW fosters high quality teaching, standards, and assessments of student learning.
W16: That ain’t no way to treat a lady: Gender equity in the science and math classroom
Stephanie Blaisdell
A good teacher treats all their students the same, right? This workshop will challenge that belief. Students have all kinds of individual needs, and among them are gender-based differences that
affect the interaction and learning in a classroom. This workshop will provide illustrations of how gender affects your classroom, and how you can respond to gender-based needs.
W17: AER 101: A beginners’ guide to conducting astronomy education research
Rebecca Lindell
Southern Illinois University – Edwardsville
Astronomy Education Research (AER) is a science, and as a science it is bound by the same traditions and expectations as any other science. Assessing and modifying classroom instruction should be common practice; however, accurate and precise measurements that could be peer reviewed and replicated are rarely made. Instructional modifications, assessment creation, and curriculum development are therefore not science unless they follow a rigorous scientific process, like AER. Participants in this workshop will receive a quick overview of AER standards and practices and then explore how to design and
execute meaningful AER, if only for their own classrooms. | {"url":"http://umaine.edu/center/conferences-workshops/national-conferences/2006-2/workshop-abstracts/","timestamp":"2014-04-21T07:38:29Z","content_type":null,"content_length":"32727","record_id":"<urn:uuid:4cb6bef4-ed5a-4b63-a960-d67692ac623f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advogato: Blog for vicious
We have to kill you to save you
This is funny in a very dark sort of way. Let’s do some numbers: Suppose that after some time about 100 people a year die from extra radiation exposure due to all these scanners. That’s a fairly low
number, it would be impossible to
even notice this number from the cancer statistics. Given the millions of people that get scanned every year and that spend sufficient time near those scanners, that’s a tiny percentage. Now let’s
see, let’s suppose we take the last 50 years of plane travel before 2001 and we notice that the number of people killed in terrorist attacks, including 2001, is a little more than 3000. That’s about 60 a
year. Let’s suppose that no terrorist attack ever happens again (yeah right). Still we’d be killing almost twice as many by our security measures.
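Spelling the arithmetic out (these are the post’s hypothetical numbers, not real statistics):

```python
scanner_deaths_per_year = 100         # the supposed extra cancer deaths per year
attack_deaths_over_50_years = 3000    # killed in attacks on planes, incl. 2001
attack_deaths_per_year = attack_deaths_over_50_years / 50.0

print(attack_deaths_per_year)         # 60.0 a year
print(scanner_deaths_per_year / attack_deaths_per_year)  # ~1.7: almost twice
```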
I wish my granddad were alive, this is what he did (nuclear hygiene). It would be nice to ask if 100 is a reasonable estimate. | {"url":"http://www.advogato.org/person/vicious/diary/302.html","timestamp":"2014-04-18T08:41:22Z","content_type":null,"content_length":"6131","record_id":"<urn:uuid:1c2ea835-b8ea-4d28-beed-bccb9b2cb326>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Binary-Search-Based Merge to Add the Sort Margin
3.3. Binary-Search-Based Merge to Add the Sort Margin
• Instead of qsorting the sort margin in, keep it sorted and then use a binary-search based merge to merge it with the main array.
• The reason I used a binary-search-based merge, instead of a linear merge, is that I reasoned the main array would become much larger than the margin (which has a constant size), and so a linear merge would require many comparisons (see the sketch after this list).
• O(log(n)) lookup, O(n) insertion, and O(n^2) accumulative complexity.
• Noticeably faster than qsorting.
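A minimal Python sketch of the idea (the solver itself is written in C; this is illustrative only, and `bisect.insort` performs exactly the binary-search lookup plus shifting insert described above):

```python
import bisect

def merge_margin(main, margin):
    """Merge a small, sorted margin into a large, sorted main array.

    Each margin item is located in main by binary search (O(log n)
    comparisons) and then inserted, shifting elements in O(n).
    A linear merge would instead compare against most of main.
    """
    for item in margin:
        bisect.insort_right(main, item)  # binary search + insert
    return main

states = [10, 20, 30, 40, 50]   # the sorted main array
margin = [15, 35]               # the kept-sorted sort margin
print(merge_margin(states, margin))  # [10, 15, 20, 30, 35, 40, 50]
```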
Written by Shlomi Fish | {"url":"http://shlomifish.org/lecture/Freecell-Solver/slides/states_collection/b_search_merge.html","timestamp":"2014-04-20T13:19:30Z","content_type":null,"content_length":"3005","record_id":"<urn:uuid:66e1ef78-e33b-4f49-981c-6594278753d7>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Hollywood Algebra 1 Tutor
Find a North Hollywood Algebra 1 Tutor
...I have come to understand myself that every child learns slightly differently, and as a tutor it is my job to play to their strengths and transform their weaknesses. Additionally, I help
students develop better study skills so that they will continue to succeed beyond our sessions. I have worke...
11 Subjects: including algebra 1, chemistry, biology, algebra 2
...Three of my SAT Math II students were boarding school students in Russia; I tutored them online via Skype-esque video conferencing, advanced document mark-up & screen-sharing, and other
technological tools. Each of these girls saw improvements in their SAT Math II scores after two weeks of rigor...
60 Subjects: including algebra 1, reading, Spanish, chemistry
...I am an ABA therapist who works with children with Aspergers and their families. I am a professional Applied Behavior Analysis (ABA) Therapist for children and young adults with autism spectrum
disorders. I have worked with clients in the home and school settings.
20 Subjects: including algebra 1, English, writing, geometry
...I am calm, friendly, easy to work with and have been tutoring for many years. Although I have tutored high school and college science subjects such as Biology, Physics, Chemistry, Microbiology,
Physiology and Neuroscience, I specialize in tutoring math. I scored 2280 on the SAT Reasoning Test (with a perfect 800 on the math section) and did very well on the GRE quantitative section as
13 Subjects: including algebra 1, chemistry, physics, geometry
I have always been asked for help in many subjects, from high school until graduating college as a Mechanical Engineer. Friends and family always asked for my help and I have had the patience to
show them step by step until they could do it on their own, which will be my goal with any student needing help. As a Mechanical Engineer I specialize in math and science at almost any level.
9 Subjects: including algebra 1, calculus, differential equations, physical science | {"url":"http://www.purplemath.com/North_Hollywood_algebra_1_tutors.php","timestamp":"2014-04-18T14:15:00Z","content_type":null,"content_length":"24547","record_id":"<urn:uuid:57fa3d6e-f608-4b87-ac82-cacb18fbf959>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
[bind10-dev] rbtree handle non-terminal design
feng hanfeng at cnnic.cn
Sat Jan 1 15:27:01 UTC 2011
For non-terminal handling, I have done some experiments; the following is the general idea.
The key point for this task is to find the next node of a given node, and for that we need the node path (all of its ancestor nodes).
During the find process we can build the node path (typedef stack<const RBNode<T>*> NodeChain). Given the node path, the steps to find the next node of a node are as follows.
(Here "tree" means a one-level tree; the whole rbtree consists of a bunch of trees connected by down pointers.)
1. If the node has a down tree, return the smallest (leftmost) node in the down tree.
2. Otherwise, find the non-null successor of the node within the tree the node belongs to.
3. Otherwise, find the non-null successor of the top node in the node chain; if there is no non-null successor, keep popping the node chain until it is empty.
4. Otherwise, return the null node.
The successor node of a node within one tree is found like this (see the sketch below):
1. If the node has a right child, the successor is the leftmost node of the right subtree.
2. Otherwise, find the first left branch on the path to the root. If found, the parent of the branch is the successor.
3. Otherwise, return the null node.
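A rough Python sketch of the above (the real code is C++; names and details here are only illustrative):

```python
def leftmost(node):
    while node.left is not None:
        node = node.left
    return node

def in_tree_successor(node):
    # Successor within a single one-level tree.
    if node.right is not None:
        return leftmost(node.right)
    # First left branch on the path to the root; its parent is the successor.
    while node.parent is not None and node is node.parent.right:
        node = node.parent
    return node.parent  # None if there is no successor in this tree

def next_node(node, node_chain):
    # node_chain: stack of ancestor nodes collected during find().
    if node.down is not None:                # 1. descend into the down tree
        node_chain.append(node)
        return leftmost(node.down)
    successor = in_tree_successor(node)      # 2. successor in the same tree
    while successor is None and node_chain:  # 3. pop the node chain
        successor = in_tree_successor(node_chain.pop())
    return successor                         # 4. None means the null node
```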
More information about the bind10-dev mailing list | {"url":"https://lists.isc.org/pipermail/bind10-dev/2011-January/001798.html","timestamp":"2014-04-20T01:00:44Z","content_type":null,"content_length":"3794","record_id":"<urn:uuid:b8b942ff-6cf7-4545-acd3-dd41fc3beef3>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parameter estimation of ODE’s via nonparametric estimators
Nicolas J-B. Brunel
Electronic Journal of Statistics, Volume 2, 2008.
Ordinary differential equations (ODEs) are widespread models in physics, chemistry and biology. In particular, this mathematical formalism is used for describing the evolution of complex systems and it might consist of high-dimensional sets of coupled nonlinear differential equations. In this setting, we propose a general method for estimating the parameters indexing ODEs from time series. Our method is able to alleviate the computational difficulties encountered by the classical parametric methods. These difficulties are due to the implicit definition of the model. We propose the use of a nonparametric estimator of regression functions as a first step in the construction of an M-estimator, and we show the consistency of the derived estimator under general conditions. In the case of spline estimators, we prove asymptotic normality, and that the rate of convergence is the usual root-n rate for parametric estimators. Some perspectives of refinements of this new family of parametric estimators are given.
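For illustration only (not the authors’ code), here is a minimal Python sketch of the two-step idea for a hypothetical scalar ODE x' = theta*x*(1-x): smooth the data nonparametrically with a spline, then choose theta by an M-estimation criterion matching the spline’s derivative to the ODE’s right-hand side:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

def two_step_estimate(t, y, theta0=1.0):
    # Step 1: nonparametric (spline) estimate of the trajectory.
    spline = UnivariateSpline(t, y, k=4, s=len(t))
    xhat = spline(t)
    dxhat = spline.derivative()(t)

    # Step 2: M-estimation; fit theta so the ODE right-hand side
    # matches the derivative of the nonparametric fit.
    def criterion(theta):
        return np.sum((dxhat - theta[0] * xhat * (1.0 - xhat)) ** 2)

    return minimize(criterion, x0=[theta0]).x[0]
```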
PDF - Requires Adobe Acrobat Reader or other PDF viewer. | {"url":"http://eprints.pascal-network.org/archive/00005221/","timestamp":"2014-04-17T13:07:25Z","content_type":null,"content_length":"8125","record_id":"<urn:uuid:93fce52b-e159-416e-9073-91039db31a92>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Concordance Genus
The Concordance Genus
For a knot K, the concordance genus is the least genus among all knots concordant to K. Casson gave the first example of a knot of concordance genus greater than the 4-ball genus, by demonstrating that the knot 6_2 has 4-ball genus 1, since it has unknotting number 1, but cannot be concordant to a knot of genus 1, for the following reason. Its Alexander polynomial is irreducible, of degree 4. If it were concordant to a knot of genus 1, its polynomial, times a polynomial of degree at most 2, would factor as f(t)*f(t^-1), clearly an impossibility.
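For reference, the argument rests on the Fox–Milnor condition (standard, though not stated explicitly here): if knots K_1 and K_2 are concordant, then, up to units,

\Delta_{K_1}(t)\,\Delta_{K_2}(t) \doteq f(t)\,f(t^{-1})

for some integer polynomial f. Since deg f would be at most 3 here, the irreducible degree-4 polynomial of 6_2 could divide neither factor.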
For other results concerning the concordance genus, see the references below. Many of the initial results for 11-crossing knots were found by John McAtee. These were checked and expanded on by Kate Kearney in the reference below.
For knots that are concordant to lower genus knots, the simplest such knot is listed in a document linked to the number in the table.
Clearly the concordance genus depends on the category, smooth or topological (locally flat). The first known examples of this occur for knots that are topologically slice, and thus have topological concordance genus 0, but are not smoothly slice, so their smooth concordance genus is positive. As of yet this distinction is extremely rare, and we concentrate on the smooth case. (The only known distinction on the chart occurs with 11n34, which is topologically slice, but we do not know its smooth concordance genus.)
K. Kearney, The Concordance Genus of 11--Crossing Knots, Arxiv preprint.
C. Livingston, The concordance genus of knots, Algebr. Geom. Topol. 4 (2004) 1-22. Arxiv:math.GT/0107141
Further information on particular knots. | {"url":"http://www.indiana.edu/~knotinfo/descriptions/topological_concordance_genus.html","timestamp":"2014-04-19T09:42:27Z","content_type":null,"content_length":"2452","record_id":"<urn:uuid:0931d482-6de0-49ce-bcad-c80afeb909ba>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Selected Publications
• X. Zhu, D. W. Pack, and R. D. Braatz. Modelling intravascular delivery from drug-eluting stents with biodurable coating: Investigation of anisotropic vascular drug diffusivity and arterial drug
distribution. Computer Methods in Biomechanics and Biomedical Engineering, 17(3):187-198, 2014.
• J. Min, R. D. Braatz, and P. T. Hammond. Tunable staged release of therapeutics from layer-by-layer coatings with clay barrier interlayer. Biomaterials, 35(8), 2507-2517, 2014.
• B. Suthar, V. Ramadesigan, S. De, R. D. Braatz, and V. R. Subramanian. Optimal charging profiles for mechanically constrained lithium-ion batteries. Physical Chemistry Chemical
Physics,16:277-287, 2014.
• M. Kishida, A. N. Ford Versypt, D. W. Pack, and R. D. Braatz. Optimal control of one-dimensional cellular uptake in tissue engineering. Optimal Control Applications and Methods, 34 (6):680-695,
• S. Mascia, P. L. Heider, H. Zhang, R. Lakerveld, B. Benyahia, P. I. Barton, R. D. Braatz, C. L. Cooney, J. M. B. Evans, T. F. Jamison, K. F. Jensen, A. S. Myerson, and B. L. Trout. End-to-end
continuous manufacturing of pharmaceuticals: Integrated synthesis, purification, and final dosage formation. Angewandte Chemie, 52(47):12359-12363, 2013. Hot Paper. Research Highlight in Nature,
502:274, 2013 (doi:10.1038/502274d).
• K.-K. K. Kim, D. E. Shen, Z. K. Nagy, and R. D. Braatz. Wiener's polynomial chaos for the analysis and control of nonlinear dynamical systems with probabilistic uncertainties. IEEE Control
Systems, 33(5):58-67, 2013.
• R. Lakerveld, B. Benyahia, R. D. Braatz, and P. I. Barton. Model-based design of a plant-wide control strategy for a continuous pharmaceutical plant, AIChE Journal, 59:3671-3685, 2013.
• E. P. Chang, R. D. Braatz, and A. T. Hatton. Pervaporation of emulsion droplets for the templated assembly of spherical particles: A population balance model. AIChE Journal, 59:3975-3985, 2013.
• K.-K. K. Kim and R. D. Braatz. Generalized polynomial chaos expansion approaches to approximate stochastic model predictive control. International Journal of Control, 86, 1324-1337, 2013.
• L. Zhou, M. Su, B. Benyahia, A. Singh, P. I. Barton, B. L. Trout, A. S. Myerson and R. D. Braatz. Mathematical modeling and design of layer crystallization in a concentric annulus with and
without recirculation. AIChE Journal, 59:1308-1321, 2013.
• Z. W. Ulissi, M. S. Strano and R. D. Braatz. Control of nano and microchemical systems. Computers and Chemical Engineering, 51:149-156, 2013.
• A. N. Ford Versypt, D. W. Pack and R. D. Braatz. Mathematical modeling of drug delivery from autocatalytically degradable PLGA microspheres — A review. Journal of Controlled Release, 165:29-37,
• K. K. K. Kim, E. Rios-Patron and R. D. Braatz. Robust nonlinear internal model control of stable Wiener systems. Journal of Process Control, 22:1468-1477, 2012.
• K. Chen, L. M. Goh, G. W. He, V. Bhamidi, P. J. A. Kenis, C. F. Zukoski, and R. D. Braatz. Identification of nucleation rates in droplet-based microfluidic systems. Chem. Eng. Sci., 77:235-241, 2012.
• L. Goh, M. Kishida and R. D. Braatz. On the analysis of robust stability of metabolic pathways., IEEE Control Systems Magazine, 92-94, Aug. 2012.
• Z. K. Nagy and R. D. Braatz. Advances and new directions in crystallization control. Annu. Rev. Chem. Biomol. Eng., 3:55-75, 2012.
• V. Ramadesigan, P. W. C. Northrop, S. De, S. Santhanagopalan, R. D. Braatz, V. R. Subramanian. Modeling and simulation of lithium-ion batteries from a systems engineering perspective. J.
Electrochem. Soc., 159:R31-R45, 2012.
• M. Jiang, M. Fujiwara, M. H. Wong, Z. Zhu, J. Zhang, L. Zhou, K. Wang, A. N. Ford, T. Si, L. M. Hasenberg, and R. D. Braatz. Towards achieving a flattop crystal size distribution by continuous
seeding and controlled growth. Chem. Eng. Sci., 77:2-9, 2012.
• A. A. Boghossian, J. Zhang, F. T. L. Floch-Yin, Z. W. Ulissi, P. Bojo, J. Han, J. Kim, J. R. Arkalgud, N. F. Reuel, R. D. Braatz, and M. S. Strano. The chemical dynamics of nanosensors capable of
single-molecule detection. The Journal of Chemical Physics, 135:8, 2011.
• R. N. Methekar, K. Chen, R. D. Braatz, and V. R. Subramanian. Kinetic Monte Carlo simulation of surface heterogeneity in graphite electrodes for lithium-ion batteries: Passive layer formation. J.
Electrochem. Soc., 158:A363-A370, 2011.
• J. C. Pirkle, Jr. and R. D. Braatz. Instabilities and multiplicities in non-isothermal blown film extrusion including the effects of crystallization. J. of Process Control, 21:405-414, 2011.
• V. Ramadesigan, K. Chen, N. A. Burns, V. Boovaragavan, R. D. Braatz and V. R. Subramanian. Parameter estimation and capacity fade analysis of lithium-ion batteries using reformulated models. J.
of The Electrochemical Society, 158:A1048-A1054, 2011.
• Z. W. Ulissi, J. Q. Zhang, A. A. Boghossian, N. F. Reuel, S. F. E. Shimizu, R. D. Braatz, and M. S. Strano. Applicability of birth-death Markov modeling for single-molecule counting using
single-walled carbon nanotube fluorescent sensor arrays. J. of Physical Chemistry Letters, 2:1690-1694, 2011.
• N. C. S. Kee, P. D. Arendt, L. M. Goh, R. B. H. Tan, and R. D. Braatz. Nucleation and growth kinetics estimation for L-phenylalanine hydrate and anhydrate crystallization. CrystEngComm,
13:1197-1209, 2011.
• X. Y. Woo, R. B. H. Tan, and R. D. Braatz. Precise tailoring of the crystal size distribution by controlled growth and continuous seeding from impinging jet crystallizers. CrystEngComm,
13:2006-2014, 2011.
• N. C. S. Kee, R. B. H. Tan, and R. D. Braatz. Semiautomated identification of the phase diagram for enantiotropic crystallizations using ATR-FTIR spectroscopy and laser backscattering. Ind. Eng.
Chem. Res., 50:1488-1495, 2011.
• M. W. Hermanto, R. D. Braatz, and M.-S. Chiu. Integrated batch-to-batch and nonlinear model predictive control for polymorphic transformation in pharmaceutical crystallization. AIChE J.,
57:1008-1019, 2011.
• V. Ramadesigan, R. N. Methekar, V. R. Subramanian, F. Latinwo, and R. D. Braatz. Optimal porosity distribution for minimized Ohmic drop across a porous electrode. J. Electrochem. Soc.,
157:A1328-A1334, 2010.
• K. Chen, N. Nair, M. S. Strano, and R. D. Braatz. Identification for chirality-dependent adsorption kinetics in single-walled carbon nanotube reaction networks. J. of Computational & Theoretical
Nanoscience, 7:2581-2585, 2010.
• M. Kishida and R. D. Braatz. Worst-case analysis of distributed parameter systems with application to the 2D reaction-diffusion equation. Special Issue on Optimal Process Control, Optimal Control
Applications & Methods, 31:433-449, 2010.
• J. C. Pirkle, Jr., M. Fujiwara, and R. D. Braatz. A maximum-likelihood parameter estimation for the thin-shell quasi-Newtonian model for a laboratory blown film extruder. Ind. Eng. Chem. Res.,
47:8007-8015, 2010.
• L. M. Goh, K. J. Chen, V. Bhamidi, G. He, N. C. S. Kee, P. J. A. Kenis, C. F. Zukoski, and R. D. Braatz. A stochastic model for nucleation kinetics determination in droplet-based microfluidic
systems. Crystal Growth & Design, 10:2515-2521, 2010.
• V. R. Subramanian and R. D. Braatz. Current needs in electrochemical engineering education. Electrochemical Society Interface, 19 (1):37-38, 2010 (invited).
• J. C. Pirkle, Jr. and R. D. Braatz. A thin-shell two-phase microstructural model for blown film extrusion. J. of Rheology, 54:471-505, 2010.
• C. T. M. Kwok, R. D. Braatz, S. Paul, W. Lerch, and E. G. Seebauer. An improved model for boron diffusion and activation in silicon. AIChE J., 56:515-521, 2010.
• K. Chen, R. Vaidyanathan, E. G. Seebauer, and R. D. Braatz. General expression for effective diffusivity of foreign atoms migrating via a fast intermediate, J. Appl. Phys., 107:026101, 2010.
• T. Jin, Y. Ito, X. Luan, S. Dangaria, C. Walker, M. Allen, A. Kulkarni, C. Gibson, R. Braatz, X. Liao, and T. Diekwisch. Elongated polyproline motifs facilitate enamel evolution through matrix
subunit compaction. PLoS Biology, 7(12):e1000262, 2009.
• M. W. Hermanto, M.-S. Chiu, and R. D. Braatz. Nonlinear model predictive control for the polymorphic crystallization of L-glutamic acid crystals. AIChE J., 55:2631-2645, 2009.
• N. C. S. Kee, P. D. Arendt, R. B. H. Tan, and R. D. Braatz. Selective crystallization of the metastable anhydrate form in the enantiotropic pseudo-dimorph system of L-phenylalanine using
concentration feedback control. Crystal Growth & Design, 9:3052-3061, 2009.
• N. C. S. Kee, R. B. H. Tan, and R. D. Braatz. Selective crystallization of the metastable alpha-form of L-glutamic acid using concentration feedback control. Crystal Growth & Design, 9:3044-3051,
• C. T. M. Kwok, R. D. Braatz, S. Paul, W. Lerch, and E. G. Seebauer. Mechanistic benefits of millisecond annealing for diffusion and activation of boron in silicon. J. Appl. Phys., 105: art. no.
063514, 2009.
• M. W. Hermanto, R. D. Braatz, and M.-S. Chiu. High-order simulation of polymorphic crystallization using weighted essentially non- oscillatory methods. AIChE J., 55:122-131, 2009.
• X. Y. Woo, Z. K. Nagy, R. B. H. Tan, and R. D. Braatz. Adaptive concentration control of cooling and antisolvent crystallization with laser backscattering measurement. Crystal Growth & Design,
9:182-191, 2009.
• X. Y. Woo, R. B. H. Tan, and R. D. Braatz. Modeling and computational fluid dynamics-population balance equation-micromixing simulation of impinging jet crystallizers. Crystal Growth & Design,
9:156-164, 2009.
• N. C. S. Kee, X. Y. Woo, L. M. Goh, E. Rusli, G. He, V. Bhamidi, R. B. H. Tan, P. J. A. Kenis, C. F. Zukoski, and R. D. Braatz. Design of crystallization processes from laboratory research and
development to the manufacturing scale: Parts I and II. American Pharmaceutical Review, 11(6):110-115 and 11(7):66-74, 2008 (invited).
• M. W. Hermanto, N. C. Kee, R. B. H. Tan, M.-S. Chiu, and R. D. Braatz. Robust Bayesian estimation of the kinetics of the polymorphic crystallization of L-glutamic acid crystals. AIChE J.,
54:3248-3259, 2008.
• Z. K. Nagy, M. Fujiwara, and R. D. Braatz, Modelling and control of combined cooling and antisolvent crystallization processes. J. of Process Control, 18:856-864, 2008 (invited).
• R. Gunawan, I. Fusman, and R. D. Braatz. Parallel high-resolution finite volume simulation of particulate processes. AIChE J., 54:1449-1458, 2008.
Z. Zheng, R. Stephens, R. D. Braatz, R. C. Alkire, and L. R. Petzold. A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition. J. Comput. Phys., 227:5184-5199, 2008.
• Z. K. Nagy, M. Fujiwara, X. Y. Woo, and R. D. Braatz. Determination of the kinetic parameters for the crystallization of paracetamol from water using metastable zone width experiments. Ind. Eng.
Chem. Res., 47:1245-1252, 2008.
• Y. Qin, X. Li, F. Xue, P. Vereecken, P. Andricacos, H. Deligianni, R. D. Braatz, and R. C. Alkire. The effect of additives on shape evolution during copper electrodeposition. Part III. Trench
infill for on-chip interconnects. J. Electrochem. Soc., 155:D223-233, 2008.
• C. T. M. Kwok, K. Dev, E. G. Seebauer, and R. D. Braatz. Maximum a posteriori estimation of activation energies that control silicon self-diffusion. Automatica, 44:2241-2247, 2008.
• Z. K. Nagy, M. Fujiwara, J. W. Chew, and R. D. Braatz. Comparative performance of concentration and temperature controlled batch crystallizations. J. of Process Control, 18:399-407, 2008
• N. Nair, W.-J. Kim, R. D. Braatz, and M. S. Strano. Dynamics of surfactant-suspended single walled carbon nanotubes in a centrifugal field. Langmuir, 24:1790-1795, 2008.
• T. O. Drews, R. D. Braatz, and R. C. Alkire. Monte Carlo simulation of kinetically-limited electrodeposition on a surface with metal seed clusters. Special Issue in honor of Dieter Kolb, Z. Phys.
Chem., 221:1287-1305, 2007.
• M. W. Hermanto, X. Y. Woo, R. D. Braatz, and M.-S. Chiu. Robust optimal control of polymorphic transformation in batch crystallization. AIChE J., 53:2643-2650, 2007.
• E. Rusli, F. Xue, T. O. Drews, P. Vereecken, P. Andracacos, H. Deligianni, R. D. Braatz, and R. C. Alkire. Effect of additives on shape evolution during electrodeposition. Part II: Parameter
estimation from roughness evolution experiments. J. Electrochem. Soc., 154:D584- D597, 2007.
• Z. K. Nagy and R. D. Braatz. Distributional uncertainty analysis using power series and polynomial chaos expansions. Special Issue on Advanced Control of Chemical Processes, J. of Process
Control, 17:229-240, 2007.
• X. Li, T. O. Drews, E. Rusli, F. Xue, Y. He, R. D. Braatz, and R. C. Alkire. The effect of additives on shape evolution during electrodeposition. Part I: Multiscale simulation with dynamically
coupled kinetic Monte Carlo and moving-boundary finite-volume codes. J. Electrochem. Soc., 154:D230-240, 2007. Correction, J. Electrochem. Soc., 154:S15, 2007.
• J. G. VanAntwerp, A. P. Featherstone, B. A. Ogunnaike, and R. D. Braatz. Cross-directional control of sheet and film processes. Automatica, 43:191-211, 2007.
• X. Zhang, M. Yu, C. T. M. Kwok, R. Vaidyanathan, R. D. Braatz, and E. G. Seebauer. Precursor mechanism for interaction of bulk interstitial atoms with Si(100). Phys. Rev. B, 74:235301, 2006.
• N. Nair, M. L. Usrey, W.-J. Kim, R. D. Braatz, and M. S. Strano. Deconvolution of the photo-absorption spectrum of single-walled carbon nanotubes with (n,m) resolution. Analytical Chemistry,
78:7689-7696, 2006.
• E. G. Seebauer, K. Dev, M. Y. L. Jung, R. Vaidyanathan, C. T. M. Kwok, J. W. Ager, E. E. Haller, and R. D. Braatz. Control of defect concentrations within a semiconductor through adsorption.
Physical Review Letters. 97:055503, 2006.
• R. D. Braatz, R. C. Alkire, E. G. Seebauer, T. O. Drews, E. Rusli, M. Karulkar, F. Xue, Y. Qin, M. Y. L. Jung, and R. Gunawan. A multiscale systems approach to microelectronic processes. Special
Issue on Chemical Process Control, Comp. & Chem. Eng., 30:1643- 1656, 2006 (invited).
• X. Y. Woo, R. B. H. Tan, P. S. Chow, and R. D. Braatz. Simulation of mixing effects in antisolvent crystallization using a coupled CFD-PDF-PBE approach. Crystal Growth & Design, 6:1291-1303, 2006.
• T. O. Drews, A. Radisic, J. Erlebacher, R. D. Braatz, P. C. Searson, and R. C. Alkire. Stochastic simulation of the early stages of kinetically limited electrodeposition. J. Electrochem. Soc.,
153:C434-C441, 2006.
• G. X. Zhou, M. Fujiwara, X. Y. Woo, E. Rusli, H.-H. Tung, C. Starbuck, O. Davidson, Z. Ge, and R. D. Braatz. Direct design of pharmaceutical antisolvent crystallization through concentration
control. Crystal Growth & Design, 6:892-898, 2006.
• E. Rusli, T. O. Drews, D. L. Ma, R. C. Alkire, and R. D. Braatz. Robust nonlinear feedforward-feedback control of a coupled kinetic Monte Carlo-finite difference simulation. J. of Process
Control, 16:409-417, 2006.
• R. D. Braatz, R. C. Alkire, E. G. Seebauer, E. Rusli, R. Gunawan, T. O. Drews, and Y. He. Perspectives on the design and control of multiscale systems. DYCOPS Special Issue, J. of Process
Control, 16:193-204, 2006 (invited).
• R. Vaidyanathan, M. Y. L. Jung, R. D. Braatz, and E. G. Seebauer. Measurement of defect-mediated diffusion: The case of silicon self-diffusion. AIChE J., 52:366-370, 2006.
• E. J. Hukkanen, J. A. Wieland, D. E. Leckband, A. Gewirth, and R. D. Braatz. Multiple-bond kinetics from single-molecule pulling experiments: Evidence of multiple NCAM bonds. Biophysical J.,
89:3434-3445, 2005.
• C. T. M. Kwok, K. Dev, R. D. Braatz, and E. G. Seebauer. A method for quantifying annihilation rates of bulk point defects at surfaces. J. Appl. Phys., 98:013524, 2005.
• M. Fujiwara, Z. K. Nagy, J. W. Chew, and R. D. Braatz. First-principles and direct design approaches for the control of pharmaceutical crystallization. J. of Process Control, 15:493-504, 2005
• T. O. Drews, S. Krishnan, J. Alameda, D. Gannon, R. D. Braatz, and R. C. Alkire. Multi-scale simulations of copper electrodeposition onto a resistive substrate. IBM J. Res. & Dev., 49:49-63, 2005.
• M. Y. L. Jung, C. T. M. Kwok, R. D. Braatz, and E. G. Seebauer. Interstitial charge states in boron-implanted silicon. J. Appl. Phys., 97:063520, 2005.
• H. An, J. W. Eheart, and R. D. Braatz. Stability-oriented programs for regulating water withdrawals in riparian regions. Water Resources Research, 40:W12301, 2004.
• R. D. Braatz, R. C. Alkire, T. O. Drews, and E. Rusli. Multiscale systems engineering with applications to chemical reaction processes. ISCRE Special Issue, Chem. Eng. Sci., 59:5623-5628, 2004.
• E. Rusli, T. O. Drews, and R. D. Braatz. Systems analysis and design of dynamically coupled multiscale reactor simulation codes. ISCRE Special Issue, Chem. Eng. Sci., 59:5607-5613, 2004.
• M. Y. L. Jung, R. Gunawan, R. D. Braatz, and E. G. Seebauer. Pair diffusion and kick-out: Contributions to diffusion of boron in silicon. AIChE J., 50:3248-3256, 2004.
• R. Gunawan, I. Fusman, and R. D. Braatz. High resolution algorithms for multidimensional population balance equations. AIChE J., 50:2738-2749, 2004.
• T. Togkalidou, H.-H. Tung, Y. Sun, A. Andrews, and R. D. Braatz. Parameter estimation and optimization of a loosely-bound aggregating pharmaceutical crystallization using in-situ infrared and
laser backscattering measurements. Ind. Eng. Chem. Res., 43:6168- 6181, 2004.
• T. O. Drews, R. D. Braatz, and R. C. Alkire. Coarse-grained kinetic Monte Carlo simulation of copper electrodeposition with additives. Int. J. Multiscale Computational Engineering, 2:313-327, 2004.
• J. C. Pirkle, Jr. and R. D. Braatz. Comparison of the dynamic thin shell and quasi-cylindrical models for blown film extrusion. Polymer Engineering & Science, 44:1267-1276, 2004.
• R. C. Alkire and R. D. Braatz. Electrochemical engineering in an age of discovery and innovation. AIChE J., 50:2000-2007, 2004 (invited, cover article).
• T. O. Drews, E. G. Webb, D. L. Ma, J. Alameda, R. D. Braatz, and R. C. Alkire. Coupled mesoscale-continuum simulations of copper electrodeposition in a trench. AIChE J., 50:226-240, 2004.
• M. Y. L. Jung, R. Gunawan, R. D. Braatz, and E. G. Seebauer. Effect of near-surface band bending on dopant profiles in ion-implanted silicon. J. Appl. Phys., 95:1134-1139, 2004.
• M. Hovd and R. D. Braatz. Handling state and output constraints in MPC using time-dependent weights. Modeling, Identification, and Control, 25:67-84, 2004.
• R. Gunawan, M. Y. L. Jung, E. G. Seebauer, and R. D. Braatz. Optimal control of transient enhanced diffusion in a semiconductor process. J. of Process Control, 14:423-430, 2004.
• Z. K. Nagy and R. D. Braatz. Open-loop and closed-loop robust optimal control of batch processes using distributional and worst-case analysis. J. of Process Control, 14:411-422, 2004.
• M. Y. L. Jung, R. Gunawan, R. D. Braatz, and E. G. Seebauer. A simplified picture for transient enhanced diffusion of boron in silicon. J. Electrochem. Soc., 151:G1-G7, 2004.
• M. Kamrunnahar, R. D. Braatz, and R. C. Alkire. Parameter sensitivity analysis of pit initiation at single sulfide inclusions in stainless steel. J. Electrochem. Soc., 151:B90-B97, 2004.
• K. Dev, M. Y. L. Jung, R. Gunawan, R. D. Braatz, and E. G. Seebauer. Mechanism for coupling between properties of interfaces and bulk semiconductors. Phys. Rev. B, 68:195311, 2003.
• T. J. McAvoy and R. D. Braatz. Controllability limitations for processes with large singular values. Ind. Eng. Chem. Res., 42:6155-6165, 2003.
• E. J. Hukkanen and R. D. Braatz. Measurement of particle size distribution in suspension polymerization using in situ laser backscattering. Sensors and Actuators B, 96:451-459, 2003.
• M. Y. L. Jung, R. Gunawan, R. D. Braatz, and E. G. Seebauer. Ramp-rate effects in transient enhanced diffusion and dopant activation. J. Electrochem. Soc., 150:G838-G842, 2003.
• R. Gunawan, M. Y. L. Jung, R. D. Braatz, and E. G. Seebauer. Parameter sensitivity analysis of boron activation and transient enhanced diffusion in silicon. J. Electrochem. Soc., 150:G758-G765, 2003.
• T. O. Drews, R. D. Braatz, and R. C. Alkire. Parameter sensitivity analysis of Monte Carlo simulations of copper electrodeposition with multiple additives. J. Electrochem. Soc., 150:C807-C812, 2003.
• Z. K. Nagy and R. D. Braatz. Worst-case and distributional robustness analysis of finite-time control trajectories for nonlinear distributed parameter systems. IEEE Trans. on Control Syst. Tech.,
11:494-504, 2003.
• R. Gunawan, M. Y. L. Jung, E. G. Seebauer, and R. D. Braatz. Maximum a posteriori estimation of transient enhanced diffusion energetics. AIChE J., 49:2114-2123, 2003.
• Z. K. Nagy and R. D. Braatz. Robust nonlinear model predictive control of batch processes. AIChE J., 49:1776-1786, 2003.
• D. L. Ma and R. D. Braatz. Robust identification and control of batch processes. Special Issue on 2nd Pan American Workshop in Process Systems Engineering, Comp. & Chem. Eng., 27:1175-1184, 2003
• M. Hovd, D. L. Ma, and R. D. Braatz. On the computation of disturbance rejection measures. Ind. Eng. Chem. Res., 42:2183-2188, 2003.
• J. C. Pirkle, Jr. and R. D. Braatz. Dynamic modeling of blown film extrusion. Polymer Engineering & Science, 43:398-418, 2003.
• L. H. Chiang and R. D. Braatz. Process monitoring using causal map and multivariate statistics: Fault detection and identification. Chemom. Int. Lab. Syst., 65:159-178, 2003.
• R. D. Braatz. Advanced control of crystallization processes. Annual Reviews in Control, 26:87-99, 2002 (invited).
• D. L. Ma, J. G. VanAntwerp, M. Hovd, and R. D. Braatz. Quantifying the potential benefits of constrained control for a large scale system. Special Section on Cross Directional Control, IEE
Proceedings - Control Theory and Applications, 149:423-432, 2002.
• M. Fujiwara, P. S. Chow, D. L. Ma, and R. D. Braatz. Paracetamol crystallization using laser backscattering and ATR-FTIR spectroscopy: Metastability, agglomeration, and control. Crystal Growth &
Design, 2:363-370, 2002.
• T. Togkalidou, H.-H. Tung, Y. Sun, A. Andrews, and R. D. Braatz. Solution concentration prediction for pharmaceutical crystallization processes using robust chemometrics and ATR FTIR
spectroscopy. Org. Process Res. Dev., 6:317-322, 2002.
• D. L. Ma, D. K. Tafti, and R. D. Braatz. Optimal control and simulation of multidimensional crystallization processes. Special Issue on Distributed Parameter Systems, Comp. & Chem. Eng.,
26:1103-1116, 2002 (invited).
• D. L. Ma, D. K. Tafti, and R. D. Braatz. High resolution simulation of multidimensional crystal growth. Special Issue in Honor of William R. Schowalter, Ind. Eng. Chem. Res., 41:6217-6223, 2002
• D. L. Ma, D. Tafti, and R. D. Braatz. Compartmental modeling of multidimensional crystallization. Special Issue on Crystallization and Interfacial Processes, Int. J. of Modern Physics B,
16:383-390, 2002.
• R. Gunawan, D. L. Ma, M. Fujiwara, and R. D. Braatz. Identification of kinetic parameters in a multidimensional crystallization process. Special Issue on Crystallization and Interfacial
Processes, Int. J. of Modern Physics B, 16:367-374, 2002.
• R. D. Braatz, M. Fujiwara, D. L. Ma, T. Togkalidou, and D. K. Tafti. Simulation and new sensor technologies for industrial crystallization: A review. Special Issue on Crystallization and
Interfacial Processes, Int. J. of Modern Physics B, 16:346-353, 2002 (invited).
• E. L. Russell and R. D. Braatz. The average-case identifiability and controllability of large scale systems. J. of Process Control, 12:823-829, 2002.
• T. Togkalidou, M. Fujiwara, S. Patel, and R. D. Braatz. Solute concentration prediction using chemometrics and ATR-FTIR spectroscopy. J. of Crystal Growth, 231:534-543, 2001.
• D. L. Ma and R. D. Braatz. Worst-case analysis of finite-time control policies. IEEE Trans. on Control Syst. Tech., 9:766-774, 2001.
• R. Gunawan, E. L. Russell, and R. D. Braatz. Comparison of theoretical and computational characteristics of dimensionality reduction methods for large scale uncertain systems. J. of Process
Control, 11:543-552, 2001.
• J. G. VanAntwerp, A. P. Featherstone, and R. D. Braatz. Robust cross-directional control of large scale sheet and film processes. J. of Process Control, 11:149-178, 2001.
• T. Togkalidou, R. D. Braatz, B. K. Johnson, O. Davidson, and A. Andrews. Experimental design and inferential modeling in pharmaceutical crystallization. AIChE J., 47:160-168, 2001.
• E. L. Russell, L. H. Chiang, and R. D. Braatz. Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis. Chemom. Int. Lab. Syst.,
51:81-93, 2000.
• L. H. Chiang, E. L. Russell, and R. D. Braatz. Fault diagnosis in chemical processes using Fisher discriminant analysis, discriminant partial least squares, and principal component analysis.
Chemom. Int. Lab. Syst., 50:243-252, 2000.
• J. G. VanAntwerp and R. D. Braatz. Model predictive control of large scale processes. J. of Process Control, 10:1-8, 2000.
• J. G. VanAntwerp and R. D. Braatz. A tutorial on linear and bilinear matrix inequalities. J. of Process Control, 10:363-385, 2000.
• J. G. VanAntwerp and R. D. Braatz. Fast model predictive control of sheet and film processes. IEEE Trans. on Control Syst. Tech., 8:408-417, 2000.
• S. H. Chung, D. L. Ma, and R. D. Braatz. Optimal experimental design in batch crystallization. Chemom. Int. Lab. Syst., 50:83-90, 2000.
• R. D. Braatz and E. L. Russell. Robustness margin computation for large scale systems. Comp. & Chem. Eng., 23:1021-1030, 1999.
• C. L. Mangun, R. D. Braatz, J. Economy, and A. J. Hall. Fixed bed adsorption of acetone and ammonia onto oxidized activated carbon fibers. Ind. Eng. Chem. Res., 38:3499-3504, 1999.
• D. L. Ma, S. H. Chung, and R. D. Braatz. Worst-case performance analysis of optimal batch control trajectories. AIChE J., 45:1469- 1476, 1999.
• S. H. Chung, D. L. Ma, and R. D. Braatz. Optimal seeding in batch crystallization. Can. J. of Chem. Eng., 77:590-596, 1999.
• J. G. VanAntwerp, R. D. Braatz, and N. V. Sahinidis. Globally optimal robust process control. J. of Process Control, 9:375-383, 1999.
• R. D. Braatz and O. D. Crisalle. Robustness analysis for systems with ellipsoidal uncertainty. Int. J. of Robust and Nonlinear Control, 8:1113-1117, 1998.
• E. L. Russell and R. D. Braatz. Model reduction for the robustness margin computation of large scale uncertain systems. Comp. & Chem. Eng., 22:913-926, 1998.
• A. P. Featherstone and R. D. Braatz. Input design for large scale sheet and film processes. Ind. Eng. Chem. Res., 37:449-454, 1998.
• C. L. Mangun, M. A. Daley, R. D. Braatz, and J. Economy. Effect of pore size on adsorption of hydrocarbons in phenolic-based activated carbon fibers. Carbon, 36:123-131, 1998.
• A. P. Featherstone and R. D. Braatz. Integrated robust identification and control of large scale processes. Ind. Eng. Chem. Res., 37:97-106, 1998.
• A. P. Featherstone and R. D. Braatz. Control-oriented modeling of sheet and film processes. AIChE J., 43:1989-2001, 1997.
• E. Rios-Patron and R. D. Braatz. On the identification and control of dynamical systems using neural networks. IEEE Trans. on Neural Networks, 8:452, 1997.
• E. L. Russell, C. P. H. Power, and R. D. Braatz. Multidimensional realization of large scale uncertain systems for multivariable stability margin computation. Int. J. of Robust and Nonlinear
Control, 7:113-125, 1997.
• R. D. Braatz and M. Morari. On the stability of systems with mixed time-varying parameters. Int. J. of Robust and Nonlinear Control, 7:105-112, 1997.
• I. G. Horn, J. R. Arulandu, C. J. Gombas, J. G. VanAntwerp, and R. D. Braatz. Improved filter design in internal model control. Ind. Eng. Chem. Res., 35:3437-3441, 1996.
• M. Hovd, R. D. Braatz, and S. Skogestad. SVD controllers for H2, H-infinity, and mu-optimal control. Automatica, 33:433-439, 1996.
• R. D. Braatz, M. Morari, and S. Skogestad. Loopshaping for robust performance. Int. J. of Robust and Nonlinear Control, 6:805-823, 1996.
• R. D. Braatz, J. H. Lee, and M. Morari. Screening plant designs and control structures for uncertain systems. Comp. & Chem. Eng., 20:463-468, 1996.
• J. H. Lee, R. D. Braatz, M. Morari, and A. Packard. Screening tools for robust control structure selection. Automatica, 31:229-235, 1995.
• R. D. Braatz, P. M. Young, J. C. Doyle, and M. Morari. Computational complexity of mu calculation. IEEE Trans. on Auto. Control, 39:1000-1002, 1994.
• R. D. Braatz and M. Morari. Minimizing the Euclidean condition number. SIAM J. on Control and Optim., 32:1763-1768, 1994.
• D. Laughlin, M. Morari, and R. D. Braatz. Robust performance of cross-directional basis-weight control in paper machines. Automatica, 29:1395-1410, 1993. | {"url":"http://web.mit.edu/braatzgroup/publications.html","timestamp":"2014-04-17T16:28:59Z","content_type":null,"content_length":"58294","record_id":"<urn:uuid:b623e6b5-e036-4114-8eb5-af2771f69293>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cannot Solve this Differential Equation
February 17th 2010, 06:44 PM
Cannot Solve this Differential Equation
I would like some help solving this differential equation if possible:
y'' + 2y' + y = xe^(-x)
I end up with the complementary solution of the equation being:
y = C1*e^(-x) + C2*xe^(-x) + yp
However, here I get stumped on how to solve for yp. I set the trial solution as (Ax+B)e^(-x) and then differentiate to find y' and y'' and plug them into the equation. But then everything cancels out, giving me x = 0.
Can someone explain how to solve the rest of this?
February 17th 2010, 08:19 PM
I would like some help solving this differential equation if possible:
y'' + 2y' + y = xe^(-x)
I end up with the complementary solution of the equation being:
y = C1*e^(-x) + C2*xe^(-x) + yp
However, here I get stumped on how to solve for yp. I set the trial solution as (Ax+B)e^(-x) and then differentiate to find y' and y'' and plug them into the equation. But then everything cancels out, giving me x = 0.
Can someone explain how to solve the rest of this?
Since your complementary solution has the same form as the particular solution, you need to increase the degree of the polynomial in $y_p$ by 2.
So the form will be $(Ax^3+Bx^2+Cx+D)e^{-x}$
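(An aside not in the original thread: a quick sympy check of the full solution, assuming sympy is available.)

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) + y(x), x*sp.exp(-x))
print(sp.dsolve(ode, y(x)))
# Eq(y(x), (C1 + C2*x + x**3/6)*exp(-x))
```

So A = 1/6 and B = C = D = 0, once the homogeneous part absorbs the lower-order terms.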
If you write the ODE out as an operator you get

$(D+1)^2 y = xe^{-x}$

To annihilate the right hand side you need to act on the equation by $(D+1)^2$ again. This gives

$(D+1)^4 y = 0$

This gives the form

$y_p = (Ax^3+Bx^2+Cx+D)e^{-x}$
as above | {"url":"http://mathhelpforum.com/differential-equations/129387-cannot-solve-differential-equation-print.html","timestamp":"2014-04-21T07:16:45Z","content_type":null,"content_length":"6369","record_id":"<urn:uuid:15873a97-c812-48e8-9395-e78d28270304>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
Professor Gerard Milburn
• Generation and stabilization of a three-qubit entangled W state in circuit QED via quantum feedback control
Huang, Shang-Yu, Goan, Hsi-Sheng, Li, Xin-Qi and Milburn, G. J. (2013) Generation and stabilization of a three-qubit entangled W state in circuit QED via quantum feedback control. Physical Review A
- Atomic, Molecular, and Optical Physics, 88 6: 062311.1-062311.11.
• Entangled mechanical cat states via conditional single photon optomechanics
Akram, Uzma, Bowen, Warwick P. and Milburn, G. J. (2013) Entangled mechanical cat states via conditional single photon optomechanics. New Journal of Physics, 15 : 093007.1-093007.21.
• Deterministic many-resonator W entanglement of nearly arbitrary microwave states via attractive Bose-Hubbard simulation
Gangat, A. A., McCulloch, I. P. and Milburn, G. J. (2013) Deterministic many-resonator W entanglement of nearly arbitrary microwave states via attractive Bose-Hubbard simulation. Physical Review
X, 3 3: 031009.1-031009.11.
• Dynamical tunneling with ultracold atoms in magnetic microtraps
Lenz, Martin, Wüster, Sebastian, Vale, Christopher J., Heckenberg, Norman R., Rubinsztein-Dunlop, Halina, Holmes, C. A., Milburn, G. J. and Davis, Matthew J. (2013) Dynamical tunneling with
ultracold atoms in magnetic microtraps. Physical Review A: Atomic, Molecular and Optical Physics, 88 1: 013635.1-013635.13.
• The 20th anniversary of quantum state engineering
Blatt, Rainer, Milburn, Gerard J. and Lvovsky, Alex (2013) The 20th anniversary of quantum state engineering. Journal of Physics B: Atomic, Molecular, and Optical Physics, 46 10:
• Quantum state preparation of a mechanical resonator using an optomechanical geometric phase
Khosla, K. E., Vanner, M. R., Bowen, W. P. and Milburn, G. J. (2013) Quantum state preparation of a mechanical resonator using an optomechanical geometric phase. New Journal of Physics, 15 :
• Milburn, Gerard J. (2013) Demonstrating uncertainty. Science, 339 6121: 770-771.
• Breakdown of the cross-kerr scheme for photon counting
Fan, Bixuan, Kockum, Anton F., Combes, Joshua, Johansson, Goran, Hoi, Io-chun, Wilson, C. M., Delsing, Per, Milburn, G. J. and Stace, Thomas M. (2013) Breakdown of the cross-kerr scheme for
photon counting. Physical Review Letters, 110 5: .
• Photon-phonon entanglement in coupled optomechanical arrays
Akram, Uzma, Munro, William, Nemoto, Kae and Milburn, G. J. (2012) Photon-phonon entanglement in coupled optomechanical arrays. Physical Review A, 86 4: .
• Decoherence and the conditions for the classical control of quantum systems
Milburn, G. J. (2012) Decoherence and the conditions for the classical control of quantum systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering
Sciences, 370 1975: 4469-4486.
• Reversible optical-to-microwave quantum interface
Barzanjeh, Sh, Abdi, M., Milburn, G. J., Tombesi, P. and Vitali, D. (2012) Reversible optical-to-microwave quantum interface. Physical Review Letters, 109 13: 130503.1-130503.5.
• Phonon number measurements using single photon opto-mechanics
Basiri-Esfahani, S., Akram, U. and Milburn, G. J. (2012) Phonon number measurements using single photon opto-mechanics. New Journal of Physics, 14 : .
• Multiscale photosynthetic and biomimetic excitation energy transfer
Ringsmuth, A. K., Milburn, G. J. and Stace, T. M. (2012) Multiscale photosynthetic and biomimetic excitation energy transfer. Nature Physics, 8 7: 562-567.
• Synchronization of many nanomechanical resonators coupled via a common cavity field
Holmes, C.A., Meaney, C.P. and Milburn, G.J. (2012) Synchronization of many nanomechanical resonators coupled via a common cavity field. Physical Review E, 85 6: 066203-1-066203-15.
• Quantum interface between an electrical circuit and a single atom
Kielpinski, D., Kafri, D., Woolley, M. J., Milburn, G. J. and Taylor, J. M. (2012) Quantum interface between an electrical circuit and a single atom. Physical Review Letters, 108 13:
• An introduction to quantum optomechanics
Milburn, G. J. and Woolley, M. J. (2011) An introduction to quantum optomechanics. Acta Physica Slovaca, 61 5: 483-602.
• Efficient quantum computing using coherent photon conversion
Langford, N. K., Ramelow, S., Prevedel, R., Munro, W. J., Milburn, G. J. and Zeilinger, A. (2011) Efficient quantum computing using coherent photon conversion. Nature, 478 7369: 360-363.
• Entangling optical and microwave cavity modes by means of a nanomechanical resonator
Barzanjeh, Sh., Vitali, D., Tombesi, P. and Milburn, G. J. (2011) Entangling optical and microwave cavity modes by means of a nanomechanical resonator. Physical Review A, 84 4: 042342.1-042342.6.
• Quantum entanglement between a nonlinear nanomechanical resonator and a microwave field
Meaney, Charles P., McKenzie, Ross H. and Milburn, G. J. (2011) Quantum entanglement between a nonlinear nanomechanical resonator and a microwave field. Physical Review E, 83 5:
• Phonon number quantum jumps in an optomechanical system
Gangat, A. A., Stace, T. M. and Milburn, G. J. (2011) Phonon number quantum jumps in an optomechanical system. New Journal of Physics, 13 : .
• Quantum measurement and control of single spins in diamond
Milburn, Gerard J. (2010) Quantum measurement and control of single spins in diamond. Science, 330 6008: 1188-1189.
• Linear amplification and quantum cloning for non-Gaussian continuous variables
Nha, Hyunchul, Milburn, G. J. and Carmichael, H. J. (2010) Linear amplification and quantum cloning for non-Gaussian continuous variables. New Journal of Physics, 12 10: 103010-1-103010-10.
• Continuous quantum nondemolition measurement of Fock states of a nanoresonator using feedback-controlled circuit QED
Woolley, M. J., Doherty, A. C. and Milburn, G. J. (2010) Continuous quantum nondemolition measurement of Fock states of a nanoresonator using feedback-controlled circuit QED. Physical Review B,
82 9: 1-5.
• Simulating quantum effects of cosmological expansion using a static ion trap
Menicucci, Nicolas C., Olson, S. Jay and Milburn, Gerard J. (2010) Simulating quantum effects of cosmological expansion using a static ion trap. New Journal of Physics, 12 : 095019 - 1-095019 -
• Single-photon opto-mechanics in the strong coupling regime
Akram, U., Kiesel, N., Aspelmeyer, M. and Milburn, G. J. (2010) Single-photon opto-mechanics in the strong coupling regime. New Journal of Physics, 12 : 083030 - 1-083030 - 13.
• Vibration-enhanced quantum transport
Semiao, F. L., Furuya, K. and Milburn, G. J. (2010) Vibration-enhanced quantum transport. New Journal of Physics, 12 Article # 083033: 1-14.
• Jahn-Teller instability in dissipative quantum systems
Meaney, Charles P., Duty, Tim, McKenzie, Ross H. and Milburn, G. J. (2010) Jahn-Teller instability in dissipative quantum systems. Physical Review A - Atomic, Molecular, and Optical Physics, 81
4: 1-11.
• Intracavity weak nonlinear phase shifts with single photon driving
Munro, W. J., Nemoto, K. and Milburn, G. J. (2010) Intracavity weak nonlinear phase shifts with single photon driving. Optics Communications, 283 5: 741-746.
• Giant Kerr nonlinearities in circuit quantum electrodynamics
Rebic, Stojan, Twamley, Jason and Milburn, Gerard J. (2009) Giant Kerr nonlinearities in circuit quantum electrodynamics. Physical Review Letters, 103 15: 150503.1-150503.4.
• Parametric self pulsing in a quantum opto-mechanical system
Holmes, C. A. and Milburn, G.J. (2009) Parametric self pulsing in a quantum opto-mechanical system. Fortschritte der Physik, 57 11-12: 1052-1063.
• Kerr nonlinearities and nonclassical states with superconducting qubits and nanomechanical resonators
Semiao, F. L., Furuya, K. and Milburn, G. J. (2009) Kerr nonlinearities and nonclassical states with superconducting qubits and nanomechanical resonators. Physical Review A (Atomic, Molecular and
Optical Physics), 79 6: 063811.1-063811.5.
• Quantum connectivity of space-time and gravitationally induced decorrelation of entanglement
Ralph, T. C., Milburn, G. J. and Downes, T. (2009) Quantum connectivity of space-time and gravitationally induced decorrelation of entanglement. Physical Review A, 79 2: 022121.1-022121.8.
• Dissipative quantum dynamics in low-energy collisions of complex nuclei
Diaz-Torres, A., Hinde, D. J., Dasgupta, M., Milburn, G. J. and Tostevin, J. A. (2008) Dissipative quantum dynamics in low-energy collisions of complex nuclei. Physical Review C (Nuclear Physics)
, 78 6: 064604-1-064604-6.
• Nanomechanical squeezing with detection via a microwave cavity
Woolley, M. J., Doherty, A. C., Milburn, G. J. and Schwab, K. C. (2008) Nanomechanical squeezing with detection via a microwave cavity. Physical Review A, 78 6: 062303-1-062303-11.
• Nonlinear quantum metrology using coupled nanomechanical resonators
Woolley, M. J., Milburn, G. J. and Caves, Carlton M. (2008) Nonlinear quantum metrology using coupled nanomechanical resonators. New Journal of Physics, 10 Article Number: 125018: 1-13.
• Milburn, G. J. and Woolley, M. J. (2008) Quantum nanoscience. Contemporary Physics, 49 6: 413-433.
• Quantum noise in a nanomechanical Duffing resonator
Babourina, E., Doherty, A. C. and Milburn, G. J. (2008) Quantum noise in a nanomechanical Duffing resonator. New Journal of Physics, 10 Article Number: 105020: 105020-1-105020-14.
• Noise in a superconducting single-electron transistor resonator driven by an external field
Rodrigues, D. A. and Milburn, G. J. (2008) Noise in a superconducting single-electron transistor resonator driven by an external field. Physical Review B, 78 10: Article Number: 104302.
• Hybrid quantum computation in quantum optics
van Loock, P., Munro, W. J., Nemoto, Kae, Spiller, T. P., Ladd, T. D., Braunstein, Samuel L. and Milburn, G. J. (2008) Hybrid quantum computation in quantum optics. Physical Review A (Atomic, Molecular and Optical Physics), 78 2: Article Number: 022303.
• Reply to "Comment on 'Decoherence and dissipation of a quantum harmonic oscillator coupled to two-level systems'"
Schlosshauer, Maximilian, Hines, A. P. and Milburn, G. J. (2008) Reply to "Comment on 'Decoherence and dissipation of a quantum harmonic oscillator coupled to two-level systems'". Physical Review
A, 78 1: 016102-1-016102-1.
• Wigner function evolution of quantum states in the presence of self-Kerr interaction
Stobinska, Magdalena, Milburn, G. J. and Wodkiewicz, Krzyztof (2008) Wigner function evolution of quantum states in the presence of self-Kerr interaction. Physical Review A (Atomic, Molecular and
Optical Physics), 78 1: 013810-1-013810-9.
• Photon Counting: Avalanche inspiration
Milburn, G. J. (2008) Photon Counting: Avalanche inspiration. Nature Photonics, 2 7: 392-393.
• Coherent control of single photon states
Milburn, G. J. (2008) Coherent control of single photon states. The European Physical Journal Special Topics, 159 1: 113-117.
• Decoherence and dissipation of a quantum harmonic oscillator coupled to two-level systems
Schlosshauer, Maximilian, Hines, A. P. and Milburn, G. J. (2008) Decoherence and dissipation of a quantum harmonic oscillator coupled to two-level systems. Physical Review A, 77 2:
• Single trapped ion as a time-dependent harmonic oscillator
Nicolas C Menicucci and Milburn, Gerard J. (2007) Single trapped ion as a time-dependent harmonic oscillator. Physical Review A, 76 5: 052105-1-052105-5.
• Beyond the Coherent Coupled Channels Description of Nuclear Fusion
Dasgupta, M., Hinde, D. J., Diaz-Torres, A., Bouriquet, B., Low, C. I., Milburn, G. J. and Newton, J. O. (2007) Beyond the Coherent Coupled Channels Description of Nuclear Fusion. Physical Review
Letters, 99 19: 192701-1-192701-4.
• Entangling a Nanomechanical Resonator and a Superconducting Microwave Cavity
Vitali, D, Tombesi, P., Woolley, M. J., Doherty, A. C. and Milburn, G. J. (2007) Entangling a Nanomechanical Resonator and a Superconducting Microwave Cavity. Physical Review A, 76 4:
• Improving single-photon sources with Stark tuning
Fernee, M. J., Rubinsztein-Dunlop, H. and Milburn, G. J. (2007) Improving single-photon sources with Stark tuning. Physical Review A, 75 4: 043815-1-043815-9.
• Entangling a Nanomechanical Resonator with a Microwave Field
Ringsmuth, A. and Milburn, G. J. (2007) Entangling a Nanomechanical Resonator with a Microwave Field. Journal of Modern Optics, 54 13-15: 2223-2235.
• Effective Generation of cat and kitten states
Stobinska, M., Milburn, Gerard J. and Wodkiewicz, K. (2007) Effective Generation of cat and kitten states. Open Systems and Information Dynamics, 14 1: 81-90.
• Quasi-probability methods for multimode conditional optical gates (Invited)
Milburn, G. J. (2007) Quasi-probability methods for multimode conditional optical gates (Invited). Journal of The Optical Society of America B: Optical Physics, 24 2: 020167-1-020167-4.
• Linear optical quantum computing with photonic qubits
Kok, Pieter, Munro, W. J., Nemoto, Kae, Ralph, T. C., Dowling, Jonathon P. and Milburn, G. J. (2007) Linear optical quantum computing with photonic qubits. Reviews of Modern Physics, 79 1:
• Model for an irreversible bias current in the superconducting qubit measurement process
Hutchinson, G. D., Holmes, C. A., Stace, T. M., Spiller, T. P., Milburn, G. J., Barrett, S. D., Hasko, D. G. and Williams, D. A. (2006) Model for an irreversible bias current in the
superconducting qubit measurement process. Physical Review A, 74 6: 062302-1-062302-17.
• Twamley, J. and Milburn, G. J. (2006) The quantum Mellin transform. New Journal of Physics, 8 : 328.1-328.20.
• Optimal estimation of one-parameter quantum channels
Sarovar, M. and Milburn, G. J. (2006) Optimal estimation of one-parameter quantum channels. Journal of Physics A-mathematical And General, 39 26: 8487-8505.
• Lorentz invariant intrinsic decoherence
Milburn, G. J. (2006) Lorentz invariant intrinsic decoherence. New Journal of Physics, 8 96: 96.1-96.18.
• Quantum computation by communication
Spiller, T. P., Nemoto, K., Braunstein, S. L., Munro, W. J., van Loock, P. and Milburn, G. J. (2006) Quantum computation by communication. New Journal of Physics, 8 : 30.1-30.26.
• Relational time for systems of oscillators
Milburn, G. J. and Poulin, D. (2006) Relational time for systems of oscillators. International Journal of Quantum Information, 4 1: 151-159.
• Quantum noise in the electromechanical shuttle: Quantum master equation treatment
Wahyu Utami, D., Goan, Hsi-Sheng, Holmes, C. A. and Milburn, G. J. (2006) Quantum noise in the electromechanical shuttle: Quantum master equation treatment. Physical Review B, 74 1: 014303.
• Quantum-information processing via a lossy bus
Barrett, S. D. and Milburn, G. J. (2006) Quantum-information processing via a lossy bus. Physical Review A, 74 6: 060302-1-060302-4.
• Spin-detection in a quantum electromechanical shuttle system
Twamley, J., Wahyu Utami, D., Goan, H. S. and Milburn, G. J. (2006) Spin-detection in a quantum electromechanical shuttle system. New Journal of Physics, 8 : 1-27.
• Teleportation via Multi-Qubit Channels
Links, J., Barjaktarevic, J. P., Milburn, G. J. and McKenzie, R. H. (2006) Teleportation via Multi-Qubit Channels. Quantum Information and Computation, 6 7: 641-670.
• Comment on "Evidence for quantized displacement in macroscopic nanomechanical oscillators"
Schwab, K., Blencowe, M. P., Roukes, M. L., Cleland, A. N., Girvin, S. M., Milburn, G. J. and Ekinci, K. L. (2005) Comment on "Evidence for quantized displacement in macroscopic nanomechanical
oscillators". Physical Review Letters, 95 24: 248901-248901.
• Communicating continuous quantum variables between different Lorentz frames
Kok, P., Ralph, T. C. and Milburn, G. J. (2005) Communicating continuous quantum variables between different Lorentz frames. Quantum Information & Computation, 5 3: 239-246.
• Continuous quantum error correction by cooling
Sarovar, M. and Milburn, G. J. (2005) Continuous quantum error correction by cooling. Physical Review A, 72 1: 012306.
• Dynamical creation of entanglement by homodyne-mediated feedback
Wang, J., Wiseman, H. M. and Milburn, G. J. (2005) Dynamical creation of entanglement by homodyne-mediated feedback. Physical Review A, 71 4: 042309.
• Entanglement sharing and decoherence in the spin-bath
Dawson, C. M., Hines, A. P., McKenzie, R. H. and Milburn, G. J. (2005) Entanglement sharing and decoherence in the spin-bath. Physical Review A, 71 5: 052321.
• Fast simulation of a quantum phase transition in an ion-trap realizable unitary map
Barjaktarevic, J. P., Milburn, G. J. and McKenzie, R. H. (2005) Fast simulation of a quantum phase transition in an ion-trap realizable unitary map. Physical Review A, 71 1: 012335.
• Foundations of quantum technology
Milburn, GJ (2005) Foundations of quantum technology. Journal of Computational And Theoretical Nanoscience, 2 2: 161-179.
• High-fidelity measurement and quantum feedback control in circuit QED
Sarovar, M., Goan, H. S., Spiller, T. P. and Milburn, G. J. (2005) High-fidelity measurement and quantum feedback control in circuit QED. Physical Review A, 72 6: 062327.
• Ion trap simulations of quantum fields in an expanding universe
Alsing, P. M., Dowling, J. P. and Milburn, G. J. (2005) Ion trap simulations of quantum fields in an expanding universe. Physical Review Letters, 94 22: 220401.
• Ion trap transducers for quantum electromechanical oscillators
Hensinger, W. K., Utami, D. W., Goan, H. S., Schwab, K., Monroe, C. and Milburn, G. J. (2005) Ion trap transducers for quantum electromechanical oscillators. Physical Review A, 72 4:
• Measurement-based teleportation along quantum spin chains
Barjaktarevic, J. P., McKenzie, R. H., Links, J. and Milburn, G. J. (2005) Measurement-based teleportation along quantum spin chains. Physical Review Letters, 95 23: 230501.
• Quantum entanglement and fixed-point bifurcations
Hines, A. P., McKenzie, R. H. and Milburn, G. J. (2005) Quantum entanglement and fixed-point bifurcations. Physical Review A, 71 4: 042303.
• Parity measurement of one- and two-electron double well systems
Stace, T. M., Barrett, S. D., Goan, H. S. and Milburn, G. J. (2004) Parity measurement of one- and two-electron double well systems. Physical Review B, 70 20: 205342-1-205342-14.
• Anharmonic effects on a phonon-number measurement of a quantum-mesoscopic-mechanical oscillator
Santamore, D. H., Goan, H. S., Milburn, G. J. and Roukes, M. L. (2004) Anharmonic effects on a phonon-number measurement of a quantum-mesoscopic-mechanical oscillator. Physical Review A, 70 5 A:
• Charge transport in a quantum electromechanical system
Wahyu Utami, D., Goan, Hsi-Sheng and Milburn, G. J. (2004) Charge transport in a quantum electromechanical system. Physical Review B, 70 7: 075303-1-075303-10.
• Teleportation in a non-inertial frame
Alsing, P. M., McMahon, D. and Milburn, G. J. (2004) Teleportation in a non-inertial frame. Journal of Optics B-quantum And Semiclassical Optics, 6 8: S834-S843.
• Bitwise bell-inequality violations for an entangled state involving 2N ions
Pope, D. T. and Milburn, G. J. (2004) Bitwise bell-inequality violations for an entangled state involving 2N ions. Physical Review A, 69 5: 052102-1-052102-11.
• Charge-based quantum computing using single donors in semiconductors
Hollenberg, L. C. L., Dzurak, A. S., Wellard, C., Hamilton, A. R., Reilly, D. J., Milburn, G. J. and Clark, R. G. (2004) Charge-based quantum computing using single donors in semiconductors.
Physical Review B, B69 11: 113301-1-113301-4.
• Deterministic generation of tailored-optical-coherent-state superpositions
Ritsch, H., Milburn, G. J. and Ralph, T. C. (2004) Deterministic generation of tailored-optical-coherent-state superpositions. Physical Review A, 70 3: 033804-1-033804-4.
• Entanglement and bifurcations in Jahn-Teller models
Hines, A. P., Dawson, C. M., McKenzie, R. H. and Milburn, G. J. (2004) Entanglement and bifurcations in Jahn-Teller models. Physical Review A, 70 2: 022303.
• Mesoscopic one-way channels for quantum state transfer via the quantum hall effect
Stace, T. M., Barnes, C. H. W. and Milburn, G. J. (2004) Mesoscopic one-way channels for quantum state transfer via the quantum hall effect. Physical Review Letters, 93 12: 126804-1-126804-4.
• Nonlinear quantum optical computing via measurement
Hutchinson, G. D. and Milburn, G. J. (2004) Nonlinear quantum optical computing via measurement. Journal of Modern Optics, 51 8: 1211-1222.
• Practical scheme for error control using feedback
Sarovar, M., Ahn, C. A., Jacobs, K. and Milburn, G. J. (2004) Practical scheme for error control using feedback. Physical Review A, 69 5: 05234-1-05234-11.
• Schrodinger cats and their power for quantum information processing
Gilchrist, A., Nemoto, K., Munro, W. J., Ralph, T. C., Glancy, S., Braunstein, S. L. and Milburn, G. J. (2004) Schrodinger cats and their power for quantum information processing. Journal of
Optics B- Quantum and Semiclassical Optics, 6 8: S828-S833.
• Teleportation with a uniformly accelerated partner
Alsing, P. M. and Milburn, G. J. (2003) Teleportation with a uniformly accelerated partner. Physical Review Letters, 91 18: 180404.
• Measuring the decoherence rate in a semiconductor charge qubit
Barrett, S. D. and Milburn, G. J. (2003) Measuring the decoherence rate in a semiconductor charge qubit. Physical Review B, 68 15: 155307-1-155307-9.
• Milburn, G. (2003) Quantum-dot computing. Physics World, 16 10: 24-24.
• Quantum technology: the second quantum revolution
Dowling, Jonathan P. and Milburn, Gerard J. (2003) Quantum technology: the second quantum revolution. Philosophical Transactions of The Royal Society of London Series A, 361 1809: 1655-1674.
• Experimental tests of quantum nonlinear dynamics in atom optics
Hensinger, Winfried K., Heckenberg, Norman R., Milburn, Gerard J. and Rubinsztein-Dunlop, Halina (2003) Experimental tests of quantum nonlinear dynamics in atom optics. Journal of Optics B:
Quantum and Semiclassical Optics, 5 2: R83-R120.
• Entangled two-photon source using biexciton emission of an asymmetric quantum dot in a cavity
Stace, T. M., Milburn, G. J. and Barnes, C. H. W. (2003) Entangled two-photon source using biexciton emission of an asymmetric quantum dot in a cavity. Physical Review B, 67 8:
• Entanglement of two-mode Bose-Einstein condensates
Hines, A. P., McKenzie, R. H. and Milburn, G. J. (2003) Entanglement of two-mode Bose-Einstein condensates. Physical Review A, 67 1: 013609.
• Experimental requirements for Grover's algorithm in optical quantum computation
Dodd, J. L., Ralph, T. C. and Milburn, G. J. (2003) Experimental requirements for Grover's algorithm in optical quantum computation. Physical Review A, 68 4: 042328-1-042328-8.
• Introduction to the issue on quantum Internet technologies
Jackson, D. J., Franson, J. D., Gilbert, G. and Milburn, G. (2003) Introduction to the issue on quantum Internet technologies. IEEE Journal of Selected Topics in Quantum Electronics, 9 6:
• Multipartite entanglement and quantum state exchange
Pope, D. T. and Milburn, G. J. (2003) Multipartite entanglement and quantum state exchange. Physical Review A, 67 5: 052107-1-052107-12.
• Phonon-assisted tunnelling in coupled quantum dots
Sun, H. and Milburn, G. J. (2003) Phonon-assisted tunnelling in coupled quantum dots. Physica Status Solidi (C), 4: 1301-1304.
• Progress in silicon-based quantum computation
Clark, R. G., McCallum, J. C., Milburn, G. J., O'Brien, J. L., Brenner, R., Chan, V., Buehler, T.M., Curson, N.J., Dzurak, A. S., Gauja, E., Goan, H., Greentree, A. D., Hallam, T., Hamilton, A.
R., Hollenberg, L. C. L., Jamieson, D. N., Oberbeck, L., Pakes, C. O., Prawer, S. D., Reilly, D. J. et al. (2003) Progress in silicon-based quantum computation. Philosophical transactions of the
Royal Society of London. Series A, Mathematical, physical and engineering sciences, 361 1808: 1451-1471.
• Quantum computation with optical coherent states
Ralph, T. C., Gilchrist, A., Milburn, G. J., Munro, W. J. and Glancy, S. (2003) Quantum computation with optical coherent states. Physical Review A, 68 4: 042319-1-042319-11.
• Quantum control and quantum entanglement
Ahn, C., Wiseman, H. M. and Milburn, G. J. (2003) Quantum control and quantum entanglement. European Journal of Control, 9 2-3: 279-284.
• Quantum error correction for continuously detected errors
Ahn, C., Wiseman, H. M. and Milburn, G. J. (2003) Quantum error correction for continuously detected errors. Physical Review A, 67 5: 052310-1-052310-11.
• Testing integrability with a single bit of quantum information
Poulin, D., Laflamme, R., Milburn, G. J. and Paz, J. P. (2003) Testing integrability with a single bit of quantum information. Physical Review A, 68 2: 022302.
• Dynamics of a strongly driven two-component Bose-Einstein condensate
Salmond, G. L., Holmes, C. A. and Milburn, G. J. (2002) Dynamics of a strongly driven two-component Bose-Einstein condensate. Physical Review A (Atomic, Molecular and Optical Physics), 65 3:
• Discrete teleportation protocol of continuum spectra field states
de Oliveira, M. C. and Milburn, G. J. (2002) Discrete teleportation protocol of continuum spectra field states. Physical Review A, 65 3: 032304-1-032304-5.
• Entanglement in the steady state of a collective-angular-momentum (Dicke) model
Schneider, S. and Milburn, G. J. (2002) Entanglement in the steady state of a collective-angular-momentum (Dicke) model. Physical Review A, 65 4: 042107.
• Implementing the quantum random walk
Travaglione, B. C. and Milburn, G. J. (2002) Implementing the quantum random walk. Physical Review A, 65 3: 032310.
• On entanglement and Lorentz transformations
Alsing, P. M. and Milburn, G. J. (2002) On entanglement and Lorentz transformations. Quantum Information & Computation, 2 6: 487-512.
• Preparing encoded states in an oscillator
Travaglione, B. C. and Milburn, G. J. (2002) Preparing encoded states in an oscillator. Physical Review A, 66 5: 052322.
• Quantum dynamics of two coupled qubits
Milburn, G. J., Laflamme, R., Sanders, B. C. and Knill, E. (2002) Quantum dynamics of two coupled qubits. Physical Review A, 65 3: 032316.
• Simple scheme for efficient linear optics quantum gates
Ralph, T. C., White, A. G., Munro, W. J. and Milburn, G. J. (2002) Simple scheme for efficient linear optics quantum gates. Physical Review A, 65 1: 012314-1-012314-6.
• Teleportation improvement by conditional measurements on the two-mode squeezed vacuum
Cochrane, P. T., Ralph, T. C. and Milburn, G. J. (2002) Teleportation improvement by conditional measurements on the two-mode squeezed vacuum. Physical Review A, 65 6: 062306-1-062306-6.
• Weak-force detection with superposed coherent states
Munro, W. J., Nemoto, K., Milburn, G. J. and Braunstein, S. L. (2002) Weak-force detection with superposed coherent states. Physical Review A, 66 2: 023819-1-023819-6.
• Quantum dynamics of three coupled atomic Bose-Einstein condensates
Nemoto, K., Holmes, C. A., Milburn, G. J. and Munro, W. J. (2001) Quantum dynamics of three coupled atomic Bose-Einstein condensates. Physical Review A, 63 1: 013604-1-013604-6.
• Hamiltonian mappings and circle packing phase spaces.
Scott, A. J., Holmes, C. A. and Milburn, G. J. (2001) Hamiltonian mappings and circle packing phase spaces. Physica D - Nonlinear Phenomena, 155 1-2: 34-50.
• A scheme for efficient quantum computation with linear optics
Knill, Emmanuel, Laflamme, Raymond and Milburn, Gerard James (2001) A scheme for efficient quantum computation with linear optics. Nature, 409 1861: 46-52.
• Continuous quantum measurement of two coupled quantum dots using a point contact: A quantum trajectory approach
Goan, H., Milburn, G. J., Sun, H. and Wiseman, H. M. (2001) Continuous quantum measurement of two coupled quantum dots using a point contact: A quantum trajectory approach. Physical Review B:
Condensed Matter and Materials Physics, 63 12: 125326-125337.
• Dynamical tunnelling of ultracold atoms
Hensinger, W. K., Haffer, H., Browaeys, A., Heckenberg, N. R., Helmerson, K., McKenzie, C., Milburn, G. J., Phillips, W. D., Rolston, S. L., Rubinsztein-Dunlop, H. and Upcroft, B. (2001)
Dynamical tunnelling of ultracold atoms. Nature, 412 6842: 52-55.
• Dynamics of a mesoscopic charge quantum bit under continuous quantum measurement
Goan, H. and Milburn, G. J. (2001) Dynamics of a mesoscopic charge quantum bit under continuous quantum measurement. Physical Review B, 64 23: 235307-1-235307-12.
• Efficient linear optics quantum computation
Milburn, G. J., Ralph, T. C., White, A. G., Knill, E. and Laflamme, R. (2001) Efficient linear optics quantum computation. Implementation of Quantum Computation, 1 : 13-19.
• Generation of eigenstates using the phase-estimation algorithm
Travaglione, B. C. and Milburn, G. J. (2001) Generation of eigenstates using the phase-estimation algorithm. Physical Review A, 63 3: 032301-1-032301-5.
• Multiple bifurcations in atom optics
Hensinger, W. K., Upcroft, B., Holmes, C. A., Heckenberg, N. R., Milburn, G. J. and Rubinsztein-Dunlop, H. (2001) Multiple bifurcations in atom optics. Physical Review A, 64 6: .
• Non-Markovian homodyne-mediated feedback on a two-level atom: a quantum trajectory treatment
Wang, J., Wiseman, H. M. and Milburn, G. J. (2001) Non-Markovian homodyne-mediated feedback on a two-level atom: a quantum trajectory treatment. Chemical Physics, 268 1: 221-235.
• Nonconventional computing paradigms in the new millennium: A roundtable
Zomaya, AY, Anderson, JA, Fogel, DB, Milburn, GJ and Rozenberg, G (2001) Nonconventional computing paradigms in the new millennium: A roundtable. Computing In Science & Engineering, 3 6: 82-99.
• Periodic orbit quantization of a Hamiltonian map on the sphere
Scott, A. and Milburn, G. J. (2001) Periodic orbit quantization of a Hamiltonian map on the sphere. Journal of Physics A: Mathematical and General, 34 37: 7541-7562.
• Quantum measurement of coherent tunnelling between quantum dots
Wiseman, H. M., Wahyu Utami, D., Sun, H., Milburn, G. J., Kane, B. E., Dzurak, A. and Clark, R. G. (2001) Quantum measurement of coherent tunnelling between quantum dots. Physical Review B, 63
23: 235308-235319.
• Quantum nonlinear dynamics of continuously measured systems
Scott, A. and Milburn, G. J. (2001) Quantum nonlinear dynamics of continuously measured systems. Physical Review A, 63 4: 042101-1-042101-10.
• Hug, M. and Milburn, G. J. (2001) Quantum slow motion. Physical Review A, 63 2: .
• Single-electron measurements with a micromechanical resonator
Polkinghorne, R. and Milburn, G. J. (2001) Single-electron measurements with a micromechanical resonator. Physical Review A, 64 4: 042318-1-042318-9.
• Teleportation with the entangled states of a beam splitter
Cochrane, P. T. and Milburn, G. J. (2001) Teleportation with the entangled states of a beam splitter. Physical Review A, 64 6: .
• Universal state inversion and concurrence in arbitrary dimensions
Rungta, P., Buzek, V., Caves, C. M., Hillery, M. and Milburn, G. J. (2001) Universal state inversion and concurrence in arbitrary dimensions. Physical Review A, 64 4: .
• Teleportation using coupled oscillator states
Cochrane, P. T., Milburn, G. J. and Munro, W. J. (2000) Teleportation using coupled oscillator states. Physical Review A, 62 6: 062307-1-062307-8.
• Universal teleportation with a twist
Braunstein, S. L., D'Ariano, G. M., Milburn, G. J. and Sacchi, M. F. (2000) Universal teleportation with a twist. Physical Review Letters, 84 15: 3486-3489.
• A new approach to current and noise in double quantum dot systems
Sun, H. B. and Milburn, G. J. (2000) A new approach to current and noise in double quantum dot systems. Physica E, 6 1-4: 664-667.
• Absorptive quantum measurements via coherently coupled quantum dots
Milburn, G. J., Sun, H. B. and Upcroft, B. (2000) Absorptive quantum measurements via coherently coupled quantum dots. Australian Journal of Physics, 53 4: 463-476.
• Entangled coherent-state qubits in an ion trap
Munro, W. J., Milburn, G. J. and Sanders, B. C. (2000) Entangled coherent-state qubits in an ion trap. Physical Review A, 62 052108: 1-4.
• High-frequency acousto-electric single-photon source
Foden, C. L., Talyanskii, V. I., Milburn, G. J., Leadbeater, M. L. and Pepper, M. (2000) High-frequency acousto-electric single-photon source. Physical Review A, 62 011803: 011803-1-011803-4.
• Ion Trap Quantum Computing with Warm Ions
Milburn, G. J., Schneider, S. and James, D. F. (2000) Ion Trap Quantum Computing with Warm Ions. Fortschritte der Physik, 48 9-11: 801-810.
• Milburn, GJ and Dyrting, S (2000) Quantum chaos in atom optics. Philosophical Magazine B-physics of Condensed Matter Statistical Mechanics Electronic Optical And Magnetic Properties, 80 12:
• Quantum computing using a neutral atom optical lattice: An appraisal
Milburn, GJ (2000) Quantum computing using a neutral atom optical lattice: An appraisal. Fortschritte der Physik, 48 9-11: 957-964.
• Quantum controlled-NOT gate with 'hot' trapped ions
Schneider, S., James, D. F. and Milburn, G. J. (2000) Quantum controlled-NOT gate with 'hot' trapped ions. Journal of Modern Optics, 47 2/3: 499-505.
• Quantum measurement and stochastic processes in mesoscopic conductors
Milburn, GJ (2000) Quantum measurement and stochastic processes in mesoscopic conductors. Australian Journal of Physics, 53 4: 477-487.
• Quantum phase transitions in a linear ion trap
Milburn, G. J. and Alsing, P. (2000) Quantum phase transitions in a linear ion trap. Directions in Quantum Optics, 561 : 303-312.
• Rungta, P., Munro, W. J., Nemoto, K., Deuar, P. and Milburn, G. J. (2000) Qudit entanglement. Directions in Quantum Optics, 561 : 149-164.
• Caves, CM and Milburn, GJ (2000) Qutrit entanglement. Optics Communications, 179 25 May 2000: 439-446.
• Sensitivity to measurement perturbation of single-atom dynamics in cavity QED
Liu, X. M., Hug, M. and Milburn, G. J. (2000) Sensitivity to measurement perturbation of single-atom dynamics in cavity QED. Physical Review A, 62 043801: 043801-1-043801-6.
• Single-spin measurement using single-electron transistors to probe two-electron systems
Kane, B. E., McAlpine, N. S., Dzurak, A. S., Clark, R. G., Milburn, G. J., Sun, H. B. and Wiseman, H. M. (2000) Single-spin measurement using single-electron transistors to probe two-electron
systems. Physical Review B, 61 4: 2961-2972.
• Two-dimensional nonlinear dynamics of evanescent-wave guided atoms in a hollow fiber
Liu, X. M. and Milburn, G. J. (2000) Two-dimensional nonlinear dynamics of evanescent-wave guided atoms in a hollow fiber. Physical Review A, 61 053401: 053401-1-053401-7.
• Quantum and classical chaos for a single trapped ion
Scott, A. J., Holmes, C. A. and Milburn, G. J. (1999) Quantum and classical chaos for a single trapped ion. Physical Review A, 61 1: (013401) 1-(013401) 7.
• A new approach to shot noise in resonant tunnelling current
Sun, HB and Milburn, GJ (1999) A new approach to shot noise in resonant tunnelling current. International Journal of Modern Physics B, 13 5 & 6: 505-509.
• Chaotic dynamics of cold atoms in far-off-resonant donut beams
Liu, X. and Milburn, G. J. (1999) Chaotic dynamics of cold atoms in far-off-resonant donut beams. Physical Review E, 59 3: 2842-2845.
• Milburn, G. (1999) Daniel F Walls 1942-1999. Physics World, 12 8: 48-48.
• Decoherence and fidelity in ion traps with fluctuating trap parameters
Schneider, S. and Milburn, G. J. (1999) Decoherence and fidelity in ion traps with fluctuating trap parameters. Physical Review A, 59 5: 3766-3774.
• Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping
Cochrane, PT, Milburn, GJ and Munro, WJ (1999) Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping. Physical Review A, 59 4: 2631-2634.
• Quantum open-systems approach to current noise in resonant tunneling junctions
Sun, H. and Milburn, G. J. (1999) Quantum open-systems approach to current noise in resonant tunneling junctions. Physical Review B, 59 16: 10748-10756.
• Quantum teleportation with squeezed vacuum states
Milburn, G. J. and Braunstein, S. L. (1999) Quantum teleportation with squeezed vacuum states. Physical Review A, 60 2: 937-942.
• Sensitivity to measurement errors in the quantum kicked top
Breslin, JK and Milburn, GJ (1999) Sensitivity to measurement errors in the quantum kicked top. Physical Review A, 59 3: 1781-1787.
• Weak-force detection using a double Bose-Einstein condensate
Corney, J. F., Milburn, G. J. and Zhang, W. P. (1999) Weak-force detection using a double Bose-Einstein condensate. Physical Review A, 59 6: 4630-4635.
• Characterizing Greenberger-Horne-Zeilinger correlations in nondegenerate parametric oscillation via phase measurements
Munro, W. J. and Milburn, G. J. (1998) Characterizing Greenberger-Horne-Zeilinger correlations in nondegenerate parametric oscillation via phase measurements. Physical Review Letters, 81 20:
• Measurement and state preparation via ion trap quantum computing
Schneider, S., Wiseman, H. M., Munro, W. J. and Milburn, G. J. (1998) Measurement and state preparation via ion trap quantum computing. Fortschritte Der Physik-Progress of Physics, 46 4-5:
• Milburn, GJ (1998) An atom-optical Maxwell demon. Australian Journal of Physics, 51 1: 1-8.
• Classical and quantum noise in electronic systems
Milburn, GJ and Sun, HB (1998) Classical and quantum noise in electronic systems. Contemporary Physics, 39 1: 67-79.
• Decoherence in ion traps due to laser intensity and phase fluctuations
Schneider, S. and Milburn, G. J. (1998) Decoherence in ion traps due to laser intensity and phase fluctuations. Physical Review A, 57 5: 3748-3752.
• Homodyne measurements on a Bose-Einstein condensate
Corney, J. F. and Milburn, G. J. (1998) Homodyne measurements on a Bose-Einstein condensate. Physical Review A, 58 3: 2399-2406.
• Interference in hyperbolic space
Chaturvedi, S, Milburn, GJ and Zhang, ZX (1998) Interference in hyperbolic space. Physical Review A, 57 3: 1529-1535.
• Measurements on trapped laser-cooled ions using quantum computations
D'Helon, C and Milburn, GJ (1998) Measurements on trapped laser-cooled ions using quantum computations. Fortschritte Der Physik-Progress of Physics, 46 6-8: 707-712.
• Quantum theory for current noise in resonant tunnelling devices
Sun, HB and Milburn, GJ (1998) Quantum theory for current noise in resonant tunnelling devices. Superlattices and Microstructures, 23 3-4: 883-891.
• Quantum-state protection in cavities
Vitali, D, Tombesi, P and Milburn, GJ (1998) Quantum-state protection in cavities. Physical Review A, 57 6: 4930-4944.
• Conditional variance reduction by measurements on correlated field modes
Breslin, JK and Milburn, GJ (1997) Conditional variance reduction by measurements on correlated field modes. Physical Review A, 55 2: 1430-1436.
• Controlling the decoherence of a "meter" via stroboscopic feedback
Vitali, D, Tombesi, P and Milburn, GJ (1997) Controlling the decoherence of a "meter" via stroboscopic feedback. Physical Review Letters, 79 13: 2442-2445.
• Correcting the effects of spontaneous emission on cold-trapped ions
D'Helon, C and Milburn, GJ (1997) Correcting the effects of spontaneous emission on cold-trapped ions. Physical Review A, 56 1: 640-644.
• Optimal quantum measurements for phase-shift estimation in optical interferometry
Sanders, BC, Milburn, GJ and Zhang, Z (1997) Optimal quantum measurements for phase-shift estimation in optical interferometry. Journal of Modern Optics, 44 7: 1309-1320.
• Optimal quantum trajectories for discrete measurements
Breslin, JK and Milburn, GJ (1997) Optimal quantum trajectories for discrete measurements. Journal of Modern Optics, 44 11-12: 2469-2484.
• Protecting Schrodinger cat states using feedback
Vitali, D, Tombesi, P and Milburn, GJ (1997) Protecting Schrodinger cat states using feedback. Journal of Modern Optics, 44 11-12: 2033-2041.
• Quantum chaos in the atomic gravitational cavity
Chen, WY and Milburn, GJ (1997) Quantum chaos in the atomic gravitational cavity. Physical Review E, 56 1: 351-354.
• Quantum dynamics of an atomic Bose-Einstein condensate in a double-well potential
Milburn, G. J., Corney, J., Wright, E. M. and Walls, D. F. (1997) Quantum dynamics of an atomic Bose-Einstein condensate in a double-well potential. Physical Review A, 55 6: 4318-4324.
• Quantum signatures of chaos in the dynamics of a trapped ion
Breslin, J. K., Holmes, C. A. and Milburn, G. J. (1997) Quantum signatures of chaos in the dynamics of a trapped ion. Physical Review A, 56 4: 3022-3027.
• Measurements on trapped laser-cooled ions using quantum computations
D'Helon, C. and Milburn, G. J. (1996) Measurements on trapped laser-cooled ions using quantum computations. Physical Review A, 54 6: 5141-5146.
• Creating metastable Schrodinger cat states (vol 75, pg 418, 1995)
Slosser, JJ and Milburn, GJ (1996) Creating metastable Schrodinger cat states (vol 75, pg 418, 1995). Physical Review Letters, 77 11: 2344-2344.
• Creating metastable Schrodinger cat states - Reply
Milburn, GJ (1996) Creating metastable Schrodinger cat states - Reply. Physical Review Letters, 77 11: 2337-2337.
• Effect of noise and modulation on the reflection of atoms from an evanescent wave
Chen, W. Y., Milburn, G. J. and Dyrting, S. (1996) Effect of noise and modulation on the reflection of atoms from an evanescent wave. Physical Review A, 54 2: 1510-1515.
• Reconstructing the vibrational state of a trapped ion
D'Helon, C. and Milburn, G. J. (1996) Reconstructing the vibrational state of a trapped ion. Physical Review A, 54 1: R25-R28.
• Quantum chaos in atom optics: Using phase noise to model continuous momentum and position measurement
Dyrting, S. and Milburn, G. J. (1996) Quantum chaos in atom optics: Using phase noise to model continuous momentum and position measurement. Quantum and Semiclassical Optics, 8 3: 541-555.
• Generalized uncertainty relations: Theory, examples, and Lorentz invariance
Braunstein, SL, Caves, CM and Milburn, GJ (1996) Generalized uncertainty relations: Theory, examples, and Lorentz invariance. Annals of Physics, 247 1: 135-173.
• Classical and quantum conditional statistical dynamics
Milburn, GJ (1996) Classical and quantum conditional statistical dynamics. Quantum and Semiclassical Optics, 8 1: 269-276.
• Measuring the vibrational energy of a trapped ion
D'Helon, C. and Milburn, G. J. (1995) Measuring the vibrational energy of a trapped ion. Physical Review A, 52 6: 4755-4762.
• Effect of Dissipation and Measurement On a Tunneling System
Wielinga, B and Milburn, GJ (1995) Effect of Dissipation and Measurement On a Tunneling System. Physical Review A, 52 4: 3323-3332.
• Optimal Quantum Measurements for Phase Estimation
Sanders, BC and Milburn, GJ (1995) Optimal Quantum Measurements for Phase Estimation. Physical Review Letters, 75 16: 2944-2947.
• Creating Metastable Schrodinger Cat States
Slosser, JJ and Milburn, GJ (1995) Creating Metastable Schrodinger Cat States. Physical Review Letters, 75 3: 418-421.
• Optimal Quantum Trajectories for Continuous Measurement
Breslin, JK, Milburn, GJ and Wiseman, HM (1995) Optimal Quantum Trajectories for Continuous Measurement. Physical Review Letters, 74 24: 4827-4830.
• Dynamics of Statistical Distance - Quantum Limits for 2-Level Clocks
Braunstein, SL and Milburn, GJ (1995) Dynamics of Statistical Distance - Quantum Limits for 2-Level Clocks. Physical Review A, 51 3: 1820-1826.
• Fractional Quantum Revivals in the Atomic Gravitational Cavity
Chen, WY and Milburn, GJ (1995) Fractional Quantum Revivals in the Atomic Gravitational Cavity. Physical Review A, 51 3: 2328-2333.
• Creating Number States in the Micromaser Using Feedback
Liebman, A and Milburn, GJ (1995) Creating Number States in the Micromaser Using Feedback. Physical Review A, 51 1: 736-751.
• Hyperbolic Phase and Squeeze-Parameter Estimation
Milburn, G. J., Chen, W. Y. and Jones, K. R. (1994) Hyperbolic Phase and Squeeze-Parameter Estimation. Physical Review A, 50 1: 801-804.
• Noise-Reduction in the Nondegenerate Parametric Oscillator with Direct-Detection Feedback
Slosser, JJ and Milburn, GJ (1994) Noise-Reduction in the Nondegenerate Parametric Oscillator with Direct-Detection Feedback. Physical Review A, 50 1: 793-800.
• Quantum-Theory of Optical Feedback Via Homodyne Detection - Reply
Wiseman, HM and Milburn, GJ (1994) Quantum-Theory of Optical Feedback Via Homodyne Detection - Reply. Physical Review Letters, 72 25: 4054-4054.
• All-optical versus electro-optical quantum-limited feedback
Wiseman, H. M. and Milburn, G. J. (1994) All-optical versus electro-optical quantum-limited feedback. Physical Review A, 49 5: 4110-4125.
• Quantum scattering of a two-level atom in the limit of large detuning
Dyrting, S. and Milburn, G. J. (1994) Quantum scattering of a two-level atom in the limit of large detuning. Physical Review A, 49 5: 4180-4188.
• Wiseman, H. M. and Milburn, G. J. (1994) Squeezing Via Feedback. Physical Review A, 49 2: 1350-1366.
• Interference in a Spherical Phase-Space and Asymptotic-Behavior of the Rotation Matrices
Lassig, CC and Milburn, GJ (1993) Interference in a Spherical Phase-Space and Asymptotic-Behavior of the Rotation Matrices. Physical Review A, 48 3: 1854-1860.
• Quantum Tunneling in a Kerr Medium with Parametric Pumping
Wielinga, B. and Milburn, G. J. (1993) Quantum Tunneling in a Kerr Medium with Parametric Pumping. Physical Review A, 48 3: 2494-2496.
• Nonlinear Quantum Dynamics At a Classical 2nd-Order Resonance
Dyrting, S, Milburn, GJ and Holmes, CA (1993) Nonlinear Quantum Dynamics At a Classical 2nd-Order Resonance. Physical Review E, 48 2: 969-978.
• Continuous Position Measurements and the Quantum Zeno Effect
Gagen, MJ, Wiseman, HM and Milburn, GJ (1993) Continuous Position Measurements and the Quantum Zeno Effect. Physical Review A, 48 1: 132-142.
• Quantum Features in the Scattering of Atoms From An Optical Standing Wave
Dyrting, S and Milburn, GJ (1993) Quantum Features in the Scattering of Atoms From An Optical Standing Wave. Physical Review A, 47 4: R2484-R2487.
• Interpretation of Quantum Jump and Diffusion-Processes Illustrated On the Bloch Sphere
Wiseman, HM and Milburn, GJ (1993) Interpretation of Quantum Jump and Diffusion-Processes Illustrated On the Bloch Sphere. Physical Review A, 47 3: 1652-1666.
• Intrinsic Decoherence in Quantum-Mechanics - Reply
Milburn, GJ (1993) Intrinsic Decoherence in Quantum-Mechanics - Reply. Physical Review A, 47 3: 2415-2416.
• Quantum-Theory of Optical Feedback Via Homodyne Detection
Wiseman, HM and Milburn, GJ (1993) Quantum-Theory of Optical Feedback Via Homodyne Detection. Physical Review Letters, 70 5: 548-551.
• Eavesdropping Using Quantum-Nondemolition Measurements
Werner, MJ and Milburn, GJ (1993) Eavesdropping Using Quantum-Nondemolition Measurements. Physical Review A, 47 1: 639-641.
• Quantum Theory of Field-Quadrature Measurements
Wiseman, H. M. and Milburn, G. J. (1993) Quantum Theory of Field-Quadrature Measurements. Physical Review A, 47 1: 642-662.
• Quantum-Noise Reduction in a Driven Cavity with Feedback
Liebman, A. and Milburn, G. J. (1993) Quantum-Noise Reduction in a Driven Cavity with Feedback. Physical Review A, 47 1: 634-638.
• Reduction in Laser-Intensity Fluctuations by a Feedback-Controlled Output Mirror
Wiseman, HM and Milburn, GJ (1992) Reduction in Laser-Intensity Fluctuations by a Feedback-Controlled Output Mirror. Physical Review A, 46 5: 2853-2858.
• Rydberg-Atom Phase-Sensitive Detection and the Quantum Zeno Effect
Milburn, GJ and Gagen, MJ (1992) Rydberg-Atom Phase-Sensitive Detection and the Quantum Zeno Effect. Physical Review A, 46 3: 1578-1585.
• Chaos and Coherence in An Optical-System Subject to Photon Nondemolition Measurement
Wielinga, B and Milburn, GJ (1992) Chaos and Coherence in An Optical-System Subject to Photon Nondemolition Measurement. Physical Review A, 46 2: 762-770.
• Quantum Limits to All-Optical Switching in the Nonlinear Mach-Zehnder Interferometer
Sanders, BC and Milburn, GJ (1992) Quantum Limits to All-Optical Switching in the Nonlinear Mach-Zehnder Interferometer. Journal of the Optical Society of America B-Optical Physics, 9 6: 915-920.
• Quantum Zeno Effect Induced by Quantum-Nondemolition Measurement of Photon Number
Gagen, MJ and Milburn, GJ (1992) Quantum Zeno Effect Induced by Quantum-Nondemolition Measurement of Photon Number. Physical Review A, 45 7: 5228-5236.
• Quantum Limits to All-Optical Phase-Shifts in a Kerr Nonlinear Medium
Sanders, BC and Milburn, GJ (1992) Quantum Limits to All-Optical Phase-Shifts in a Kerr Nonlinear Medium. Physical Review A, 45 3: 1919-1923.
• Noise-Reduction in a Laser by Nonlinear Damping
Wiseman, HM and Milburn, GJ (1991) Noise-Reduction in a Laser by Nonlinear Damping. Physical Review A, 44 11: 7815-7819.
• Intrinsic Decoherence in Quantum-Mechanics
Milburn, GJ (1991) Intrinsic Decoherence in Quantum-Mechanics. Physical Review A, 44 9: 5401-5406.
• Quantum Coherence and Classical Chaos in a Pulsed Parametric Oscillator with a Kerr Nonlinearity
Milburn, GJ and Holmes, CA (1991) Quantum Coherence and Classical Chaos in a Pulsed Parametric Oscillator with a Kerr Nonlinearity. Physical Review A, 44 7: 4704-4711.
• Quantum Noise-Reduction by Photodetection with Feedback
Milburn, G. J. (1991) Quantum Noise-Reduction by Photodetection with Feedback. Journal of Modern Optics, 38 10: 1973-1980.
• Quantum Nondemolition Measurements Using a Fully Quantized Parametric Interaction
Gagen, MJ and Milburn, GJ (1991) Quantum Nondemolition Measurements Using a Fully Quantized Parametric Interaction. Physical Review A, 43 11: 6177-6186.
• Photon Statistics of 2-Mode Squeezed States and Interference in 4-Dimensional Phase-Space
Caves, CM, Zhu, C, Milburn, GJ and Schleich, W (1991) Photon Statistics of 2-Mode Squeezed States and Interference in 4-Dimensional Phase-Space. Physical Review A, 43 7: 3854-3861.
• Interpretation for a Positive-P Representation
Braunstein, SL, Caves, CM and Milburn, GJ (1991) Interpretation for a Positive-P Representation. Physical Review A, 43 3: 1153-1159.
• Coherence and Chaos in a Quantum Optical-System
Milburn, GJ (1990) Coherence and Chaos in a Quantum Optical-System. Physical Review A, 41 11: 6567-6570.
• Correlated Photons in Parametric Frequency-Conversion with Initial 2-Mode Squeezed States
Gagen, MJ and Milburn, GJ (1990) Correlated Photons in Parametric Frequency-Conversion with Initial 2-Mode Squeezed States. Optics Communications, 76 3-4: 253-255.
• Quantum Nondemolition Measurement of Quantum Beats and the Enforcement of Complementarity
Sanders, BC and Milburn, GJ (1989) Quantum Nondemolition Measurement of Quantum Beats and the Enforcement of Complementarity. Physical Review A, 40 12: 7087-7092.
• Squeezed-State Superpositions in a Damped Nonlinear Oscillator
Milburn, GJ, Mecozzi, A and Tombesi, P (1989) Squeezed-State Superpositions in a Damped Nonlinear Oscillator. Journal of Modern Optics, 36 12: 1607-1614.
• Destruction of Quantum Coherence in a Nonlinear Oscillator Via Attenuation and Amplification
Daniel, DJ and Milburn, GJ (1989) Destruction of Quantum Coherence in a Nonlinear Oscillator Via Attenuation and Amplification. Physical Review A, 39 9: 4628-4640.
• Milburn, GJ (1989) Quantum Optical Fredkin Gate. Physical Review Letters, 62 18: 2124-2127.
• Photon-Number-State Preparation in Nondegenerate Parametric Amplification
Holmes, CA, Milburn, GJ and Walls, DF (1989) Photon-Number-State Preparation in Nondegenerate Parametric Amplification. Physical Review A, 39 5: 2493-2501.
• Quantum Chaotic System in the Generalized Husimi Representation - Comment
Milburn, GJ (1989) Quantum Chaotic System in the Generalized Husimi Representation - Comment. Physical Review A, 39 5: 2749-2750.
• Complementarity in a Quantum Nondemolition Measurement
Sanders, BC and Milburn, GJ (1989) Complementarity in a Quantum Nondemolition Measurement. Physical Review A, 39 2: 694-702.
• The Effect of Measurement On the Quantum Features of a Chaotic System
Sanders, BC and Milburn, GJ (1989) The Effect of Measurement On the Quantum Features of a Chaotic System. Zeitschrift Fur Physik B-Condensed Matter, 77 3: 497-510.
• Effect of Dissipation On Interference in Phase-Space
Milburn, GJ and Walls, DF (1988) Effect of Dissipation On Interference in Phase-Space. Physical Review A, 38 2: 1087-1090.
• Quantum Zeno Effect and Motional Narrowing in a 2-Level System
Milburn, GJ (1988) Quantum Zeno Effect and Motional Narrowing in a 2-Level System. Journal of the Optical Society of America B-Optical Physics, 5 6: 1317-1322.
• Quantum Nondemolition Measurements in Optical Cavities
Alsing, P., Milburn, G. J. and Walls, D. F. (1988) Quantum Nondemolition Measurements in Optical Cavities. Physical Review A, 37 8: 2970-2978.
• Quantum Measurement Theory of Optical Heterodyne-Detection
Milburn, GJ (1987) Quantum Measurement Theory of Optical Heterodyne-Detection. Physical Review A, 36 11: 5271-5279.
• Quantum-Mechanical Model for Continuous Position Measurements
Caves, CM and Milburn, GJ (1987) Quantum-Mechanical Model for Continuous Position Measurements. Physical Review A, 36 12: 5543-5555.
• Optical-Fiber Media for Squeezed-State Generation
Milburn, G. J., Levenson, M. D., Shelby, R. M., Perlmutter, S. H., Devoe, R. G. and Walls, D. F. (1987) Optical-Fiber Media for Squeezed-State Generation. Journal of the Optical Society of
America B-Optical Physics, 4 10: 1476-1489.
• Kicked Quantized Cavity Mode - An Open-Systems-Theory Approach
Milburn, GJ (1987) Kicked Quantized Cavity Mode - An Open-Systems-Theory Approach. Physical Review A, 36 2: 744-749.
• Linear-Amplifiers with Phase-Sensitive Noise
Milburn, GJ, Steyn-Ross, ML and Walls, DF (1987) Linear-Amplifiers with Phase-Sensitive Noise. Physical Review A, 35 10: 4443-4445.
• Atomic-Level Shifts in a Squeezed Vacuum
Milburn, GJ (1986) Atomic-Level Shifts in a Squeezed Vacuum. Physical Review A, 34 6: 4882-4885.
• Generation of squeezed states of light with a fiber-optic ring interferometer
Shelby, R. M., Levenson, M. D., Walls, D. F., Aspect, A. and Milburn, G. J. (1986) Generation of squeezed states of light with a fiber-optic ring interferometer. Physical Review A, 33 6:
• Dissipative Quantum and Classical Liouville Mechanics of the Anharmonic-Oscillator
Milburn, GJ and Holmes, CA (1986) Dissipative Quantum and Classical Liouville Mechanics of the Anharmonic-Oscillator. Physical Review Letters, 56 21: 2237-2240.
• Quantum and Classical Liouville Dynamics of the Anharmonic-Oscillator
Milburn, GJ (1986) Quantum and Classical Liouville Dynamics of the Anharmonic-Oscillator. Physical Review A, 33 1: 674-685.
• Analysis of a Quantum Measurement
Walls, DF, Collet, MJ and Milburn, GJ (1985) Analysis of a Quantum Measurement. Physical Review D, 32 12: 3208-3215.
• Effect of Dissipation On Quantum Coherence
Walls, D. F. and Milburn, G. J. (1985) Effect of Dissipation On Quantum Coherence. Physical Review A, 31 4: 2403-2408.
• Interaction of Squeezed States with Nonlinear Optical-Systems
Milburn, GJ (1985) Interaction of Squeezed States with Nonlinear Optical-Systems. Acta Physica Austriaca, 57 2: 95-109.
• Interaction of a 2-Level Atom with Squeezed Light
Milburn, GJ (1984) Interaction of a 2-Level Atom with Squeezed Light. Optica Acta, 31 6: 671-679.
• Multimode Minimum Uncertainty Squeezed States
Milburn, G. J. (1984) Multimode Minimum Uncertainty Squeezed States. Journal of Physics A-Mathematical and General, 17 4: 737-745.
• Quantum Phase Fluctuations and Squeezing in Degenerate 4-Wave Mixing
Milburn, G. J., Walls, D. F. and Levenson, M. D. (1984) Quantum Phase Fluctuations and Squeezing in Degenerate 4-Wave Mixing. Journal of the Optical Society of America B-Optical Physics, 1 3:
• Squeezing in a detuned parametric amplifier
Carmichael, H. J., Milburn, G. J. and Walls, D. F. (1984) Squeezing in a detuned parametric amplifier. Journal of Physics A - Mathematical and General, 17 2: 469-480.
• State Reduction in Quantum-Counting Quantum Nondemolition Measurements
Milburn, G. J. and Walls, D. F. (1984) State Reduction in Quantum-Counting Quantum Nondemolition Measurements. Physical Review A, 30 1: 56-60.
• Quantum Non Demolition Measurements
Walls, DF and Milburn, GJ (1983) Quantum Non Demolition Measurements. Lecture Notes in Physics, 182 : 249-263.
• Quantum Nondemolition Measurements On Coupled Harmonic-Oscillators
Milburn, G. J., Lane, A. S. and Walls, D. F. (1983) Quantum Nondemolition Measurements On Coupled Harmonic-Oscillators. Physical Review A, 27 6: 2804-2816.
• Quantum Nondemolition Measurements Via Quadratic Coupling
Milburn, G. J. and Walls, D. F. (1983) Quantum Nondemolition Measurements Via Quadratic Coupling. Physical Review A, 28 4: 2065-2070.
• Quantum Nondemolition Measurements Via Quantum Counting
Milburn, GJ and Walls, DF (1983) Quantum Nondemolition Measurements Via Quantum Counting. Physical Review A, 28 5: 2646-2648.
• Quantum Solutions of the Damped Harmonic-Oscillator
Milburn, G. J. and Walls, D. F. (1983) Quantum Solutions of the Damped Harmonic-Oscillator. American Journal of Physics, 51 12: 1134-1136.
• Squeezed States and Intensity Fluctuations in Degenerate Parametric Oscillation
Milburn, G. J. and Walls, D. F. (1983) Squeezed States and Intensity Fluctuations in Degenerate Parametric Oscillation. Physical Review A, 27 1: 392-394.
• Photo-Electron Counting Probabilities for Non-Classical Fields
Walls, DF, Milburn, GJ and Carmichael, HJ (1982) Photo-Electron Counting Probabilities for Non-Classical Fields. Optica Acta, 29 9: 1179-1182.
Interesting Question
March 4th 2006, 12:51 PM
Interesting Question
This is from a Mu Alpha Theta Competition I took today.
19. Let f and g be functions such that $f(x)^2=g^{-1}(4x-1)$. What is $f(x)f'(x)g'(f(x)^2)$?
A) Not enough information
C) $\frac{2}{4x-1}$
I didn't really know how to approach this, so a little push would be great.
March 4th 2006, 01:23 PM
Originally Posted by Jameson
This is from a Mu Alpha Theta Competition I took today.
19. Let f and g be functions such that $f(x)^2=g^{-1}(4x-1)$. What is $f(x)f'(x)g'(f(x)^2)$?
A) Not enough information
C) $\frac{2}{4x-1}$
I didn't really know how to approach this, so a little push would be great.
$f(x)^2=g^{-1}(4x-1)$
Rewrite as:
$g([f(x)]^2)=4x-1$
Now differentiate wrt x.
March 4th 2006, 03:05 PM
Originally Posted by CaptainBlack
$f(x)^2=g^{-1}(4x-1)$
Rewrite as:
$g([f(x)]^2)=4x-1$
Now differentiate wrt x.
How do you know it is differentiable :D
March 4th 2006, 10:04 PM
Originally Posted by ThePerfectHacker
How do you know it is differentiable :D
1. It is a linear form in $x$ and so differentiable wrt $x$.
2. If $f$ and $g$ are differentiable then this is differentiable. Asking the
question implies we may assume that $f$ and $g$ are differentiable, just
as it allows us to assume that $'$ denotes the derivative.
March 5th 2006, 09:58 AM
I was making a remark that in analysis you always need to show differentiability before taking the derivative; I just found that funny.
March 5th 2006, 01:11 PM
Originally Posted by CaptainBlack
$f(x)^2=g^{-1}(4x-1)$
Rewrite as:
$g([f(x)]^2)=4x-1$
Now differentiate wrt x.
So I get that $g'[f(x)^2]\cdot 2f(x)\cdot f'(x)=4$, so my answer is 2.
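As a quick sanity check (my own illustration, not part of the competition), the result can be verified symbolically with one concrete pair of functions satisfying the hypothesis; the choice $g(y)=e^y$, which forces $f(x)=\sqrt{\ln(4x-1)}$, is just an assumption for the example:

```python
from sympy import symbols, exp, log, sqrt, diff, simplify

x, y = symbols('x y', positive=True)

# Pick a concrete invertible g; then f is forced by f(x)^2 = g^{-1}(4x - 1).
g = exp(y)                # g(y) = e^y, so g^{-1}(u) = log(u)
f = sqrt(log(4*x - 1))    # f(x)^2 = log(4x - 1) = g^{-1}(4x - 1)

# Build f(x) * f'(x) * g'(f(x)^2) and simplify.
expr = f * diff(f, x) * diff(g, y).subs(y, f**2)
print(simplify(expr))     # -> 2, independent of x
```

The chain rule makes the x-dependence cancel, which is why the answer is the constant 2.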
March 5th 2006, 01:24 PM
Originally Posted by Jameson
So I get that $g'[f(x)^2]*2f(x)*f'(x)=4$, so my answer is 2.
That's what I get :) | {"url":"http://mathhelpforum.com/calculus/2079-interesting-question-print.html","timestamp":"2014-04-20T11:13:08Z","content_type":null,"content_length":"11232","record_id":"<urn:uuid:09ae08a8-b884-4184-9519-7b82bfcb2269>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Lattice Computations on The Gauge Connection
Nailing down the Yang-Mills problem
Millennium problems represent a major challenge for physicists and mathematicians. So far, the only one that has been solved is the Poincaré conjecture (now a theorem), by Grisha Perelman. For people working in strong interactions and quantum chromodynamics, the most interesting of these problems is the Yang-Mills existence and mass gap problem. A solution of this problem would imply a lot of consequences in physics, one of the most important being a deep understanding of the confinement of quarks inside hadrons. So far no solution is known, but things are not quite as they seem. A significant number of researchers have performed lattice computations to obtain the propagators of the theory over the full range of energy from infrared to ultraviolet, providing us with a deep understanding of what is going on here (see the Yang-Mills article on Wikipedia). The propagators to be considered are those of the gluon and the ghost. There has been a significant effort by theoretical physicists over the last twenty years to answer this question. It is not so widely known in the community, but it should be, because the work of these people could be the starting point for a great innovation in physics. In these days, a paper by Axel Maas on arXiv gives an excellent account of the situation of these lattice computations (see here). Axel has been an important contributor to this research area, and the current understanding of the behavior of Yang-Mills theory in two dimensions owes a lot to him. In this paper, Axel presents his computations on large volumes for Yang-Mills theory on the lattice in 2, 3 and 4 dimensions, in the SU(2) case. These computations are generally performed in the Landau gauge (propagators are gauge-dependent quantities), the most favorable gauge for them. In four dimensions the lattice is $(6\ fm)^4$, not the largest but surely enough for the aims of the paper. Of course, no surprise comes out with respect to what people have found since 2007. The scenario is well settled and is this:
1. The gluon propagator in 3 and 4 dimensions does not go to zero with momenta but is just finite. In 3 dimensions it has a maximum in the infrared, reaching its finite value at 0 from below. No such maximum is seen in 4 dimensions. In 2 dimensions the gluon propagator goes to zero with momenta.
2. The ghost propagator behaves like that of a free massless particle as the momenta are lowered. This is the dominant behavior in 3 and 4 dimensions. In 2 dimensions the ghost propagator is enhanced and goes to infinity faster than in 3 and 4 dimensions.
3. The running coupling in 3 and 4 dimensions is seen to go to zero as the momenta go to zero, to reach a maximum at intermediate energies, and to go asymptotically to 0 as momenta go to infinity (asymptotic freedom).
Here follows the figure for the gluon propagator
and for the running coupling
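To make the decoupling behavior summarized in the list above concrete, here is a minimal numerical sketch; it is my own illustration, not code or parameter values from Maas's paper. It assumes a simple massive ansatz $D(p^2)=Z/(p^2+M^2)$ for the gluon and a free ghost; this crude form only captures the infrared side, not the ultraviolet logarithmic running:

```python
import numpy as np

Z, M = 1.0, 0.5                # illustrative normalization and mass scale (GeV), not fitted values

p = np.logspace(-3, 1, 9)      # momenta from 0.001 to 10 GeV

D_gluon = Z / (p**2 + M**2)    # decoupling-type gluon propagator: finite at p = 0
F_ghost = 1.0 / p**2           # free massless ghost propagator

# Dressing functions are Z(p^2) = p^2 D(p^2); a common nonperturbative
# definition of the running coupling is alpha(p) ~ (gluon dressing) * (ghost dressing)^2.
alpha = (p**2 * D_gluon) * (p**2 * F_ghost)**2

for pi, Di, ai in zip(p, D_gluon, alpha):
    print(f"p = {pi:8.3f}  D(p) = {Di:8.3f}  alpha-like = {ai:6.3f}")
# D(p) -> Z/M^2 stays finite while alpha -> 0 as p -> 0, as in points 1 and 3 above.
```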
Some people are concerned about the running coupling. There is a recurring prejudice in Yang-Mills theory, without any theoretical or experimental support, that the theory should not be trivial in the infrared. On this view, the running coupling should not go to zero as momenta are lowered but should reach a finite non-zero value. Of course, a pure Yang-Mills theory does not exist in nature, and it is very difficult to get an understanding here. But in 2 and 3 dimensions the point is that the gluon propagator is very similar to a free one and the ghost propagator is certainly a free one; then, applying the duck test (if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck), the theory is really trivial in the infrared limit as well. Currently, there are two people in the world who have recognized a duck here: Axel Weber (see here and here), using the renormalization group, and me (see here, here and here). Now, claiming to see a duck where all the others claim to see a dinosaur does not make you the most popular guy in the district. But so it goes.
These lattice computations are an important cornerstone in the search for the behavior of a Yang-Mills theory. Whoever aims to present to the world his petty theory for the solution of the Millennium prize must comply with these results, showing that his theory is able to reproduce them. Otherwise what he has is just rubbish.
What also comes into sight is the proof of existence of the theory. Having two trivial fixed points, the theory is Gaussian in these limits, exactly like the scalar field theory. A Gaussian theory is the simplest example we know of a quantum field theory that is proven to exist. Could one recover the missing part between the two trivial fixed points, as also happens for the scalar theory? In the end, it is possible that Yang-Mills theory is just the vectorial counterpart of the well-known scalar field, the workhorse of all scholars in quantum field theory.
Axel Maas (2014). Some more details of minimal-Landau-gauge Yang-Mills propagators arXiv arXiv: 1402.5050v1
Axel Weber (2012). Epsilon expansion for infrared Yang-Mills theory in Landau gauge Phys. Rev. D 85, 125005 arXiv: 1112.1157v2
Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1
Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6
Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical
Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4
Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3
Back to CUDA
It was about two years ago that I wrote my last post about the CUDA technology by NVIDIA (see here). At that time I added two new graphics cards to my PC, bringing me to the verge of reaching 3 Tflops in single precision for lattice computations. Then I had an unlucky turn of events: those cards went back to the seller, as they were not working properly, and I was completely refunded. In the meantime the motherboard also failed, the hardware was largely changed, and so for a long time I have been without the opportunity to work with CUDA and to perform the intensive computations I had planned. As is well known, one can find a lot of software exploiting this excellent technology provided by NVIDIA and, during these years, it has been spreading widely, both in academia and in industry, making the life of researchers a lot easier. Personally, I am using it also at my workplace, and it is really exciting to have such computational capability at hand at a really affordable price.
Now, I am again able to equip my personal computer at home with a powerful Tesla card. Some of these cards are currently being decommissioned at the end of their service life, due to upgrades to more modern ones, and so can be found at a really small price on auction sites like eBay. So, I bought a Tesla M1060 for about 200 euros. As the name says, this card was not conceived for a personal computer but rather for servers produced by some OEMs. This can also be realized by looking at the card and seeing a passive cooler: the card has the proper physical dimensions to enter a server, while the active dissipation through fans is expected to be provided by the server itself. Indeed, I added an 80mm Enermax fan to my chassis (also an Enermax Enlobal) to ensure that the motherboard temperature does not reach too high values. My motherboard is an ASUS P8P67 Deluxe. This is a very good board, as usual for ASUS, providing three PCIe 2.0 slots; in principle, one can add up to three video cards together. But if you have a couple of NVIDIA cards in SLI configuration, the slots work at x8, while a single video card will work at x16. Of course, if you plan to work with these configurations, you will need a proper PSU. I have a Cooler Master Silent Pro Gold 1000 W and I am well beyond my needs. This is what remains from my preceding configuration and it is performing really well. I have also changed my CPU, which is now an Intel i3-2125 with two cores at 3.30 GHz and 3 Mb cache. Finally, I added 16 Gb of Corsair Vengeance DDR3 RAM.
The installation of the card went really smoothly and I got it up and running in a few minutes on Windows 8 Pro 64 bit, after the installation of the proper drivers. I checked with Matlab 2011b and the PGI compilers, with CUDA Toolkit 5.0 properly installed. All worked fine. I would like to spend a few words about the PGI compilers, produced by The Portland Group. I have got a trial license at home and tested them, while at my workplace we have a fully working license. These compilers make the writing of accelerated CUDA code absolutely easy: all you need is to insert some preprocessing directives into your C or Fortran code. I have executed some performance tests and the gain is really impressive, without ever writing a single line of CUDA code. These compilers can also be easily introduced into Matlab to yield mex-files or S-functions, even if they are not yet supported by Mathworks (they should be!), and this too I have verified without much difficulty, both for C and Fortran. A sketch of the directive-based approach is given below.
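Just to give the idea, here is a minimal sketch of what directive-based acceleration looks like. This is not code from my tests: it is a toy example, assuming OpenACC-style pragmas (the model the PGI directives converged to) and the usual PGI command line; the exact pragma spelling and flags depend on the compiler version.

#include <stdio.h>

/* The pragma asks the compiler to offload the loop to the GPU.
   Compile with something like: pgcc -acc -Minfo=accel saxpy.c */
void saxpy(int n, float a, float *restrict y, const float *restrict x)
{
    #pragma acc kernels loop independent
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    enum { N = 1 << 20 };
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(N, 3.0f, y, x);            /* offloaded loop; data moved by the compiler */
    printf("y[0] = %f\n", y[0]);     /* expect 5.0 */
    return 0;
}

All the parallelization and the host-device data movement are generated by the compiler, which is why not a single line of CUDA appears.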
Finally, I would like to give you an idea of the way I will use CUDA technology for my aims. What I am doing right now is porting some good code for the scalar field, and I would like to use it in the limit of large self-interaction to derive the spectrum of the theory. It is well known that if you take the limit of the self-interaction going to infinity you recover the Ising model. But I would like to see what happens at intermediate but large values, as I was not able to get any hint from the literature on this, notwithstanding that this is the workhorse for people doing lattice computations. What seems to matter today is to show triviality in four dimensions, by now well-established evidence. As soon as the accelerated code runs properly, I plan to share it here (a bare-bones sketch of the kind of kernel such a port revolves around is given below), as it is very easy to find good code for lattice QCD but very difficult to find good code for scalar field theory. Stay tuned!
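For the curious, here is a hypothetical sketch (not my code, and not the code I will share) of a checkerboard Metropolis update for lattice $\phi^4$, assuming the standard hopping-parameter form of the action, $S=\sum_x\left[-2\kappa\,\phi_x\sum_\mu \phi_{x+\hat\mu}+\phi_x^2+\lambda(\phi_x^2-1)^2\right]$; in the limit $\lambda\rightarrow\infty$ the field is pinned at $\phi_x=\pm 1$ and the Ising model is recovered.

#include <curand_kernel.h>

#define L 16                 /* lattice extent per dimension (toy value) */
#define V (L*L*L*L)          /* 4d volume */

__device__ int wrap(int i) { return (i + L) % L; }          /* periodic b.c. */
__device__ int idx(int x, int y, int z, int t)
{ return x + L*(y + L*(z + L*t)); }

/* One Metropolis hit on one checkerboard sublattice; even and odd sites do
   not interact, so all sites of a given parity can be updated in parallel.
   The rng state array is assumed initialized elsewhere with curand_init. */
__global__ void metropolis_sweep(float *phi, curandState *rng,
                                 float kappa, float lambda, int parity)
{
    int id = blockIdx.x * blockDim.x + threadIdx.x;
    if (id >= V) return;
    int x = id % L, y = (id/L) % L, z = (id/(L*L)) % L, t = id/(L*L*L);
    if ((x + y + z + t) % 2 != parity) return;

    /* sum of the 8 nearest neighbours */
    float nn = phi[idx(wrap(x+1),y,z,t)] + phi[idx(wrap(x-1),y,z,t)]
             + phi[idx(x,wrap(y+1),z,t)] + phi[idx(x,wrap(y-1),z,t)]
             + phi[idx(x,y,wrap(z+1),t)] + phi[idx(x,y,wrap(z-1),t)]
             + phi[idx(x,y,z,wrap(t+1))] + phi[idx(x,y,z,wrap(t-1))];

    curandState st = rng[id];
    float p0 = phi[id];
    float p1 = p0 + 1.5f * (curand_uniform(&st) - 0.5f);   /* proposal */
    /* local action difference for the update phi -> p1 */
    float dS = -2.0f*kappa*(p1 - p0)*nn + (p1*p1 - p0*p0)
             + lambda*((p1*p1 - 1.0f)*(p1*p1 - 1.0f)
                     - (p0*p0 - 1.0f)*(p0*p0 - 1.0f));
    if (dS <= 0.0f || curand_uniform(&st) < expf(-dS)) phi[id] = p1;
    rng[id] = st;
}

At large $\lambda$ the acceptance collapses unless proposals stay near $\pm 1$, which is exactly the regime where the Ising limit shows up.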
Large-N gauge theories on the lattice
Today I have found on arXiv a very nice review about large-N gauge theories on the lattice (see here). The authors, Biagio Lucini and Marco Panero, are well-known experts on lattice gauge theories, this being their main area of investigation. This review, to appear in Physics Reports, gives a nice introduction to this approach to managing non-perturbative regimes in gauge theories. This is essential to understand the behavior of QCD, both at zero and finite temperature, and to catch the behavior of the bound states commonly observed. Besides this, the question of confinement is still an open problem. Indeed, a theoretical understanding is lacking, and lattice computations, especially in the very simplifying limit of a large number of colors N as devised in the '70s by 't Hooft, can make the scenario clearer, favoring a better analysis.
What is seen is that confinement is fully preserved, as one gets an exactly linearly rising potential in the limit of N going to infinity, with higher-order corrections diminishing as N increases. The authors are able to estimate the string tension, obtaining (Fig. 7 in their paper):
$\frac{\Lambda_{\overline{MS}}}{\sqrt{\sigma}}\approx a+\frac{b}{N^2}.$
This is a reference result for whoever aims to get a solution to the mass gap problem for a Yang-Mills theory, as the string tension must be an output of such a result. The interquark potential has the form
$m(L)=\sigma L-\frac{\pi}{3L}+\ldots$
This ansatz agrees with numerical data down to distances of about $3/\sqrt{\sigma}$! Another fundamental result the authors cite for the four-dimensional case is the glueball spectrum (shown as a figure in the original post).
Again, these are reference values for the mass gap problem in a Yang-Mills theory. As my readers know, I was able to get them out from my computations (see here). More recently, I have also obtained higher-order corrections and the linearly rising potential (see here), with the string tension in a closed form very similar to the three-dimensional case. Finally, they give the critical temperature for the breaking of chiral symmetry. The result (given as an equation in the original post) is a ratio $T_c/\sqrt{\sigma}$ approaching a constant at large $N$. This is rather interesting because the constant is about $\sqrt{3/\pi^2}$. This result was obtained initially by Norberto Scoccola and Daniel Gómez Dumm (see here) and confirmed by me (see here). It pertains to a finite-temperature theory, and a mass gap analysis of Yang-Mills theory should recover it, but here the question is somewhat more complex. I would add to these lattice results also the studies of the propagators for a pure Yang-Mills theory in the Landau gauge, both at zero and finite temperature. The scenario has reached a really significant level of maturity and it is time for some of the theoretical proposals put forward so far to be compared with it. I have just cited some of these works, but the literature is now becoming increasingly vast, with other really meaningful techniques besides the ones cited.
As usual, I conclude this post on such a nice paper with the hope that maybe the time has come to increase the level of awareness in the community about the theoretical achievements on the question of the mass gap in quantum field theories.
Biagio Lucini, & Marco Panero (2012). SU(N) gauge theories at large N arXiv arXiv: 1210.4997v1
Marco Frasca (2008). Yang-Mills Propagators and QCD Nuclear Physics B (Proc. Suppl.) 186 (2009) 260-263 arXiv: 0807.4299v2
Marco Frasca (2011). Beyond one-gluon exchange in the infrared limit of Yang-Mills theory arXiv arXiv: 1110.2297v4
D. Gomez Dumm, & N. N. Scoccola (2004). Characteristics of the chiral phase transition in nonlocal quark models Phys.Rev. C72 (2005) 014909 arXiv: hep-ph/0410262v2
Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature Phys. Rev. C 84, 055208 (2011) arXiv: 1105.5274v4
Today in arXiv (2)
Today I have found in the arXiv daily listing some papers that are worthwhile to talk about. The contribution by Attilio Cucchieri and Tereza Mendes at the Ghent conference “The many faces of QCD” is out (see here). They study the gluon propagator in the Landau gauge at finite temperature on a significantly large lattice. The theory is SU(2) pure Yang-Mills. As you know, the gluon propagator in the Landau gauge at finite temperature is assumed to get two contributions: a longitudinal and a transverse one. This situation is quite different from the zero-temperature case, where such a distinction does not exist. But, of course, such a conclusion could only be drawn if the propagator were not that of massive excitations, and we already know from lattice computations that massive solutions are the ones supported. In this case we should expect that, at finite temperature, one of the components of the propagator is suppressed and a massive gluon is seen again. Tereza and Attilio see exactly this behavior (a picture extracted from their paper appeared here in the original post).
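For reference, the decomposition alluded to above can be written (in a standard notation; conventions vary by author) as $D_{\mu\nu}(p)=P^T_{\mu\nu}D_T(p_4,\vec p)+P^L_{\mu\nu}D_L(p_4,\vec p)$, where $P^T$ and $P^L$ are the projectors transverse and longitudinal with respect to the heat bath, both transverse with respect to the four-momentum as required by the Landau gauge; at zero temperature $D_T=D_L$ and the single zero-temperature propagator is recovered.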
The effect is markedly seen as the temperature is increased. The transverse propagator becomes ever more suppressed, while the longitudinal propagator reaches a plateau, as in the zero-temperature case, but with the position of the plateau depending on the temperature and increasing with it. Besides, Attilio and Tereza show how the computation of the longitudinal component is really sensitive to the lattice dimensions, and they increase them until the behavior settles to a stable one. In order to perform this computation they used their new CUDA machine (see here). This result is really beautiful and I can anticipate that it agrees quite well with computations that Marco Ruggieri and I are performing, yet to be published. Besides, they get a massive gluon of the right value, but with a mass decreasing with temperature, as can be deduced from the moving of the plateau of the longitudinal propagator, which indeed is the one of the decoupling solution at zero temperature.
As an aside, I would like to point you to a couple of works on QCD at finite temperature on the lattice from the Portuguese group headed by Pedro Bicudo, with the participation of Nuno Cardoso and Marco Cardoso. I have already pointed out their fine work on the lattice, which was very helpful for the studies I am still carrying on (you can find some links on their page). But now they have moved to the case of finite temperature (here and here). These papers are worthwhile to read.
Finally, I would like to point out a really innovative paper by Arata Yamamoto (see here). This is again a lattice computation performed at finite temperature, with an important modification: the chiral chemical potential. This is an important concept introduced, e.g. here and here, by Kenji Fukushima, Marco Ruggieri and Raoul Gatto. There is a fundamental reason to introduce a chiral chemical potential, and this is the sign problem seen in lattice QCD at finite density. This problem makes lattice computations meaningless unless some workaround is adopted, and the chiral chemical potential is one of these. Of course, this implies some relevant physical expectations that a lattice computation should confirm (see here). In this vein, this paper by Yamamoto is a really innovative one, facing this kind of computation on the lattice using a chiral chemical potential for the first time. Being a pioneering paper, the choice of too small volumes appears at first a shortcoming. As we have already discussed above for the gluon propagator in a pure Yang-Mills theory, the relevance of larger volumes for recovering the right physics cannot be overstated. As a consequence, the lattice spacing is 0.13 fm, corresponding to a physical energy of 1.5 GeV, which is high enough to miss the infrared region and so the range of validity of a possible Polyakov-Nambu-Jona-Lasinio model as currently used in the literature. So, while the track is opened by this paper, it appears necessary to expand the lattice, at least to recover the range of validity of infrared models and to guarantee in this way a proper comparison with results in the known literature. Notwithstanding these comments, the methods and the approach used by the author are a fundamental starting point for any future development.
Attilio Cucchieri, & Tereza Mendes (2011). Electric and magnetic Landau-gauge gluon propagators in
finite-temperature SU(2) gauge theory arXiv arXiv: 1105.0176v1
Nuno Cardoso, Marco Cardoso, & Pedro Bicudo (2011). Finite temperature lattice QCD with GPUs arXiv arXiv: 1104.5432v1
Pedro Bicudo, Nuno Cardoso, & Marco Cardoso (2011). The chiral crossover, static-light and light-light meson spectra, and
the deconfinement crossover arXiv arXiv: 1105.0063v1
Arata Yamamoto (2011). Chiral magnetic effect in lattice QCD with chiral chemical potential arXiv arXiv: 1105.0385v1
Fukushima, K., Ruggieri, M., & Gatto, R. (2010). Chiral magnetic effect in the Polyakov–Nambu–Jona-Lasinio model Physical Review D, 81 (11) DOI: 10.1103/PhysRevD.81.114031
Fukushima, K., & Ruggieri, M. (2010). Dielectric correction to the chiral magnetic effect Physical Review D, 82 (5) DOI: 10.1103/PhysRevD.82.054001
CUDA: Upgrading to 3 Tflops
When I was a graduate student I heard a lot about the wonderful performance of the Cray-1 parallel computer and the promise to explore unknown fields of knowledge with this unleashed power. This admirable machine reached a peak of 250 Mflops. Its close relative, the Cray-2, performed at 1700 Mflops, and for scientists this was indeed a new era in the attack on difficult mathematical problems. But when you look at QCD, all these seem just toys for a kindergarten, and one is not even able to perform the simplest computations to extract meaningful physical results. So, physicists started to design very specialized machines in the hope of improving the situation.
Today the situation has changed dramatically. The reason is that the increasing need to perform complex tasks on a video output requires extended parallel computation capability for very simple mathematical tasks. But such mathematical tasks are all one needs to perform scientific computations. The flagship company in this area is Nvidia, which produced CUDA for their graphics cards. This means that today one can have outperforming parallel computation on a desktop computer, and we are talking of some Teraflops of capability! All this at a very affordable cost. With a few bucks you can have on your desktop a machine performing a thousand times better than a legendary Cray machine, and today's counterpart of a Cray-1 is a CUDA cluster breaking the Petaflops barrier! Something people were only dreaming of just a few years ago. This means that you can do complex and meaningful QCD computations in your office, whenever you like, without the need to share CPU time with anybody, pushing your machine to its best. All this with costs that are not a concern anymore.
So, with this opportunity in sight, I jumped on this bandwagon and a few months ago I upgraded my desktop computer at home into a CUDA supercomputer. The first idea was just to buy old material on eBay at very low cost, building on what was already in my machine. In 2008 the top of the GeForce Nvidia line was the 9800 GX2. This card comes equipped with a couple of GPUs with 128 cores each, 0.5 Gbyte of RAM for each GPU, and support for CUDA architecture 1.1. No double precision is available: this option started to be present with cards having CUDA architecture 1.3, some time later. You can find such a card on eBay for about 100-120 euros. You will also need a proper motherboard. Indeed, again in 2008, Nvidia produced the nForce 790i Ultra, properly fitted for these aims. This board supports a 3-way SLI configuration and, as my readers know, I installed up to three 9800 GX2 cards on it. I got this board on eBay for a similar price as the video cards. Also, before starting this adventure, I already had a 750 W Cooler Master power supply. It did not take much time to have this hardware up and running, reaching the considerable computational power of 2 Tflops in single precision, all this with hardware at least 3 years old! For the operating system I chose Windows 7 Ultimate 64 bit, after an initial failure with Linux Ubuntu 64 bit.
There is a wide choice on the web of software to run for QCD. The most widespread is surely the MILC code. This code is written for a multi-processor environment and represents the effort of several people spanning several years of development. It is well written and rather well documented, and from this code a lot of papers on lattice QCD have gone through the most relevant archival journals. Quite recently they started to port this code to CUDA GPUs, following a trend common to all of academia. Of course, for my aims, being a lone user of CUDA and having not much time for development, I faced the not-so-attractive prospect of porting this code to GPUs myself. But, at the same time as I upgraded my machine, Pedro Bicudo and Nuno Cardoso published their paper on arXiv (see here) and promptly made their code for SU(2) QCD on CUDA GPUs available. You can download their up-to-date code here (if you plan to use this code, just let them know, as they are very helpful). So, I ported this code, originally written for Linux, to Windows 7 and got it up and running, obtaining correct output for lattices up to $56^4$, working in single precision only, as for this hardware configuration no double precision was available. The execution time was acceptable: a few seconds on the GPUs, plus some more at the start of the program due to CPU-GPU exchanges. So, already at this stage, I am able to be productive at a professional level with lattice computations (a quick memory count below shows why $56^4$ was about the ceiling).
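As an aside, a back-of-the-envelope count of my own (assuming the gauge field is stored as one SU(2) matrix per link, i.e. four real numbers, in single precision) gives for the link variables alone $56^4\times 4\ \mathrm{directions}\times 4\ \mathrm{reals}\times 4\ \mathrm{bytes}\approx 0.63\ \mathrm{Gbyte}$, already beyond the 0.5 Gbyte of a single 9800 GX2 GPU, so the lattice must be spread over the available GPUs.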
Just a little complaint is in order here. On the web it is very easy to find good code to perform lattice QCD, but it is almost impossible to find anything for the post-processing of configurations. This code is as important as the former: without the computation of observables, one can do nothing with configurations or whatever else lattice QCD yields on whatever powerful machine. So, I think it would be worthwhile to have both codes available, to get spectra, propagators and so on starting from a standard configuration file, independently of the program that generated it. Similarly, it appears almost impossible to get lattice code for computations in lattice scalar field theory (thanks a lot to Colin Morningstar for providing me code for 2+1 dimensions!). This is a workhorse for people learning lattice computation, and it would be helpful, at least for pedagogical reasons, to make it available in the same way QCD code is. But now, I leave complaints aside and go to the most interesting part of this post: the upgrade.
In these days I have made another effort to improve my machine. The idea is to improve performance, with larger lattices and shorter execution times, while reducing overheating and noise. Besides, the hardware I was working with was so old that the architecture did not make double precision available. So, I decided to buy a couple of GeForce 580 GTX cards. This is the top of the GeForce line (the 590 GTX is a couple of 580 GTXs on a single card) and yields 1.5 Tflops in single precision (the 9800 GX2 stopped at 1 Tflops in single precision). It has the Fermi architecture (CUDA 2.0) and grants double precision at a possible performance of at least 0.5 Tflops. But, as happens for all video cards, a model has several producers and these producers may decide to change something in performance. After some difficulties with the dealer, I was able to get a couple of high-performance MSI N580GTX Twin Frozr II/OC cards at a very convenient price. With respect to the original Nvidia card, these come overclocked, with a proprietary cooling system that grants a temperature reduced by 19°C with respect to the original card. Besides, higher-quality components were used. I received these cards yesterday and installed them immediately. In a few minutes Windows 7 installed the drivers. I recompiled my executable and finally performed a successful computation at $66^4$ with the latest version of Nuno and Pedro's code. Then, I checked the temperature of the cards with Nvidia System Monitor and saw 60°C for each card, with the cooler working at 106%. This was at least 24°C lower than my 9800 GX2 cards! Execution times on the GPUs were cut at least in half. This new configuration grants 3 Tflops in single precision and at least 1 Tflops in double precision. My present hardware configuration was given as a list at this point in the original post.
So far, I have not had much time to experiment with the new hardware. I hope to say more to you in the near future. Just stay tuned!
Nuno Cardoso, & Pedro Bicudo (2010). SU(2) Lattice Gauge Theory Simulations on Fermi GPUs J.Comput.Phys.230:3998-4010,2011 arXiv: 1010.4834v2
CUDA: The upgrade
As promised (see here), I am here to talk again about my CUDA machine. I have done the following upgrades:
• Added 4 GB of RAM; now I have 8 GB of DDR3 RAM clocked at 1333 MHz. This is the maximum allowed by my motherboard.
• Added a third 9800 GX2 graphics card. This is an XFX, while the other two already installed are EVGA and Nvidia respectively. These three cards are not perfectly identical, as the EVGA is overclocked by the manufacturer and the firmware may not be the same across all of them.
At the start of the upgrade process things were not so straightforward. Sometimes the BIOS complained at boot about the position of the cards in the three PCI Express 2.0 slots and the system did not start at all. But after I found the right permutation of the three cards, Windows 7 recognized all of them, the latest Nvidia drivers installed like a charm, and the Nvidia system monitor showed the physical situation of all the GPUs. Heat is a concern here, as the video cards work at about 70°C while the rest of the hardware is at about 50°C. The box is always open and I intend to keep it so, to reduce the risk of overheating to a minimum.
The main problem arose when I tried to run my CUDA applications from a command window. I have a simple program that just enumerates the GPUs in the system (a sketch of this kind of program is given below), and also the lattice-computation program of Pedro Bicudo and Nuno Cardoso can check the system to identify the exact set of resources to perform its work at best. Both applications, which I recompiled on the upgraded platform, saw just a single GPU. It was impossible, at first, to get a meaningful behavior from the system. I thought that this could be a hardware problem and contacted XFX support for my motherboard. I bought my motherboard second-hand, but I was able to register the product thanks to the seller, who had already done so. The people at XFX were very helpful and fast in giving me an answer. The technician essentially told me that the system should work, and he gave me some advice for identifying possible problems. I would like to recall that a 9800 GX2 contains two graphics cards, and so I have six GPUs to work with. I checked the whole system again until I got the nice configuration above, with Windows 7 seeing all the cards. Just one point remained unanswered: why my CUDA applications did not see the right number of GPUs. This had been an old problem for Nvidia and was overcome with a driver revision long before I tried for myself. Currently, my driver is 266.58, the latest one. The solution came out unexpectedly: it was enough to change a setting in the Performance menu of the Nvidia monitor for the use of multi-GPU, and I got back 5 GPUs instead of just 1. This is not six, but I fear that I cannot do better. The applications now work fine. I recompiled them all and successfully ran the lattice computation up to a $76^4$ lattice in single precision! With these numbers I am already able to perform professional work in lattice computations at home.
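For completeness, here is the kind of enumeration program I mean. This is a hypothetical sketch, not my actual tool, using the CUDA runtime API:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);                  /* how many GPUs the runtime sees */
    printf("CUDA devices visible: %d\n", n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("  %d: %s, %d multiprocessors, compute capability %d.%d\n",
               i, p.name, p.multiProcessorCount, p.major, p.minor);
    }
    return 0;
}

When the multi-GPU setting was wrong, the count printed by such a program was 1; after the fix it jumped to 5.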
Then I spent some time setting up the development environment with the Parallel Nsight debugger and Visual Studio 2008 for 64-bit applications. So far, I have been able to generate the executable of the lattice simulation under VS 2008. My aim is to debug it, to understand why some values become zero in the output when they should not. I would also like to understand why the new version of the lattice simulation that Nuno sent to me does not seem to work properly on my platform. I took some time to configure Parallel Nsight for my machine: you will need at least two graphics cards to get it running, and you have to activate PhysX on the Nvidia Performance monitor for the card that will not run your application. This was a simple enough task, as the online manual of the debugger is well written and the enclosed examples are absolutely useful. My next weekend will be spent fine-tuning all this and starting to do some work with the lattice simulation.
As I go further with this activity, I will keep you informed on my blog. If you want to start such an enterprise by yourself, feel free to get in touch with me to overcome the difficulties and hurdles you will encounter. Surely, things proved not to be as complicated as they appeared at the start.
A striking clue and some more
My colleagues who participated in “The many faces of QCD” in Ghent last year keep on publishing their contributions to the proceedings. This conference produced several outstanding talks, and so it is worthwhile to tell about them here. I have already done so here, here and here, and I have spent some words about the fine paper of Oliveira, Bicudo and Silva (see here). Today I would like to tell you about an interesting line of research due to Silvio Sorella and colleagues, and about a striking clue supporting my results on scalar field theory, originating from Axel Maas (see his blog).
Silvio is an Italian physicist who has lived and worked in Rio de Janeiro, Brazil, for a long time. I met him at Ghent, mistaking him for Daniele Binosi. Of course, I was aware of him through his works, which form an important track followed to understand the situation of low-energy Yang-Mills theory. I have already cited him in my blog, both for Ghent and for the Gribov obsession. He, together with David Dudal, Marcelo Guimaraes and Nele Vandersickel (our photographer in Ghent), published on arXiv a couple of contributions (see here and here). Let me explain in a few words why I consider the work of these authors really interesting. As I have said in my short history (see here), Daniel Zwanziger made some fundamental contributions to our understanding of gauge theories. For Yang-Mills theory, he concluded that the gluon propagator should go to zero at very low energies. This conclusion is at odds with current lattice results. The reason for this, as I have already explained, arises from the way Gribov copies are managed. Silvio and his colleagues have shown in a series of papers how Gribov copies and massive gluons can indeed be reconciled by accounting for condensates. A gluon condensate can explain a massive gluon while retaining all the ideas about Gribov copies, and this means that they have also found a way to refine the ideas of Gribov and Zwanziger, making them agree with lattice computations. This is a relevant achievement and a serious competing theory for our understanding of infrared non-Abelian theories. Last but not least, in these papers they are able to show a comparison with experiments, obtaining the masses of the lightest glueballs. This is the proper approach for whoever aims to understand what is going on in quantum field theory for QCD. I will keep on following the works of these authors, surely a relevant way to reach our common goal: to catch the way Yang-Mills theory behaves.
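To give an idea of what this refinement yields, the tree-level gluon propagator of the refined Gribov-Zwanziger scenario takes, schematically and up to the authors' conventions, the form $D(p^2)=\frac{p^2+M^2}{p^4+(M^2+m^2)p^2+M^2m^2+\lambda^4}$, where $m^2$ and $M^2$ come from the dimension-two condensates and $\lambda^4$ from the Gribov horizon. At zero momentum this gives $D(0)=M^2/(M^2m^2+\lambda^4)\neq 0$, i.e. a finite, massive-like propagator, exactly what the lattice shows.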
A really brilliant contribution is the one of Axel Maas. Axel was a former student of Reinhard Alkofer and of Attilio Cucchieri & Tereza Mendes. I would like to remind my readers that Axel had the brilliant idea of checking Yang-Mills theory on a two-dimensional lattice, raising a lot of fuss in our community that is still ongoing. On a similar line, his contribution to the Ghent conference is again a striking one. Axel has thought to couple a scalar field to the gluon field and study the corresponding behavior on the lattice. In these first computations he did not consider too large lattices (I would suggest he use CUDA…), limiting the analysis to $14^4$, $20^3$ and $26^2$. Anyhow, even for these small volumes, he is able to conclude that the propagator of the scalar field becomes a massive one, deviating from the tree-level approximation. The interesting point is that he sees a mass appear also in the case of the massless scalar field, producing groundbreaking evidence of what I proved in 2006 in my PRD paper! Besides, he shows that the renormalized mass is greater than the bare mass, again in agreement with my work. But, as also stated by the author, these are only clues, due to the small volumes he uses. Anyhow, this is a clever track to be pursued and further studies are needed. It would also be interesting to establish clearly whether this mass arises directly from the dynamics of the scalar field itself rather than from its interaction with the Yang-Mills field. A figure for the four-dimensional case in the quenched approximation appeared below in the original post.
I am sure that this image conveyed to my readers the same impression it made on me: a shocking result that seems to match, at first sight, the case of the gluon propagator on the lattice (the mapping theorem!). At larger volumes it would be interesting to see also the gluon propagator. I expect a lot of interesting results to come out of this approach.
Silvio P. Sorella, David Dudal, Marcelo S. Guimaraes, & Nele Vandersickel (2011). Features of the Refined Gribov-Zwanziger theory: propagators, BRST soft symmetry breaking and glueball masses arxiv
arXiv: 1102.0574v1
N. Vandersickel, D. Dudal, & S.P. Sorella (2011). More evidence for a refined Gribov-Zwanziger action based on an effective potential approach arXiv arXiv: 1102.0866
Axel Maas (2011). Scalar-matter-gluon interaction arxiv arXiv: 1102.0901v1
Frasca, M. (2006). Strongly coupled quantum field theory Physical Review D, 73 (2) DOI: 10.1103/PhysRevD.73.027701
Today on arxiv
I would like to write down a few lines about a paper published today on arXiv by Axel Maas (see here). This author draws an important conclusion about the propagators in Yang-Mills theories: these functions depend very little on the gauge group, once the coupling is kept fixed à la 't Hooft as $C_Ag^2$, $C_A$ being the adjoint Casimir of the group, which is $N$ for SU($N$). The observed changes are just quantitative rather than qualitative, as the author states. Axel does his computations on the lattice in 2 and 3 dimensions and gives an in-depth discussion of the way the scaling solution, the one not seen on the lattice except in the two-dimensional case, is obtained and of how the propagators are computed on the lattice. This paper opens up a new avenue in this kind of study and, as far as I can tell, such an extended analysis with respect to different gauge groups was never performed before. Of course, in d=3 the decoupling solution is obtained instead. Axel also shows the behavior of the running coupling. I would like to recall that a decoupling solution implies a massive gluon propagator and a photon-like ghost propagator, while the running coupling is strongly suppressed in the infrared.
The conclusion given in this paper is strong support for the work all the people studying the decoupling solution are carrying on. As you can see from my work (see here), the only dependence of my propagators on the gauge group is through the 't Hooft coupling. The same conclusion holds for other authors. It is my conviction that this paper is again an important support for most of the theoretical work done in these recent years. For his part, Axel confirms again a good nose for the choice of research avenues to be followed.
Axel Maas (2010). On the gauge-algebra dependence of Landau-gauge Yang-Mills propagators arxiv arXiv: 1012.4284v1
The many faces of QCD (2)
Back at home, the conference has ended. A lot of good impressions, both on the physics side and in other respects, such as the city and the company. On Friday I held my talk. All went fine and I was well inspired, so as to express my ideas at my best. You can find all the talks here. The pictures are here. Now it should be easier to identify me.
Disclaimer: The talks I will comment on are about results very near my research area. Talks I will not cite are important and interesting as well and the fact that I will not comment about them does
not imply merit for good or bad. Anyhow, I will appreciate any comment by any participant to the conference aiming to discuss his/her work.
On Tuesday afternoon a session about phases in QCD started. This field is very active and one where some breakthroughs are expected in the near future. I have had a lot of fun getting to know Eduardo Fraga, who was here with two of his students, Leticia Palhares and Ana Mizher. I invite you to read their talks, as these people are doing really fine work. On the same afternoon I listened to the talk of Pedro Bicudo. Pedro, besides being nice company for fun, is also a very good physicist performing relevant work in the area of lattice QCD. He is a pioneer in the use of CUDA, parallel computing using graphics processors, and I intend to use his code, produced with his student Nuno Cardoso, on my machine to start doing lattice QCD at very low cost. In his talk you can see a photo of one of my graphics cards. He used lattice computations to understand the phase diagram of QCD. Quite interesting was the talk of Jan Pawlowski about the phase diagram of two-flavor QCD. He belongs to the group of people that produced the so-called scaling solution, and it is a great moment to see them recognize the very existence of the decoupling solution, the only one presently seen in lattice computations.
On Wednesday the morning session continued along the same line as the preceding day. I would like to cite the work of Marco Ruggieri because, besides being a fine drinking companion (see below), he faces an interesting problem: how does the ground state of QCD change in the presence of a strong magnetic field? Particularly interesting is to see how the phase diagram gets modified. On the same line were the successive talks of Ana Mizher and Maxim Chernodub. Chernodub presented a claim that in this case the vacuum is that of an electromagnetic superconductor due to $\rho$ meson condensation. In this area of research the main approach is to use some phenomenological model: Ana Mizher used a linear sigma model, while Marco preferred the Nambu-Jona-Lasinio model. The reason for this is that the low-energy behavior of QCD is not under control, and the use of well-supported effective models is the smartest approach we have at our disposal. Of course, this explains why the work of our community is so important: if we are able to model the propagator of the gluon in the infrared, all the parameters of the Nambu-Jona-Lasinio model are properly fixed and we have the true infrared limit of QCD. So, the stakes are very high here.
In the afternoon there were some talks that touched very closely on the question of infrared propagators. Silvio Sorella is an Italian theoretical physicist living in Brazil. He is doing very good work in this quest for an understanding of the low-energy behavior of QCD, in collaboration with several other physicists. The idea is to modify the Gribov-Zwanziger scenario, which by itself would produce the scaling solution currently not seen on the lattice, to include the presence of a gluon condensate. This has the effect of producing massive propagators that agree well with lattice computations. In this talk Silvio showed how this approach can give the masses of the lowest states of the glueball spectrum. This has been an important step forward, showing how this approach can be used to give experimental forecasts. Daniel Zwanziger then presented a view of the confinement scenario. The conclusion was very frustrating: so far, nobody can go to the Clay Institute to claim the prize; more time is needed. Daniel is the one who proposed the scenario of infrared Yang-Mills theory that produced the scaling solution. The idea is to take into account the problem of Gribov copies and to impose that all the computations be limited to the first Gribov horizon. If you do this, the gluon propagator goes to zero with lowering momenta and you get positivity maximally violated, obtaining a confining theory. So, this scenario has been called Gribov-Zwanziger. From lattice computations we learned that the gluon propagator reaches a non-zero finite value at lowering momenta, and this motivated Silvio and others to see whether one could maintain the original idea of the Gribov horizon while bringing the Gribov-Zwanziger scenario into agreement with lattice computations.
Matthieu Tissier presented a talk with an original view. The idea is to consider QCD in a small perturbation expansion at one loop, with a mass term added by hand. He computed the gluon propagator and compared it with lattice data down to the infrared, obtaining very good agreement. Arlene Aguilar strongly criticized this approach, as he worked with a coupling larger than one (a huge one, said Arlene) even while doing small perturbation theory. I talked about this with Matthieu. My view is that the main thing to learn from this kind of computation is that if you take a Yukawa-like propagator with a mass running at least as $m^2+cq^2$ (do you remember Orlando Oliveira's talk?), the agreement with lattice data is surely fairly good; so, even if one has done something mathematically questionable, we still apprehend an important fact! The afternoon session was concluded by the talk of Daniele Binosi, with whom we spent a nice night in Ghent. He is a student of Joannis Papavassiliou and, together with Arlene Aguilar, this group is doing fine work on numerically solving Dyson-Schwinger equations to get the full propagator of Yang-Mills theory. They get very good agreement with lattice data and support the view that, over the full range of energies, the Cornwall propagator for the gluon, with a logarithmic running mass reaching a constant in the infrared, is the right description of the theory. Daniele presented a beautiful computation based on the Batalin-Vilkovisky framework that supported the conclusions of his group. It should be said that he presented a different definition of the running coupling, one that grants a non-trivial fixed point in the infrared. This is a delicate matter, as even a proper definition of the running coupling in the infrared is not a trivial question. Daniele's definition is quite different from that given by Andre Sternbeck in his talk, as the latter has just the trivial fixed point, as is emerging from the lattice computations.
On Thursday the first speaker was Attilio Cucchieri. Attilio and his wife, Tereza Mendes, are doing fine work in lattice computations, which reached a breakthrough at Lattice 2007 when they showed, with a volume of $(27\,fm)^4$, that the gluon propagator in the Landau gauge reaches a finite non-zero value with lowering momenta. This was a breakthrough, confirmed at the same conference by two other groups (Orlando Oliveira on one side and I. Bogolubsky, E.M. Ilgenfritz, M. Muller-Preussker and A. Sternbeck on the other), as for a long time it had been believed that the only true solution was the scaling one and that the gluon propagator should go to zero with lowering momenta. This had become such a paradigm that papers got rejected on the basis that they were claiming a different scenario. Attilio this time was on the very conservative side, presenting an interesting technical problem. Tereza's talk was more impressive, showing that, with higher temperatures and increasing volumes, the plateau in the Landau gauge is still there. With Tereza and Attilio we spent some nice time in a pub, discussing together with Marco Ruggieri the history of their community, how they went about changing everything on this matter, and their fight for this. I hope one day these people will write down this history, because there is a lot to learn from it. In the afternoon session there was a talk by Reinhard Alkofer. Alkofer was instrumental in making the scaling solution a paradigm in the community for many years. Unfortunately, lattice computations spoke against it and, as Bob Dylan once said, the times they are a-changin'. He has helped the community by discovering a lot of smart students who have given important contributions to it. In his talk he insisted on his view, with a proposal for the functional form of the propagator (this was missing until now for the scaling solution) and a computation of the mass of the $\eta'$. The $\eta'$ is a very strange particle: from ${\rm DA}\Phi{\rm NE}$ (KLOE-2) we know that it is not just a composite state of quarks but contains a large component made of glue. It is like having to cope with an excited hydrogen atom, and so also its decay is to be understood (you can read my paper here). So, maybe a more involved discussion is needed before one has an idea of how to get the mass of this particle. After Alkofer's talk
followed the talks of Aguilar and Papavassiliou. I would like to emphasize the relevance of the work of this group. Aguilar showed how they get an effective quark mass from Schwinger-Dyson equations when there is no enhancement in the ghost propagator. Papavassiliou proposed extending the background field method to Schwinger-Dyson equations. I invite you to check, in Arlene's talk, the agreement they get for the Cornwall propagator of the gluon with lattice data, and how this can give the form $m^2+cq^2$ at lower momenta. My view is that, combining my recent results on strongly coupled expansions for Yang-Mills and scalar field theories with the results of this group, a meaningful scenario is emerging, giving a complete comprehension of what is going on for Yang-Mills theory at lower energies. Joannis gave us an appointment for next year in Trento; I will do everything I can to be there! Finally, the session was completed by Axel Maas's talk. Axel was a student of Alkofer and worked with Attilio and Tereza. He put forward a lattice computation of Yang-Mills propagators in two dimensions that, for me, should have completely settled the question, but produced a lot of debate instead. In his talk he gave another bright idea: to study on the lattice a scalar theory interacting with gluons. I think this is a very smart way to understand the mechanism underlying mass generation in these theories. From the works discussed so far it should appear clear that the Schwinger mechanism (also at the classical level; see my talk!) is at work here. Axel's talk manifestly shows this. It would be interesting if he could redo the computations with a massless scalar field, to unveil completely the dynamical generation of masses.
On Friday the morning session started with an interesting talk by Hans Dierckx, trying to understand cardiac behavior using string theory. A talk by Oliver Rosten followed. Oliver produced a PhD thesis on the exact renormalization group of about 500 pages (see here). His talk was very beautiful and informative and in some way gave support to mine: discussing the renormalization group, he showed how a strong coupling expansion could emerge. In some way we are complementary. I will not discuss my talk here, but you are free to ask questions. The conference was concluded by a talk of Peter van Baal. Peter has a terrible story behind him and I will not discuss it here; I can only wish him the best of luck.
Finally, I would like to thank the organizers for the beautiful conference they gave me the chance to join. The place was very nice (thanks, Nele!) and the city is of incredible beauty. I think these few lines do not do justice to them and to all the participants for what they have given. See you again, folks!
The many faces of QCD
After a long silence, due to technical impediments as many of you know, I come back to you from Ghent (Belgium). I am participating in the conference “The many faces of QCD”. You can find the program here. The place is really beautiful, as is the town, which I had the chance to look around yesterday evening. The organizers have planned a visit downtown tomorrow and I hope to see this nice town in the sunlight as well. The reason why this conference is so relevant is that it gathers almost all the people working on the matter of the Green functions of Yang-Mills theory and QCD, whose works I have cited widely in my blog and in my papers. Now I have the chance to meet them and speak to them. I am writing after the end of the second day. The atmosphere is really exciting, discussion is always alive, and it happens quite often that speakers are interrupted during their presentations. The situation this field is living through is simply unique in the scientific community: they are at the very start of a possible scientific revolution, as they are finally obtaining results in non-perturbative physics in a crucial field such as QCD.
Disclaimer: The talks I will comment on are about results very near my research area. Talks I will not cite are important and interesting as well and the fact that I will not comment about them does
not imply merit for good or bad. Anyhow, I will appreciate any comment by any participant to the conference aiming to discuss his/her work.
I would like to cite some names here, but I fear forgetting somebody surely worthwhile to be named. From my point of view, a couple of talks concerning computations on the lattice caught my attention more strongly than others. This happened with the talk of Tereza Mendes yesterday and the one of Orlando Oliveira today. Tereza studied the gluon propagator at higher temperatures, obtaining again striking and unexpected results: there is this plateau in the gluon propagator appearing again and again as the lattice volume is increased. It would have been interesting to have also a look at the ghost and the running coupling. Orlando, for his part, showed for the first time an attempt to fit with the function $G(p)=\sum_n\frac{Z_n}{p^2+m^2_n}$, which you may recognize as the one I have proposed since my first analysis to explain the infrared behavior of Yang-Mills theory. But Orlando went further and found the next-to-leading-order correction to the mass appearing in a Yukawa-like propagator. The idea is to see if the original hypothesis of Cornwall can agree with current lattice computations. So, he showed that for the sum of propagators one gets even better agreement in the fitting by increasing the number of masses (at least 4), and that for the Cornwall propagator you need a mass corrected as $M^2+\alpha p^2$. Shocking as it may seem, I computed this term this summer and you can find it in this paper of mine. Indeed, this was a guess I put forward after a referee asked me for an understanding of the next-to-leading corrections to my propagator and, as you can read in my paper, I guessed it would produce a Cornwall-like propagator. Indeed, this is just the first infrared correction that can arise by expanding the logarithm in Cornwall's formula, as the small computation below makes clear.
The question of the gluon condensate, which I treated extensively in my blog thanks to the help of Stephan Narison, was presented today by Olivier Pène through a lattice computation. Olivier works in the group of Philippe Boucaud and contributed to the emergence of the now-called decoupling solution for the gluon propagator. The importance of this work lies in the fact that a precise determination of the gluon condensate from the lattice is fundamental for our understanding of the low-energy behavior of QCD. For this analysis it is important to have a precise determination of the constant $\Lambda_{QCD}$, and Boucaud's group produced an approach to this aim. Similarly, Andre Sternbeck showed how this important constant can be obtained through a proper definition of the running coupling, and he showed very fine agreement with the result of Boucaud's group.
Finally, I would like to recall the talk of Valentine Zakharov. I have talked extensively about Valentine in my previous blog entries. His discoveries in this area of physics are really fundamental, and so it is important to pay particular attention to his talks. Substantially, he mapped scalar fields and Yang-Mills fields to get an understanding of confinement! As I am a strong supporter of this view, as my readers may know from my preceding posts, I was quite excited to see such an idea put forward by Valentine.
As the conference's program unfolds I will keep you updated, with an eye toward the aspects that are relevant to my work. Meantime, I hope to have given you a taste of the excitement this area of research conveys to those of us who pursue it.
Soddy's Formula
Soddy's formula is another example of giving the credit to the wrong person: the result was stated by René Descartes three centuries before Soddy rediscovered it. For four mutually tangent circles with radii r1, r2, r3, r4, define the bend (curvature) of each circle as b = 1/r. With this notation the formula can be written as (b1 + b2 + b3 + b4)^2 = 2(b1^2 + b2^2 + b3^2 + b4^2).
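One can solve this quadratic for the fourth bend: b4 = b1 + b2 + b3 ± 2*sqrt(b1*b2 + b2*b3 + b3*b1), the two signs giving the small circle in the gap between the first three and the large circle enclosing them (which gets a negative bend).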
The Descartes Circle Theorem was rediscovered in 1842 by an amateur English mathematician named Phillip Beecroft. Beecroft also observed that there exist four other circles that would each be mutually tangent at the same four points. These circles would have tangents perpendicular to the original circles' tangents at each point of intersection.
In 1936 Sir Fredrick Soddy rediscovered the theorem again. Soddy may also be known to students of Science for receiving the Nobel Prize for Chemistry in 1921 for the discovery of the decay sequences
of radioactive isotopes. According to Oliver Sacks' wonderful book, Uncle Tungsten, Soddy also created the term "isotope" and was the first to use the term "chain reaction". In a strange "chain
reaction" of ideas, Soddy played a part in the US developing an atomic bomb. Soddy's book, The Interpretation of Radium, inspired H G Wells to write The World Set Free in 1914, and he dedicated the
novel to Soddy's book. Twenty years later, Wells' book set Leo Szilard to thinking about the possibility of chain reactions, and how they might be used to create a bomb, leading to his getting a
British patent on the idea in 1936. A few years later Szilard encouraged his friend, Albert Einstein, to write a letter to President Roosevelt about the potential for an atomic bomb. The
prize-winning science-fiction writer, Frederik Pohl, talks about Szilard's epiphany in Chasing Science (pg 25),
".. we know the exact spot where Leo Szilard got the idea that led to the atomic bomb. There isn't even a plaque to mark it, but it happened in 1938, while he was waiting for a traffic light to
change on London's Southampton Row. Szilard had been remembering H. G. Well's old science-fiction novel about atomic power, The World Set Free and had been reading about the nuclear-fission
experiment of Otto Hahn and Lise Meitner, and the lightbulb went on over his head."
Perhaps Soddy's name is appropriate for the formula if only for the unique way he presented his rediscovery: in the form of a poem, given below.
The Kiss Precise
by Frederick Soddy
For pairs of lips to kiss maybe
Involves no trigonometry.
'Tis not so when four circles kiss
Each one the other three.
To bring this off the four must be
As three in one or one in three.
If one in three, beyond a doubt
Each gets three kisses from without.
If three in one, then is that one
Thrice kissed internally.
Four circles to the kissing come.
The smaller are the benter.
The bend is just the inverse of
The distance from the center.
Though their intrigue left Euclid dumb
There's now no need for rule of thumb.
Since zero bend's a dead straight line
And concave bends have minus sign,
The sum of the squares of all four bends
Is half the square of their sum.
To spy out spherical affairs
An oscular surveyor
Might find the task laborious,
The sphere is much the gayer,
And now besides the pair of pairs
A fifth sphere in the kissing shares.
Yet, signs and zero as before,
For each to kiss the other four
The square of the sum of all five bends
Is thrice the sum of their squares.
In _Nature_, June 20, 1936
Later another verse was written by Thorold Gosset to describe the even more general case in N dimensions for N+2 hyperspheres of the Nth dimension.
The Kiss Precise (Generalized) by Thorold Gosset
And let us not confine our cares
To simple circles, planes and spheres,
But rise to hyper flats and bends
Where kissing multiple appears,
In n-ic space the kissing pairs
Are hyperspheres, and Truth declares -
As n + 2 such osculate
Each with an n + 1 fold mate
The square of the sum of all the bends
Is n times the sum of their squares.
In _Nature_ January 9, 1937.
Fred Lunnon sent me a kind note correcting a typing oversight, and adding that
"The original result generalises nicely to curved n-space with curvature v [e.g. v^2 = +1 for elliptic space, -1 for hyperbolic] in the form
(\sum_i x_i)^2 - n \sum_i x_i^2 = 2n v^2
where x_i denote the curvatures of n+2 mutually tangent spheres. Example: n = 2, v = 0, x = [-1, 2, 2, 3] is one solution, corresponding to a unit circle in the plane enclosing circles of radii 1/2, 1/2 and 1/3.
See Ivars Petersen, "Circle Game", Science News (2001) 159 (16), p. 254."
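As a quick check of Lunnon's example in the flat case (n = 2, v = 0, where his identity reduces to Descartes' form): the sum of the bends squared is (-1 + 2 + 2 + 3)^2 = 36, and twice the sum of the squares is 2(1 + 4 + 4 + 9) = 36, so the bends -1, 2, 2, 3 do indeed satisfy the theorem.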
Fred admits he wasn't the first to prove this, but did manage to replicate it on his own (which impresses the heck out of me)... but THEN....... wait for it.... He wrote another poem verse to
accompany this extension to higher dimensions...
The Kiss Precise (Further Generalized) by Fred Lunnon
How frightfully pedestrian
My predecessors were
To pose in space Euclidean
Each fraternising sphere!
Let Gauss' k squared be positive
When space becomes elliptic,
And conversely turn negative
For spaces hyperbolic:
Squared sum of bends is sum times n
Of twice k squared plus squares of bends.
Well-posedness of the transport equation by stochastic perturbation
Seminar Room 1, Newton Institute
This is a joint work with F. Flandoli and M. Gubinelli. We consider the linear transport equation with a globally Hölder continuous and bounded vector field, with an integrability condition on the divergence. While uniqueness may fail for the deterministic PDE, we prove that a multiplicative stochastic perturbation of Brownian type is enough to render the equation well-posed. This seems to be the first explicit example of a PDE of fluid dynamics that becomes well-posed under the influence of a (multiplicative) noise. The key tool is a differentiable stochastic flow constructed and analyzed by means of a special transformation of the drift of Itô-Tanaka type.
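As a toy illustration (mine, not from the talk), the stochastic characteristics dX = b(X) dt + dW behind such a perturbation can be simulated by Euler-Maruyama; the drift below is an assumed example of a Hölder continuous, non-Lipschitz vector field:

import numpy as np

def stochastic_flow(x0, b, T=1.0, steps=1000, seed=0):
    # One Euler-Maruyama path of dX = b(X) dt + dW per starting point.
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + b(x) * dt + np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# Assumed drift: Hoelder-1/2 continuous but not Lipschitz at 0, the kind
# of field for which the deterministic flow may fail to be unique.
b = lambda x: np.sign(x) * np.sqrt(np.abs(x))
print(stochastic_flow([-1.0, 0.0, 1.0], b))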
| {"url":"http://www.newton.ac.uk/programmes/SPD/seminars/2010040114001.html","timestamp":"2014-04-21T04:37:06Z","content_type":null,"content_length":"6711","record_id":"<urn:uuid:cd636c9c-5f29-4db1-96ae-6e355924405b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Erie, CO SAT Tutor
Find an Erie, CO SAT Tutor
...Somewhere along the way, you need to actually know some English grammar and style rules, as well as how to do some basic interpretation of different kinds of texts: literary and nonliterary passages. Fortunately, the reading and writing skills tested on the ACT are basic and finite; you already ...
31 Subjects: including SAT writing, SAT math, SAT reading, English
...I have taken genetics courses as well as upper division courses that cover specific concepts in genetics, such as biotechnology, microbiology, evolution, and ecology. I have done two years of
population genetics research, where I applied concepts of genetics and biotechnology to my research dail...
39 Subjects: including SAT reading, SAT math, reading, algebra 1
...I have been tutoring high school students one-on-one in algebra for several years and you can't possibly do algebra if you don't know anything about prealgebra. I have tutored high school as
well as college/university students in Calculus, and of course one must know something about Precalculus ...
15 Subjects: including SAT math, calculus, French, geometry
...Math was one of my favorite subjects in school, and I scored a perfect 36 on the math section of the ACT. I have used these math skills and knowledge in my career working in the software
development and IT related field, and of course as a math teacher, teaching middle school math, pre-algebra, ...
8 Subjects: including SAT math, geometry, algebra 1, GED
...I teach unit-recognition: learn the pieces of a word, how to break it down and where the pieces come from. I have the advantage of being able to refer to vocabulary from about nine different
languages, including Latin and Greek from which about 60% of English vocabulary is drawn. One popular vo...
14 Subjects: including SAT reading, SAT writing, English, ESL/ESOL
| {"url":"http://www.purplemath.com/Erie_CO_SAT_tutors.php","timestamp":"2014-04-21T07:05:13Z","content_type":null,"content_length":"23544","record_id":"<urn:uuid:24a1754f-543e-4813-87e1-5299c29ff9e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra: Intro to Relations and Functions
About this Lesson
• Type: Video Tutorial
• Length: 9:17
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 100 MB
• Posted: 06/26/2009
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, College Algebra. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/collegealgebra. The full course covers equations and inequalities, relations and functions, polynomial and rational functions, exponential and logarithmic functions, systems of equations, conic sections, and a variety of other AP algebra, advanced algebra, and Algebra II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut College. He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
You know, when you think about connections between one thing and something else, something's proportional with something else, or you're given a connection with an equation--for example, a line: y =
3x + 2--if you give me x, then I can figure out what y is. Well in fact that is a particular example of a much more general circumstance, something known as a relation. And a relation is nothing more
than basically connections between two classes or two collections of things. And so, really, when you think of a relation, which is a very abstract idea, we're really thinking of a set of ordered
pairs. So, I'm going to define to you what a relation is, first of all, in very abstract terms, and then take a look at some real examples of this.
A relation is really just a connection and therefore a set of ordered pairs. Here's an example of a set of ordered pairs: {(3, -1), (2, 4), (3, 0)}. So, this whole thing is considered a relation, because, basically, for each first person here, I associate a second person. So the 3 here is associated with a -1; the 2 here is associated with 4; the 3 here is associated with 0. Any time you have this pairing, you can think of it as an association: to this thing, I pair up that thing. It's almost like, in some sense, being married, right? Now, the question is: what do
these relations look like? Let me give you another example: {(-1, 2),(1, 5),(3, 5)}. That's another example, where the connection is -1 here is being associated with 2, 1 here is being associated
with 5, 3 here is being associated with 5. I'm constantly thinking about these things in terms of (x, y)--so I'm putting the x first and the y second--this is what I'm thinking about when I think of
a relation.
Now, if you think about these things graphically, you can plot these things. So if we plot the first one, I see (3, -1). So there will be a point right there. (2, 4), it'll be way up here, and (3, 0)
will be right here. So there's a graph of this relation. You can see all the three points there. What about here? Here I can graph this relation: (-1, 2), (1, 5), and (3, 5). There's a visualization
of them. But it really is a connection between the first person with the second person.
Now, there's an extremely important, special example of relations, and those are called functions. And a function is nothing more than a relation where the first term, that number will only appear
once in the list. Now, for example, if you look here, I see that this thing has a first term of 3, and this one has a first term of 3. So the first term actually appears twice. So therefore, this is
not an example of a function. A function is a relation--so it's a collection of these ordered pairs--where the first term only appears once. So in this visual example, you can see, what does it mean
for the first person to appear more than once? It means that there has to be two points on top of each other somehow. There has to be two points where they have the same first coordinate. In that
case, we say this is not a function. What about this: {(-1, 2),(1, 5),(3, 5)}? Well here, I see that for every point, there is no other point that is on the same vertical line as the other ones. So
therefore, this is a function. And functions are in fact what we're going to really, for the most part, study in algebra and in further areas of math. And you can see that something is not a function
if it fails this vertical line test. So this vertical line test just means that every time a draw a vertical line, only at most one point will pass through that line. Here-- {(3, -1), (2, 4), (3, 0)}
--it fails the vertical line test because a vertical line will pass through two points here--not a function. Here--{(-1, 2),(1, 5),(3, 5)}--every single one only passes through one point. Notice by
the way that horizontally, these two share the same second coordinate (5), but that's not what's required to be a function, only that all the first coordinates are different. That's all that's required.
Now, let me show you some examples of things that aren't necessarily numbers. You don't have to have numbers here. Here's a pairing, here's a connection: I have states paired up with institutions. So
these are different institutions, these are in fact academic institutions. You can imagine penal institutions, where people go to prison, which in fact some of you may think some of these schools
are. So Texas, for example, I associate with the University of Texas in Austin; Massachusetts I associate with Williams College; New York I associate with Columbia; Texas I associate with Rice
University; California with UCLA; California with Harvey Mudd; and Ohio with Ohio State. So there's a state and here's a school from it. The question is: is this a function? Well, let's see. Does
everything here appear at most once? Well, no. You can see for example that California actually appears twice. So for California we have (California, UCLA), we have (California, Harvey Mudd), so this
would not be a function. Also notice that it fails because of Texas; I have (Texas, University of Texas), and (Texas, Rice University). You could draw a little picture like this, by the way, if you
wanted to. Some people do this. Here's the thing of states, and here's the schools. And so here's Texas, and send an arrow to the school UT. And then here's MA (Massachusetts), and I send that to
Williams. Here's New York--this is just a different way of representing this information, it's a bit more visual way--send that to Columbia. Here's Texas again, and now it's Rice, different school.
Here's California; UCLA. And then California again also goes to Harvey Mudd. And then Ohio, that goes to Ohio State. And to see if this relation is really a function or not, the question is: for each
point here, is there only one arrow coming out? But since I see here there are two arrows, you can see that this as a first coordinate is associated with two different people as second coordinates,
both UT and Rice. Similarly, California actually has two things that it's paired up with. So if I say I'm thinking of Texas, you don't know for sure which school I'm thinking about, because there are
two possibilities. This is not a function. It's not well-defined, we say. It's a relation, but it's not a function.
One last example; how about this thing? How about institutions and relate them with their mascots? So you have the University of Texas, which are the Longhorns; University of Tennessee, which are the
Volunteers; Williams College--they're actually the Ephs, but let's think of them as the Purple Cows; University of Colorado are the Buffalo; and University of Washington are the Huskies. Well notice
that this in fact is a function because each one of these schools is associated with only one thing here. And there's no "University of Texas has the Longhorns and the Beavers." There are no beavers
at the University of Texas. So in fact, this is an example of a function. If you drew that little chart--in fact I'll just draw it over here, take a look at this, there it is, there's the chart right
there. And you can see that out of every university there's only one arrow emanating going to the mascot. So this relation is actually an example of a function.
So there you have the connection between the special idea of a function, which is an example of the more general issue of a relation. And we'll take a look at functions in great, great detail, and
we'll even see some relations along the way. So there's sort of just a general, global sense of what a relation really is about. A little abstract, but it'll be more concrete as you think about it.
Good luck, and hook `em Horns!
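To make the vertical line test concrete, here is a small Python sketch (my addition, not part of the lecture): a finite relation given as (x, y) pairs is a function exactly when no first coordinate repeats.

def is_function(relation):
    # True when each first coordinate appears at most once.
    firsts = [x for x, _ in relation]
    return len(firsts) == len(set(firsts))

print(is_function([(3, -1), (2, 4), (3, 0)]))    # False: 3 appears twice
print(is_function([(-1, 2), (1, 5), (3, 5)]))    # True: a repeated y is fine
print(is_function([("Texas", "UT"), ("Texas", "Rice")]))   # False, like the states example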
| {"url":"http://www.mindbites.com/lesson/3353-college-algebra-intro-to-relations-and-functions","timestamp":"2014-04-20T00:40:10Z","content_type":null,"content_length":"60475","record_id":"<urn:uuid:d59d8734-c036-42b5-9f18-6871a1751abd>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Suppose that delta is an eigenvalue of an invertible matrix A. Show that 1/delta is an eigenvalue of A inverse.
write the definition of an eigenvalue
So I know what an eigenvalue is, but how would you write the definition of it, what do u mean by that?
Start with showing what an eigenvalue is: a scalar, delta in this case, such that AI = delta*A. Now, since we can assume A has an inverse A^-1, multiply both sides by the inverse. What do you get?
*I meant of course AI = delta*I. Multiply both sides by A^-1 and what do you get?
Thanks for the help
So I do what u said and that should be the answer? That should prove what we need to prove?
Not quite, you have not demonstrated what we have set out to prove. Why don't you write out what you get by multiplying the definition of the eigenvalue, AI = delta*I, by the inverse of the matrix, A^-1, and show me what you get? I will help you from there.
So this is what I did, so here's my work:
A^-1 A I = delta*I*A^-1
I = delta*I*A^-1
So that's what I got; I'm pretty sure the right side can't be simplified any further, so if you could help me with the rest, that'd be great.
divide both sides by delta and you're done. Check it out: you get the definition of the eigenvalue, with the eigenvalue of A^-1 as 1/delta
So on the right side, we then get I * A^-1, but that's just still A^-1 right?
yes:
AI = delta*I
A^-1 A I = I*I = I^2 = I = A^-1 delta*I
Remember that scalars are commutative, so we can move delta around:
delta * A^-1 I = I
A^-1 I = (1/delta) I
which is the definition of the eigenvalue of A^-1, with eigenvalue 1/delta.
And one more thing, we also divide I from both sides too right, because that's how we get the one right?
I is the identity matrix, multiplying or dividing by it changes nothing, just like the scalar number 1
Right, so that'll just give us that one.
Because it's just I / I.
matrices do not become scalars; matrix division is defined as multiplication by the inverse. The inverse of the identity matrix is still the identity matrix, so I*I^-1 = I*I = I^2 = I, still the identity matrix, not the scalar number 1. Big difference, gotta get that straight in linear algebra
Ok, so I think I see, ok so in the end we get this: A^-1 * I = 1/delta * I, so then the I's just cancel out since they're on both sides and that is what gives us A^-1 = 1/delta right?
you don't need to cancel the I's, it's just like times 1. If I had 1x = 1y, would I need to cancel the 1's? A matrix times the identity matrix is itself, so AI = AI^-1 = A. In the definition of the eigenvalue, I is written explicitly to show that the scalar eigenvalue lambda is multiplying into a matrix: A = lambda*I. If the I's cancel you get A = lambda; that's wrong because the thing on the right is a scalar and the thing on the left is not. Just leave your last line as A^-1 = (1/delta) I, and that makes clear the properties of the eigenvalue we were looking for
All right that makes sense, and can u help out with one more problem related to this same stuff?
I can try, but my connection is horrible right now, I may only be able to pm you
Ok, so here's the question: If matrix A has an inverse A^-1, use equation 2: Av = lambda*v, to show that A^-1 has the same eigenvectors as A. Determine a relationship between the eigenvalues of A
and A^-1. Illustrate with a suitable example.
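A quick numerical check of both facts (my addition, not part of the thread), using the standard definition Av = delta*v:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # invertible, eigenvalues 2 and 3
vals, vecs = np.linalg.eig(A)       # columns of vecs are eigenvectors of A
A_inv = np.linalg.inv(A)

# Each eigenvalue of A^-1 is the reciprocal of an eigenvalue of A,
# and the same eigenvector v works: A^-1 v = (1/delta) v.
for delta, v in zip(vals, vecs.T):
    print(delta, np.allclose(A_inv @ v, v / delta))   # 2.0 True, 3.0 True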
| {"url":"http://openstudy.com/updates/50b51e61e4b0a5a78e15e835","timestamp":"2014-04-16T23:04:22Z","content_type":null,"content_length":"77938","record_id":"<urn:uuid:62dcb378-bf9e-44aa-a867-2811b11681c7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
colorings of planar graphs
Results 1 - 10 of 73
- INF. PROCESSING LETTERS 51 , 1994
"... A k-coloring of an oriented graph G = (V, A) is an assignment c of one of the colors 1; 2; : : : ; k to each vertex of the graph such that, for every arc (x; y) of G, c(x) 6= c(y). The
k-coloring is good if for every arc (x; y) of G there is no arc (z; t) 2 A such that c(x) = c(t) and c(y) = c(z). ..."
Cited by 42 (19 self)
A k-coloring of an oriented graph G = (V, A) is an assignment c of one of the colors 1, 2, ..., k to each vertex of the graph such that, for every arc (x, y) of G, c(x) ≠ c(y). The k-coloring is good if for every arc (x, y) of G there is no arc (z, t) ∈ A such that c(x) = c(t) and c(y) = c(z). A k-coloring is said to be semi-strong if for every vertex x of G, c(z) ≠ c(t) for any pair {z, t} of vertices of N⁻(x). We show that every oriented planar graph has a good coloring using at most 5 × 2^4 colors and that every oriented planar graph G = (V, A) with d⁻(x) ≤ 3 for every x ∈ V has a good and semi-strong coloring using at most 4 × 5 × 2^4 colors.
- SIAM REV , 2005
"... Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring
problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specific ..."
Cited by 41 (7 self)
Graph coloring has been employed since the 1980s to efficiently compute sparse Jacobian and Hessian matrices using either finite differences or automatic differentiation. Several coloring problems occur in this context, depending on whether the matrix is a Jacobian or a Hessian, and on the specifics of the computational techniques employed. We consider eight variant vertex-coloring problems here. This article begins with a gentle introduction to the problem of computing a sparse Jacobian, followed by an overview of the historical development of the research area. Then we present a unifying framework for the graph models of the variant matrix-estimation problems. The framework is based upon the viewpoint that a partition of a matrix into structurally orthogonal groups of columns corresponds to distance-2 coloring an appropriate graph representation. The unified framework helps integrate earlier work and leads to fresh insights; enables the design of more efficient algorithms for many problems; leads to new algorithms for others; and eases the task of building graph models for new problems. We report computational results on two of the coloring problems to support our claims. Most of the methods for these problems treat a column or a row of a matrix as an atomic entity, and partition the columns or rows (or both). A brief review of methods that do not fit these criteria is provided. We also discuss results in discrete mathematics and theoretical computer science that intersect with the topics considered here.
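As a toy illustration of that viewpoint (my sketch, not an algorithm from the article): two columns of a sparse matrix may share a color, and hence one finite-difference evaluation, exactly when no row has nonzeros in both. A greedy pass over an assumed sparsity pattern produces such a partition:

def column_groups(pattern):
    # pattern[j] is the set of rows with a nonzero in column j.
    # Columns sharing a row conflict and must get different colors.
    color = {}
    for j, rows_j in enumerate(pattern):
        used = {color[k] for k in color if pattern[k] & rows_j}
        c = 0
        while c in used:
            c += 1
        color[j] = c
    return color

# Tridiagonal 5x5 pattern: three groups suffice, so estimating the whole
# Jacobian costs three extra function evaluations instead of five.
pattern = [{0, 1}, {0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {3, 4}]
print(column_groups(pattern))   # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1}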
, 2004
"... In a total order of the vertices of a graph, two edges with no endpoint in common can be crossing, nested, or disjoint. A k-stack (resp... ..."
Cited by 31 (19 self)
In a total order of the vertices of a graph, two edges with no endpoint in common can be crossing, nested, or disjoint. A k-stack (resp...
- Discrete Math , 1995
"... The oriented chromatic number o(H) of an oriented graph H is defined as the minimum order of an oriented graph H 0 such that H has a homomorphism to H 0 . The oriented chromatic number o(G) of
an undirected graph G is then defined as the maximum oriented chromatic number of its orientations. In ..."
Cited by 30 (15 self)
The oriented chromatic number o(H) of an oriented graph H is defined as the minimum order of an oriented graph H′ such that H has a homomorphism to H′. The oriented chromatic number o(G) of an undirected graph G is then defined as the maximum oriented chromatic number of its orientations. In this paper we study the links between o(G) and mad(G), defined as the maximum average degree of the subgraphs of G.
, 2001
"... A star coloring of an undirected graph G is a proper vertex coloring of G (i.e., no two neighbors are assigned the same color) such that any path of length 3 in G is not bicolored. The star ..."
Cited by 27 (1 self)
A star coloring of an undirected graph G is a proper vertex coloring of G (i.e., no two neighbors are assigned the same color) such that any path of length 3 in G is not bicolored. The star
- 2002, submitted. Stacks, Queues and Tracks: Layouts of Graph Subdivisions 41 , 2004
"... A queue layout of a graph consists of a total order of the vertices, and a partition of the edges into queues, such that no two edges in the same queue are nested. The minimum number of queues
in a queue layout of a graph is its queue-number. A three-dimensional (straight- line grid) drawing of a gr ..."
Cited by 26 (20 self)
A queue layout of a graph consists of a total order of the vertices, and a partition of the edges into queues, such that no two edges in the same queue are nested. The minimum number of queues in a queue layout of a graph is its queue-number. A three-dimensional (straight-line grid) drawing of a graph represents the vertices by points in Z^3 and the edges by non-crossing line-segments. This paper contributes three main results: (1) It is proved that the minimum volume of a certain type of three-dimensional drawing of a graph G is closely related to the queue-number of G. In particular, if G is an n-vertex member of a proper minor-closed family of graphs (such as a planar graph), then G has a O(1) × O(1) × O(n) drawing if and only if G has O(1) queue-number.
- IPCO '96 , 1996
"... Given a subset of cycles of a graph, we consider the problem of finding a minimum-weight set of vertices that meets all cycles in the subset. This problem generalizes a number of problems,
including the minimum-weight feedback vertex set problem in both directed and undirected graphs, the subset fee ..."
Cited by 22 (3 self)
Given a subset of cycles of a graph, we consider the problem of finding a minimum-weight set of vertices that meets all cycles in the subset. This problem generalizes a number of problems, including
the minimum-weight feedback vertex set problem in both directed and undirected graphs, the subset feedback vertex set problem, and the graph bipartization problem, in which one must remove a
minimum-weight set of vertices so that the remaining graph is bipartite. We give a 9/4-approximation algorithm for the general problem in planar graphs, given that the subset of cycles obeys certain
properties. This results in 9/4-approximation algorithms for the aforementioned feedback and bipartization problems in planar graphs. Our algorithms use the primal-dual method for approximation
algorithms as given in Goemans and Williamson [16]. We also show that our results have an interesting bearing on a conjecture of Akiyama and Watanabe [2] on the cardinality of feedback vertex sets in
planar graphs.
- J. London Math. Soc , 1999
"... A proper vertex-colouring of a graph is acyclic if there are no 2-coloured cycles. It is known that every planar graph is acyclically 5-colourable, and that there are planar graphs with acyclic
chromatic number χ a � 5 and girth g � 4. It is proved here that a planar graph satisfies χ ..."
Cited by 18 (0 self)
A proper vertex-colouring of a graph is acyclic if there are no 2-coloured cycles. It is known that every planar graph is acyclically 5-colourable, and that there are planar graphs with acyclic chromatic number χ_a = 5 and girth g = 4. It is proved here that a planar graph satisfies χ
- DISCRETE MATH , 2001
"... An oriented k-coloring of an oriented graph G (that is a digraph with no cycle of length 2) is a partition of its vertex set into k subsets such that (i) no two adjacent vertices belong to the
same subset and (ii) all the arcs between any two subsets have the same direction. We survey the main resu ..."
Cited by 18 (4 self)
An oriented k-coloring of an oriented graph G (that is a digraph with no cycle of length 2) is a partition of its vertex set into k subsets such that (i) no two adjacent vertices belong to the same
subset and (ii) all the arcs between any two subsets have the same direction. We survey the main results that have been obtained on oriented graph colorings.
, 2004
"... A proper coloring of the vertices of a graph is called a star coloring if every two color classes induce a star forest. Star colorings are a strengthening of acyclic colorings, i.e., proper
colorings in which every two color classes induce a forest. We show that ..."
Cited by 14 (0 self)
A proper coloring of the vertices of a graph is called a star coloring if every two color classes induce a star forest. Star colorings are a strengthening of acyclic colorings, i.e., proper colorings
in which every two color classes induce a forest. We show that | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=111878","timestamp":"2014-04-21T01:30:58Z","content_type":null,"content_length":"36242","record_id":"<urn:uuid:aaf5a7d7-8913-4530-b68a-4421a2a424cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
1 December 1997 Vol. 2, No. 48
THE MATH FORUM INTERNET NEWS
Math Teacher Link | rec.puzzles Archive | The Proof
MATH TEACHER LINK - A NETMATH COALITION PROJECT
Math Teacher Link provides professional development
opportunities and classroom resources to teachers of
mathematics, statistics, and related subjects at the
high school and lower division college levels.
Math Teacher Link's for-credit courses and tutorials
cover the following modules:
- Calculus and Mathematica for Mathematics Teachers
- Using Internet Resources For High School Mathematics
- Using Mathematica in the Mathematics Classroom
- Using The Geometer's Sketchpad
- Algebra through Modeling with the TI-82 and TI-83
Graphing Calculators
- Teaching Statistics in High School
- HTML Programming for Teachers
- JavaScripting for Teachers
- Dynamic Geometry with Geometer's Sketchpad
- Logo Programming for the Math Classroom
Current non-credit courses for teachers and their students
include downloadable and online interactive tutorials for
the TI-82 and TI-92 calculators, and "UserActive," an
interactive HTML tutorial for teachers.
Classroom resources now under construction include links
to sites with traditional courses for algebra, geometry,
precalculus, trigonometry, and calculus; extended curricula
for probability and statistics, discrete mathematics,
linear algebra, computer science, and differential equations;
teaching strategies for collaborative learning, graphing
calculators, and graphics packages, and other topics such as
the history of mathematics.
INTER-LINKS PUZZLE ARCHIVE - rec.puzzles
What is the Arabian Nights factorial? What digits does googol!
start with? Can three houses be connected to three utilities
without the pipes crossing? Is there a Ham Sandwich Theorem?
Brain teasers with their solutions, archived from the
newsgroup rec.puzzles. Puzzles are organized into the
following categories:
- analysis
- arithmetic
- combinatorics
- competition
- cryptology
- decision
- geometry
- group
- induction
- language
- logic
- physics
- pickover
- probability
- real-life
- references
- series
- trivia
To learn more about rec.puzzles, see its FAQ:
NOVA ONLINE: THE PROOF
"In a tale of secrecy, obsession, dashed hopes, and
brilliant insights, Princeton math sleuth Andrew Wiles
goes undercover - for eight years - to solve history's
most famous math problem: Fermat's Last Theorem.
His success was front-page news around the world.
But then disaster struck..."
Some of the greatest minds of science struggled for more than
350 years to prove the idea that a simple equation had no
solutions. This site offers a variety of resources for
teachers to use in discussing Fermat's Last Theorem with
their students, extending the NOVA program seen on TV in
November, 1997, which may be purchased from PBS.
The site includes an interview with Andrew Wiles, the story
of Sophie Germain (an 18th-century mathematician who hid her
identity in order to work on Fermat's Last Theorem),
Pythagorean Theorem activities, and related links.
A Teachers' Guide with lesson plans is also provided:
For more about Fermat and the theorem, see the letter "F" in
the biographical index of the MacTutor math history archive:
CHECK OUT OUR WEB SITE:
The Math Forum http://mathforum.org/
Ask Dr. Math http://mathforum.org/dr.math/
Problem of the Week http://mathforum.org/geopow/
Internet Resources http://mathforum.org/~steve/
Join the Math Forum http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \ | {"url":"http://mathforum.org/electronic.newsletter/mf.intnews2.48.html","timestamp":"2014-04-18T10:45:27Z","content_type":null,"content_length":"8301","record_id":"<urn:uuid:95beacae-068e-4ede-9551-9c1ebeac51df>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hull, MA Math Tutor
Find a Hull, MA Math Tutor
...I teach a variety of levels of students, from advanced to students with special needs. I show students a variety of ways to answer a problem because my view is that as long as a student can answer a question, understand how they got the answer, and can explain how they do so, it doesn't matter the m...
5 Subjects: including algebra 1, algebra 2, prealgebra, precalculus
...Once I understand what concept a student needs to be taught or clarified, I devise a series of problems or logic steps that the student can solve in succession. Ultimately this will allow the
student to start from a place of confidence and comprehension giving them a hands-on understanding as to why the new technique or theory works. If you're anxious to learn, I'm anxious to teach!
12 Subjects: including algebra 1, algebra 2, calculus, chemistry
...For others, business and technology related concepts escape them. In my prior career I was often called upon to translate complex technical concepts into laymen's terms for a mass audience, or
as I liked to say, "I speak Geek." You are likely seeking a tutor because either you are struggling to keep up or trying to get ahead. I can help with that.
23 Subjects: including calculus, Microsoft Excel, Microsoft Word, Microsoft PowerPoint
...I have a strong background in Math, Science, and Computer Science. I currently work as a software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or
review sheets that they have been assigned.
17 Subjects: including algebra 2, geometry, prealgebra, precalculus
...Currently, I teach an SAT math course for Cohasset Town Recreation. The students not only learn math but also what problems to skip, when to guess, and how to manage their time to achieve higher scores. I designed and implemented an MCAS math review program. Knowing how to find a student's weakness and turning it into a positive is my strength.
5 Subjects: including algebra 1, algebra 2, geometry, prealgebra
| {"url":"http://www.purplemath.com/Hull_MA_Math_tutors.php","timestamp":"2014-04-19T23:31:23Z","content_type":null,"content_length":"23852","record_id":"<urn:uuid:7491b39b-56c9-479d-bfa6-6852f321370a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Weighted Criteria to Make Decisions
Date: 07/18/2008 at 19:37:32
From: AS
Subject: Finding a result by assigning weights
Hypothetical Example:
I want to award my employee for highest attendance and learning new
skills. I have 3 employees who worked the following days in a month:
Employee 1: 10 days
Employee 2: 15 days
Employee 3: 20 days
Employee 1: Learned 2 new skills
Employee 2: Learned 2 new skills
Employee 3: Learned 1 new skill
Weighting: 50% for attendance and 50% for new skills learned therefore
best employee is employee 3
Employee 1 score is .5(10) + .5(2) = 6
Employee 2 score is .5(15) + .5(2) = 8.5
Employee 3 score is .5(20) + .5(1) = 10.5
Therefore Employee 3 gets that best employee award. I somehow feel
this is not the right method of calculation.
Am I able to calculate averages as above where skill and attendance
are apples and oranges (different)?
I looked up weighted average but that only shows calculating weights
for one type of quantity.
Date: 07/19/2008 at 22:34:47
From: Doctor Wilko
Subject: Re: Finding a result by assigning weights
Hi AS,
Thanks for writing to Dr. Math!
This is a very practical question in business and government. Often a
decision maker has to prioritize many alternatives (picking which
employee gets the quarterly award), where the candidates can be ranked
by multiple criteria (attendance record, skills learned, etc.).
If you were prioritizing your employees strictly based on attendance,
Employee 3 gets the reward because he has the highest attendance.
But, you actually have two criteria to rank your employees against:
attendance record and skills learned. Now it's not so clear if
Employee 3 still deserves the award. He does have the highest
attendance, but he only learned one new skill. Employee 2 on the
other hand learned two new skills, but worked five less days. Now
it's getting a little fuzzy on which employee should be rewarded. Now
suppose you have 20 employees and there are four or five criteria to
pick an award winner; now the selection is almost impossible to do
fairly. This is where the decision maker needs some objective method
to help him make a fair decision.
Your decision is to pick the award winner from the three employees,
based on two criteria, attendance record and skills learned. If we
use a decision tree to model your problem, it looks as follows:
Attendance Skills
Days Learned
/ E1 10 2
Decision ----- E2 15 2
\ E3 20 1
There are two main components we'll look at to model your decision.
1. You have to measure all criteria on similar numerical scales (to
compare apples to apples), and
2. You have to assign your importance (weights) to those criteria
that will be used to rank the employees.
Once you have these two components, you can calculate a weighted
average for each employee. This final weighted average will allow you
to rank your employees in order to determine who gets the reward.
Let's do these two steps now.
1. Measure all Criteria on Similar Numerical Scales
The first thing we'll do is use the same scale to measure both
criteria, say a 0-100 scale, where 0 is the worst or least desired
outcome for each criterion and 100 is the best or most desired outcome
for each criterion.
Criteria #1, Attendance:
Assume for attendance, 5 days is the worst outcome and 30 days is the
best outcome for your employees. 5 days of attendance gets mapped to
the score 0 and 30 days of attendance gets mapped to a score of 100.
To get the scores for the intermediate days, you could use
"proportional scoring" as one technique. For example, Employee 1 is
20% of the way from the lowest to the highest value
[(10-5)/(30-5)=0.20, and 100*0.20 = 20%], so Employee 1 gets a score
of 20 for his 10 days of attendance. Likewise, Employee 2's attendance
gets mapped to a score of 40, and Employee 3's attendance gets mapped
to a score of 60.
Remember, we're trying to map days of attendance onto a scale from
0-100. The mapping looks as follows:
E1, 10 days --> score of 20
E2, 15 days --> score of 40
E3, 20 days --> score of 60
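In code, that proportional scoring step is one line; here is a tiny
Python sketch of it (my illustration, not part of the original exchange):

def score(x, worst, best):
    # Map x on [worst, best] linearly onto a 0-100 scale.
    return 100.0 * (x - worst) / (best - worst)

print(score(10, 5, 30), score(15, 5, 30), score(20, 5, 30))  # 20.0 40.0 60.0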
Criteria #2, Skills Learned:
Now, let's map the number of skills learned onto the same 0-100 scale
using the same proportional weighting technique. First set your
endpoints of the scale. 0 skills learned gets a score of 0 and say 4
skills learned gets a score of 100. Now, using proportional weighting
the mapping looks as follows:
E1, 2 skills --> score of 50
E2, 2 skills --> score of 50
E3, 1 skill --> score of 25
Are you seeing how we can compare apples to apples? I have taken two
completely different criteria and put them on the same scale so
comparing them makes sense.
Now, if I re-draw my decision tree from above, it looks like:
Attendance Skills
Score Score
/ E1 20 50
Decision ----- E2 40 50
\ E3 60 25
2. Weight the Criteria
Next, you have to decide what's more important between attendance
record and skills learned. Is a score of 50 on the attendance scale
the same as a score of 50 on the skills scale? It might be, but this
is where you as the decision maker assign your importance to the
Importance is assigned to the criteria through weights. These weights
will allow you to calculate a weighted average using the two criteria
to get an overall score for each employee. Then you can rank the
employees by the overall score to determine who gets the award.
It is your call concerning the relative importance of the two
criteria. It is important however to consider the ranges of the
criteria. The weights should reflect the relative value of going from
best to worst on each scale. For example, if improving attendance
from 5 days to 30 is three times more important than going from zero
skills learned to four skills learned, then this implies you weight
attendance = 0.75 and skills = 0.25. But to go with your proposal
that the criteria are equal, you'd weight attendance = 0.50 and skills
= 0.50.
Now with your weights chosen, you can finally calculate the weighted
averages to get the overall score of each employee. The decision tree
will look as follows:
0.50 0.50
Attendance Skills Overall
Score Score Score
/ E1 20 50 = .50(20)+.50(50) = 35
Decision ----- E2 40 50 = .50(40)+.50(50) = 45
\ E3 60 25 = .50(60)+.50(25) = 42.5
Now you can rank your employees according to the final weighted
average to see who shall receive the award:
E2 = 45 (highest overall score)
E3 = 42.5
E1 = 35 (lowest overall score)
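Putting the whole method together, here is a short Python sketch (again
my illustration, not code from the answer) that reproduces the scores
and the ranking above:

def score(x, worst, best):
    return 100.0 * (x - worst) / (best - worst)

employees = {"E1": (10, 2), "E2": (15, 2), "E3": (20, 1)}  # (days, skills)
w_att, w_skill = 0.50, 0.50   # the decision maker's weights

overall = {name: w_att * score(days, 5, 30) + w_skill * score(skills, 0, 4)
           for name, (days, skills) in employees.items()}

for name, s in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(name, s)   # E2 45.0, then E3 42.5, then E1 35.0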
I'd say before accepting this as the final answer, especially when you
are building the model the first time, you'd step back and look at the
scales you constructed and the weights you applied to the criteria.
This is called "Sensitivity Analysis". It is easy to see that if
Skills was weighted higher, Employee 3 may have come out the winner.
Also, if the Skills score was a little higher than 25, Employee 3
might have received the highest overall score. Just be aware of this
as you construct your scales for the criteria and as you assign
weights to the criteria.
Model building is usually an iterative process, but once you've tested
your model and you feel confident it has captured your priorities, you
can use it objectively to rank all your employees to see who gets the
award. I think this is where the value of this method lies; in the
end you've built a transparent, standardized, and repeatable process
for selecting employees for the award.
If you're interested in this topic, try looking up articles related to
decision analysis, value-focused thinking, and multiple-objective
decision making to name a few.
Does this help? Please write back if you still have questions.
- Doctor Wilko, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/72033.html","timestamp":"2014-04-16T19:06:15Z","content_type":null,"content_length":"14007","record_id":"<urn:uuid:f2d2a5f7-cb1b-4ebe-be43-f22997dc45ef>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 42
- J. ACM
"... Let G = (V, E) be an undirected weighted graph with |V | = n and |E | = m. Let k ≥ 1 be an integer. We show that G = (V, E) can be preprocessed in O(kmn 1/k) expected time, constructing a data
structure of size O(kn 1+1/k), such that any subsequent distance query can be answered, approximately, in ..."
Cited by 210 (8 self)
Add to MetaCart
Let G = (V, E) be an undirected weighted graph with |V | = n and |E | = m. Let k ≥ 1 be an integer. We show that G = (V, E) can be preprocessed in O(kmn 1/k) expected time, constructing a data
structure of size O(kn 1+1/k), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k − 1, i.e., the quotient
obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdős, implies that Ω(n 1+1/k) space is needed in the worst case for any real
stretch strictly smaller than 2k + 1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence
the name “oracle”. Previously, data structures that used only O(n 1+1/k) space had a query time of Ω(n 1/k). Our algorithms are extremely simple and easy to implement efficiently. They also provide
faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs. 1
- In Proceedings of the ACM-SIGMOD Conference on Management of Data , 2000
"... This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms
(four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is tr ..."
Cited by 65 (9 self)
Add to MetaCart
This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms (four
recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well
as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is
explored. Moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. In most settings, the new algorithms proposed
clearly outperform the existing ones. 1
, 1997
"... This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and
data structures for proximity problems on point sets in IR D . In particular, the closest pair problem, th ..."
Cited by 65 (14 self)
Add to MetaCart
This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.-R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data
structures for proximity problems on point sets in IR D . In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are
discussed in detail. Contents 1 Introduction 1 2 The static closest pair problem 4 2.1 Preliminary remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.2 Algorithms that are optimal in
the algebraic computation tree model . 5 2.2.1 An algorithm based on the Voronoi diagram . . . . . . . . . . . 5 2.2.2 A divide-and-conquer algorithm . . . . . . . . . . . . . . . . . . 5 2.2.3 A
plane sweep algorithm . . . . . . . . . . . . . . . . . . . . . . 6 2.3 A deterministic algorithm that uses indirect addressing . . . . . . . . . 7 2.3.1 The degraded grid . . . . . . . . . . . . . .
. . . . ...
- In Proc. 7th ACM Symp. on Parallel Algorithms and Architectures , 1997
"... Abstract—For years, the computation rate of processors has been much faster than the access rate of memory banks, and this divergence in speeds has been constantly increasing in recent years. As
a result, several shared-memory multiprocessors consist of more memory banks than processors. The object ..."
Cited by 32 (5 self)
Add to MetaCart
Abstract—For years, the computation rate of processors has been much faster than the access rate of memory banks, and this divergence in speeds has been constantly increasing in recent years. As a
result, several shared-memory multiprocessors consist of more memory banks than processors. The object of this paper is to provide a simple model (with only a few parameters) for the design and
analysis of irregular parallel algorithms that will give a reasonable characterization of performance on such machines. For this purpose, we extend Valiant’s bulk-synchronous parallel (BSP) model
with two parameters: a parameter for memory bank delay, the minimum time for servicing requests at a bank, and a parameter for memory bank expansion, the ratio of the number of banks to the number of
processors. We call this model the (d, x)-BSP. We show experimentally that the (d, x)-BSP captures the impact of bank contention and delay on the CRAY C90 and J90 for irregular access patterns,
without modeling machine-specific details of these machines. The model has clarified the performance characteristics of several unstructured algorithms on the CRAY C90 and J90, and allowed us to
explore tradeoffs and optimizations for these algorithms. In addition to modeling individual algorithms directly, we also consider the use of the (d, x)-BSP as a bridging model for emulating a very
high-level abstract model, the Parallel Random Access Machine (PRAM). We provide matching upper and lower bounds for emulating the EREW and QRQW PRAMs on the (d, x)-BSP.
, 2002
"... Modern CPUs have instructions that allow basic operations to be performed on several data elements in parallel. These instructions are called SIMD instructions, since they apply a single
instruction to multiple data elements. SIMD technology was initially built into commodity processors in order to ..."
Cited by 26 (3 self)
Add to MetaCart
Modern CPUs have instructions that allow basic operations to be performed on several data elements in parallel. These instructions are called SIMD instructions, since they apply a single instruction
to multiple data elements. SIMD technology was initially built into commodity processors in order to accelerate the performance of multimedia applications. SIMD instructions provide new opportunities
for database engine design and implementation. We study various kinds of operations in a database context, and show how the inner loop of the operations can be accelerated using SIMD instructions.
The use of SIMD instructions has two immediate performance benefits: It allows a degree of parallelism, so that many operands can be processed at once. It also often leads to the elimination of
conditional branch instructions, reducing branch mispredictions.
- STOC'03 , 2003
"... ..."
, 1996
"... In this paper we consider solutions to the static dictionary problem ���� � on RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all
computational instructions are ���� � in. Our main result is a tight upper and lower bound ���� � ���©���������������� ..."
Cited by 19 (5 self)
Add to MetaCart
In this paper we consider solutions to the static dictionary problem ���� � on RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all computational
instructions are ���� � in. Our main result is a tight upper and lower bound ���� � ���©��������������������� of on the time for answering membership queries in a set of � size when reasonable space
is used for the data structure storing the set; the upper bound can be obtained using space ������ � �� � ���� �. Several variations of this result are also obtained. Among others, we show a tradeoff
between time and circuit depth under the unit-cost assumption: any RAM instruction set which permits a linear space, constant query time solution to the static dictionary problem must have an
instruction of depth �������©���������������©���� � , where � is the word size of the machine (and ���© � the size of the universe). This matches the depth of multiplication and integer division,
used in the perfect hashing scheme by Fredman, Komlós and Szemerédi.
- In Proc. Parallel Architectures and Languages Europe, LNCS 694 , 1993
"... . The influence of several hash functions on the distribution of a shared address space onto p distributed memory modules is compared by simulations. Both synthetic workloads and address traces
of applications are investigated. It turns out that on all workloads linear hash functions, although prove ..."
Cited by 17 (4 self)
Add to MetaCart
. The influence of several hash functions on the distribution of a shared address space onto p distributed memory modules is compared by simulations. Both synthetic workloads and address traces of
applications are investigated. It turns out that on all workloads linear hash functions, although proven to be asymptotically worse, perform better than theoretically optimal polynomials of degree O
(log p). The latter are also worse than hash functions that use boolean matrices. The performance measurements are done by an expected worst case analysis. Thus linear hash functions provide an
efficient and easy to implement way to emulate shared memory. 1 Introduction Users of parallel machines more and more tend to program with the view of a global shared memory. Commercial machines
(with more than 16 processors) however usually have distributed memory modules. Therefore the address space has to be mapped onto memory modules, memory access is simulated by packet routing on a
network connecting ...
, 1997
"... We consider dictionaries of size n over the finite universe U = and introduce a new technique for their implementation: error correcting codes. The use of such codes makes it possible to replace
the use of strong forms of hashing, such as universal hashing, with much weaker forms, such as clus ..."
Cited by 17 (2 self)
Add to MetaCart
We consider dictionaries of size n over the finite universe U = and introduce a new technique for their implementation: error correcting codes. The use of such codes makes it possible to replace the
use of strong forms of hashing, such as universal hashing, with much weaker forms, such as clustering. We use | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=23888","timestamp":"2014-04-18T08:49:06Z","content_type":null,"content_length":"37472","record_id":"<urn:uuid:3a898592-d223-4f17-86ac-49b99465fac7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Safe Stochastic Analysis with Relaxed Limitations on the Periodic Task Model
May 2009 (vol. 58 no. 5)
pp. 634-647
ASCII Text x
Kanghee Kim, Chang-Gun Lee, "A Safe Stochastic Analysis with Relaxed Limitations on the Periodic Task Model," IEEE Transactions on Computers, vol. 58, no. 5, pp. 634-647, May, 2009.
BibTex x
@article{ 10.1109/TC.2008.208,
author = {Kanghee Kim and Chang-Gun Lee},
title = {A Safe Stochastic Analysis with Relaxed Limitations on the Periodic Task Model},
journal ={IEEE Transactions on Computers},
volume = {58},
number = {5},
issn = {0018-9340},
year = {2009},
pages = {634-647},
doi = {http://doi.ieeecomputersociety.org/10.1109/TC.2008.208},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks/ProCite/RefMan/EndNote
TY - JOUR
JO - IEEE Transactions on Computers
TI - A Safe Stochastic Analysis with Relaxed Limitations on the Periodic Task Model
IS - 5
SN - 0018-9340
EPD - 634-647
A1 - Kanghee Kim,
A1 - Chang-Gun Lee,
PY - 2009
KW - Real-time and embedded systems
KW - scheduling
KW - stochastic analysis
KW - worst case analysis
KW - periodic task model.
VL - 58
JA - IEEE Transactions on Computers
ER -
This paper proposes a safe stochastic analysis for fixed-priority scheduling, which is applicable to a broader spectrum of periodic tasks than the ones analyzable by any of the existing techniques.
The proposed analysis can find a safe upper-bound of deadline miss probability for periodic tasks with 1) arbitrary execution time distributions, 2) varying interrelease times with the period as the
minimum, and 3) the maximum utilization factor U^{max} that can be greater than 1. One challenge for this is that the release times of tasks are not known a priori because we are not limiting the
interrelease times of each task to a constant, i.e., the period. In such a situation, the relative phases of task instances at run time can be arbitrary. Thus, we need to consider all possible phase
combinations among jobs to find the worst case deadline miss probability, which is not tractable. To handle this difficulty, we first derive the worst case phase combination for harmonic task sets.
Then, we present a safe way to transform a nonharmonic task set to a harmonic task set such that the deadline miss probabilities obtained with the worst case phase combination for the transformed
harmonic task set are guaranteed to be worse than those for the original nonharmonic task set with all possible phase combinations. Therefore, the worst case deadline miss probabilities of the
transformed harmonic tasks can be used as safe upper-bounds of deadline miss probabilities of the original nonharmonic tasks. Through experiments, we show that the safe upper-bound computed by the
proposed analysis is tight enough for practical uses.
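As a rough illustration of the harmonic idea (a generic "specialization" construction; the paper's actual transformation may differ), one can shrink each period onto a harmonic chain. Shrinking periods only adds releases, so an analysis of the harmonic set stays on the pessimistic, safe side:

```python
def harmonic_specialization(periods):
    """Shrink each period to the largest base * 2**k not exceeding it,
    so the resulting periods form a harmonic chain (each divides the next).
    Shorter periods mean more frequent releases, hence more pessimism."""
    base = min(periods)
    out = []
    for T in sorted(periods):
        k = 0
        while base * 2 ** (k + 1) <= T:
            k += 1
        out.append(base * 2 ** k)
    return out

print(harmonic_specialization([10, 23, 52]))   # -> [10, 20, 40]
```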
[1] L. Abeni and G. Buttazzo, “Stochastic Analysis of a Reservation Based System,” Proc. Ninth Int'l Workshop Parallel and Distributed Real-Time Systems (WPDRTS '01), Apr. 2001.
[2] G. Bernat, A. Colin, and S. Petters, “WCET Analysis of Probabilistic Hard Real-Time Systems,” Proc. 23rd IEEE Real-Time Systems Symp. (RTSS '02), Dec. 2002.
[3] A. Burchard, J. Liebeherr, Y. Oh, and S.H. Son, “New Strategies for Assigning Real-Time Tasks to Multiprocessor Systems,” IEEE Trans. Computers, vol. 44, no. 12, pp. 1429-1442, Dec. 1996.
[4] A. Cervin, “Integrated Control and Real-Time Scheduling,” PhD thesis, Lund Inst. of Technology, 2003.
[5] J.L. Díaz, J.M. López, M. García, A.M. Campos, K. Kim, and L. LoBello, “Pessimism in the Stochastic Analysis of Real-Time Systems: Concept and Applications,” Proc. 25th IEEE Real-Time Systems
Symp. (RTSS '04), Dec. 2004.
[6] M.K. Gardner, “Probabilistic Analysis and Scheduling of Critical Soft Real-Time Systems,” PhD thesis, School of Computer Science, Univ. of Illinois, 1999.
[7] C.-C. Han and H.-Y. Tyan, “A Better Polynomial-Time Schedulability Test for Real-Time Fixed-Priority Scheduling Algorithms,” Proc. 18th IEEE Real-Time Systems Symp. (RTSS '97), Dec. 1997.
[8] K. Kim, J.L. Díaz, L. LoBello, J.M. López, C.-G. Lee, and S.L. Min, “An Exact Stochastic Analysis of Priority-Driven Periodic Real-Time Systems and Its Approximations,” IEEE Trans. Computers,
vol. 54, no. 11, pp. 1460-1466, Nov. 2005.
[9] J.F.C. Kingman, “Inequalities in the Theory of Queues,” J. Royal Statistical Soc., Series B, vol. 32, pp. 102-110, 1970.
[10] C.-G. Lee, L. Sha, and A. Peddi, “Enhanced Utilization Bounds for QoS Management,” IEEE Trans. Computers, vol. 53, no. 2, pp. 187-200, Feb. 2004.
[11] J.P. Lehoczky, “Fixed Priority Scheduling of Periodic Task Sets with Arbitrary Deadlines,” Proc. 11th IEEE Real-Time Systems Symp. (RTSS '90), pp. 201-209, Dec. 1990.
[12] J.P. Lehoczky, “Real-Time Queueing Theory,” Proc. 17th IEEE Real-Time Systems Symp. (RTSS '96), pp. 186-195, Dec. 1996.
[13] J.P. Lehoczky, “Real-Time Queueing Network Theory,” Proc. 18th IEEE Real-Time Systems Symp. (RTSS '97), pp. 58-67, Dec. 1997.
[14] J.P. Lehoczky, L. Sha, and Y. Ding, “The Rate-Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior,” Proc. 10th IEEE Real-Time Systems Symp. (RTSS '89), Dec. 1989.
[15] A. Leulseged and N. Nissanke, “Probabilistic Analysis of Multi-Processor Scheduling of Tasks with Uncertain Parameter,” Proc. Ninth Int'l Conf. Real-Time and Embedded Computing Systems and
Applications (RTCSA '03), Feb. 2003.
[16] J. Leung and J. Whitehead, “On the Complexity of Fixed Priority Scheduling of Periodic Real-Time Tasks,” Performance Evaluation, vol. 2, no. 4, pp. 237-250, 1982.
[17] L. Liu and J. Layland, “Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment,” J. ACM, vol. 20, no. 1, pp. 46-61, 1973.
[18] S. Manolache, P. Eles, and Z. Peng, “Memory and Time-Efficient Schedulability Analysis of Task Sets with Stochastic Execution Times,” Proc. 13th Euromicro Conf. Real-Time Systems (ECRTS '01),
pp. 19-26, June 2001.
[19] M.-Y. Nam, C.-G. Lee, K. Kim, and M. Caccamo, “Time-Parameterized Sensing Task Model for Real-Time Tracking,” Proc. 26th IEEE Real-Time Systems Symp. (RTSS '05), pp. 245-255, Dec. 2005.
[20] K.M. Obenland, POSIX in Real-Time, http://www.xtrj.org/collectionposix_rtos.htm , 2001.
[21] A. Terrasa and G. Bernat, “Extracting Temporal Properties from Real-Time Systems by Automatic Tracing Analysis,” Proc. Ninth Int'l Conf. Real-Time and Embedded Computing Systems and Applications
(RTCSA '03), Feb. 2003.
[22] T.-S. Tia, Z. Deng, M. Shankar, M. Storch, J. Sun, L.-C. Wu, and J.-S. Liu, “Probabilistic Performance Guarantee for Real-Time Tasks with Varying Computation Times,” Proc. Real-Time Technology
and Applications Symp. (RTAS '95), pp. 164-173, May 1995.
[23] W. Yuan and K. Nahrstedt, “Energy-Efficient Soft Real-Time CPU Scheduling for Mobile Multimedia Systems,” Proc. 19th ACM Symp. Operating Systems Principles (SOSP '03), pp. 149-163, Oct. 2003.
Index Terms:
Real-time and embedded systems, scheduling, stochastic analysis, worst case analysis, periodic task model.
Kanghee Kim, Chang-Gun Lee, "A Safe Stochastic Analysis with Relaxed Limitations on the Periodic Task Model," IEEE Transactions on Computers, vol. 58, no. 5, pp. 634-647, May 2009, doi:10.1109/TC.2008.208
{"url":"http://www.computer.org/csdl/trans/tc/2009/05/ttc2009050634-abs.html","timestamp":"2014-04-20T19:09:11Z","content_type":null,"content_length":"58249","record_id":"<urn:uuid:3ce5443a-f93c-4b49-bc90-75e4b443e8ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Percents Part 5
2338 views, 1 rating - 00:11:02
How to write a percent as a fraction, part 3 - even more complicated examples (a small worked sketch follows the question list below).
• How do you write a percent as a fraction?
• How do you change a decimal percent into a fraction in lowest terms?
• How can you write 14.2% as a fraction?
• How do you convert 8.3% to a fraction?
• How do you convert 2.12% to a fraction?
• How do you convert 524% to a fraction?
• What does per cent mean?
• Why does % mean /100 or * 1/100?
• How do you divide a decimal by 100?
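A small worked sketch for checking answers to questions like these, using Python's exact Fraction arithmetic (the helper name is made up):

```python
from fractions import Fraction

def percent_to_fraction(s):
    # "Per cent" means "per hundred": strip the % sign and divide by 100.
    # Fraction("14.2") parses the decimal exactly and reduces automatically.
    return Fraction(s.rstrip("%")) / 100

for s in ["14.2%", "8.3%", "2.12%", "524%"]:
    print(s, "=", percent_to_fraction(s))   # 71/500, 83/1000, 53/2500, 131/25
```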
This is yet another lesson on converting percents to fractions. What makes this lesson more difficult than the previous ones is that the percents have decimals in them. These decimal
percents must be converted to fractions in lowest terms. All steps involved are shown and explained in this percent/fraction tutorial. There are several ways to convert and simplify each
of the problems. | {"url":"http://mathvids.com/lesson/mathhelp/1491-percents-part-5","timestamp":"2014-04-21T05:22:35Z","content_type":null,"content_length":"104064","record_id":"<urn:uuid:072911e6-05c4-4e67-8c82-a978b65f4018>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00272-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maintainer diagrams-discuss@googlegroups.com
Safe Haskell None
Bounding boxes are not very compositional (e.g. it is not possible to do anything sensible with them under rotation), so they are not used in the diagrams core. However, they do have their uses; this
module provides definitions and functions for working with them.
Bounding boxes
data BoundingBox v Source
A bounding box is an axis-aligned region determined by two points indicating its "lower" and "upper" corners. It can also represent an empty bounding box - the points are wrapped in Maybe.
Typeable1 BoundingBox
Eq v => Eq (BoundingBox v)
(Typeable (BoundingBox v), Data v) => Data (BoundingBox v)
Show v => Show (BoundingBox v)
(HasBasis v, Ord (Basis v), AdditiveGroup (Scalar v), Ord (Scalar v)) => Semigroup (BoundingBox v)
(HasBasis v, Ord (Basis v), AdditiveGroup (Scalar v), Ord (Scalar v)) => Monoid (BoundingBox v)
(InnerSpace (V (BoundingBox v)), OrderedField (Scalar (V (BoundingBox v))), InnerSpace v, HasBasis v, Ord (Basis v), AdditiveGroup (Scalar v), Ord (Scalar v), Floating (Scalar v)) => Enveloped (
BoundingBox v)
(VectorSpace (V (BoundingBox v)), VectorSpace v, HasBasis v, Ord (Basis v), AdditiveGroup (Scalar v), Ord (Scalar v)) => HasOrigin (BoundingBox v)
Constructing bounding boxes
emptyBox :: BoundingBox v Source
An empty bounding box. This is the same thing as mempty, but it doesn't require the same type constraints that the Monoid instance does.
Queries on bounding boxes
boxFit :: (Enveloped a, Transformable a, Monoid a, Ord (Basis (V a))) => BoundingBox (V a) -> a -> a Source
Transforms an enveloped thing to fit within a BoundingBox. If it's empty, then the result is also mempty.
Operations on bounding boxes | {"url":"http://hackage.haskell.org/package/diagrams-lib-0.6/docs/Diagrams-BoundingBox.html","timestamp":"2014-04-19T22:25:49Z","content_type":null,"content_length":"45091","record_id":"<urn:uuid:7409dc7f-9d4a-4f9c-ac15-bbb5233689a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Northbrook Math Tutor
Find a Northbrook Math Tutor
...Proper time management and attention to detail are the keys to a high score. This requires effortful engagement with the material and some open-mindedness on the part of the student. The tutor's
job is to provide the student with the strategies that will help them overcome any obstacles.
24 Subjects: including differential equations, discrete math, linear algebra, algebra 1
...It is because of my extensive experience and love of the subject matter that I believe I would be a perfect fit for you and your tutoring needs! I look forward to meeting and working with you
soon, JenniferI graduated with a BS in Biology (with minor in chemistry) and, since then, have taken som...
26 Subjects: including trigonometry, ACT Math, discrete math, ASVAB
...I am a certified teacher with experience teaching Social Studies. Most of my experience is in 7th grade, but this past September through December I was teaching Social Studies to grades K-8,
quite a task. I have taught many subjects such as The Constitution, The Declaration of Independence, Indians, Communities, American West, Civil War, Revolution, Colonies, World and U.S. History.
40 Subjects: including SAT math, ACT Math, algebra 1, algebra 2
I am a former family doctor, who on the side has always liked to teach. Teaching patients is a large part of primary care, as I sought to increase patients' understanding and motivation. It was
also important in truly informing them of their options so they could make an intelligent decision on a treatment plan.
17 Subjects: including algebra 1, algebra 2, biology, chemistry
...I was valedictorian in elementary school, achieved AP scores of 5/5 in French Language, French Literature, AB Calculus, and Biology in high school, and graduated from college as J.N. Honors
Scholar Cum Laude with a BS in Elementary Education. I am certified to teach grades K-8 in Michigan and grades 1-6 in NY.
19 Subjects: including algebra 1, geometry, precalculus, ACT Math | {"url":"http://www.purplemath.com/northbrook_math_tutors.php","timestamp":"2014-04-18T18:45:02Z","content_type":null,"content_length":"23932","record_id":"<urn:uuid:4158435b-0e77-4f00-8184-832e56ff4cbe>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Flow meters-Venturi tube
Venturi effect
The Venturi effect is the reduction in fluid pressure that results when a fluid flows through a constricted section of pipe. The fluid velocity must increase through the constriction to satisfy the
equation of continuity, while its pressure must decrease due to conservation of energy: the gain in kinetic energy is balanced by a drop in pressure or a pressure gradient force. An equation for the
drop in pressure due to venturi effect may be derived from a combination of Bernoulli's principle and the equation of continuity.
The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. In choked flow the mass flow rate will not
increase with a further decrease in the downstream pressure environment.
However, mass flow rate for a compressible fluid can increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain
constant). This is the principle of operation of a convergent-divergent nozzle.
Referring to the diagram to the right, using Bernoulli's equation in the special case of incompressible flows (such as the flow of water or other fluid, or low speed flow of gas), the theoretical
pressure drop (p[1] − p[2]) at the constriction would be given by:

p[1] − p[2] = (ρ/2) (v[2]² − v[1]²)
where ρ is the density of the fluid, v[1] is the (slower) fluid velocity where the pipe is wider, v[2] is the (faster) fluid velocity where the pipe is narrower (as seen in the figure). This assumes
the flowing fluid (or other substance) is not significantly compressible - even though pressure varies, the density is assumed to remain approximately constant.
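Combining this pressure-drop relation with the continuity equation A[1]v[1] = A[2]v[2] yields the usual working formula for the volumetric flow through a constriction. A minimal sketch (the function name and example numbers are invented; the 0.975 discharge coefficient anticipates the standard value quoted for venturi meters below):

```python
from math import pi, sqrt

def venturi_flow(d1, d2, dp, rho, cd=0.975):
    """Volumetric flow (m^3/s) from pressure drop dp (Pa), pipe diameter d1,
    throat diameter d2 (m) and fluid density rho (kg/m^3):
        Q = cd * A2 * sqrt(2*dp / (rho * (1 - (A2/A1)**2)))"""
    a1 = pi * d1 ** 2 / 4
    a2 = pi * d2 ** 2 / 4
    return cd * a2 * sqrt(2 * dp / (rho * (1 - (a2 / a1) ** 2)))

# Water (rho = 1000 kg/m^3), 100 mm pipe necking to a 50 mm throat, dp = 20 kPa
print(venturi_flow(0.10, 0.05, 20e3, 1000.0))   # about 0.0125 m^3/s
```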
Venturi Tube
The venturi tube, illustrated in Figure 3, is the most accurate flow-sensing element when properly
calibrated. The venturi tube has a converging conical inlet, a cylindrical throat, and a diverging recovery cone. It has no projections into the fluid, no sharp corners, and no sudden changes in
contour.

Figure 3: Venturi Tube

The inlet section decreases the area of the fluid stream, causing the velocity to increase and the
pressure to decrease. The low pressure is measured in the center of the cylindrical throat since the pressure will be at its lowest value, and neither the pressure nor the velocity is changing.
The recovery cone allows for the recovery of pressure such that total pressure loss is only 10% to 25%. The high pressure is measured upstream of the entrance cone. The major disadvantages
of this type of flow detection are the high initial costs for installation and difficulty in installation and inspection.
In the venturi meter the fluid is accelerated through a converging cone of angle 15–20° and the pressure difference between the upstream side of the cone and the throat is measured and provides a
signal for the rate of flow.
The fluid slows down in a cone with smaller angle (5–7°) where most of the kinetic energy is converted back to pressure energy. Because of the cone and the gradual reduction in the area there is
no "Vena Contracta". The flow area is at a minimum at the throat.
High pressure and energy recovery makes the venturi meter suitable where only small pressure heads are available.
A discharge coefficient c[d] = 0.975 can be indicated as standard, but the value varies noticeably at low values of the Reynolds number.
The pressure recovery is much better for the venturi meter than for the orifice plate.
The venturi tube is suitable for clean, dirty and viscous liquid and some slurry services.
The rangeability is 4 to 1
Pressure loss is low
Typical accuracy is 1% of full range
Required upstream pipe length 5 to 20 diameters
Viscosity effect is high
Relative cost is medium
{"url":"https://www.classle.net/book/flow-meters-venturi-tube","timestamp":"2014-04-16T05:13:43Z","content_type":null,"content_length":"57960","record_id":"<urn:uuid:1cd6b234-a664-4780-b8a4-e7c9323680ca>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Endogenous variable in
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: Endogenous variable in
From revollo7@etu.unige.ch
To statalist@hsphsun2.harvard.edu
Subject st: Endogenous variable in
Date Wed, 24 Nov 2004 20:08:34 +0100
I am a Master's student at the University of Geneva, and I am computing the estimation of these 2 equations. The first is a log-log cost function with 3 variables,
using panel data.
(1) ln C(t) = beta_{q} ln q(t) + beta_{R} ln R(t) + beta_{pm} ln pm(t)
(2) (P(t) - e^{rt} P_{0}) - beta_{q} [ C_{t}/q_{t} - e^{rt} C_{0}/q_{0} ]
= beta_{R} [ (1+r)^t C_{0}/R_{0} + (1+r)^{t-1} C_{1}/R_{1} + ... + (1+r) C_{t-1}/R_{t-1} + C_{t}/R_{t} ]
Empirical strategy :
1. estimate (1) by fixed effects 2sls IV
2. estimate the system by 3SLS; however, it is nonlinear and the presence of lagged endogenous variables means, I was told, that the Fair estimator is required. Fair, Ray C. "The Estimation
of Simultaneous Equation Models with Lagged Endogenous Variables and First Order Serially Correlated Errors," Econometrica 38 (May 1970) 507-16.
How can I compute the Fair estimator in Stata?
Thank you for your time and help.
Gustavo Revollo
cite universitaire
46 av. miremont
1206 geneve
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2004-11/msg00755.html","timestamp":"2014-04-16T13:22:28Z","content_type":null,"content_length":"5648","record_id":"<urn:uuid:6cf842cc-abc2-470d-a3ef-b37b421fcb8c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: The Expressive Power of Voting Polynomials
James Aspnes† Richard Beigel‡ Merrick Furst§ Steven Rudich§
October 14, 1993
Yale University, Dept. of Computer Science, P.O. Box 208285, New Haven CT 06520-8285.
†Email: aspnes-james@cs.yale.edu.
‡Email: beigel-richard@cs.yale.edu. Supported in part by NSF grants CCR-8808949 and CCR-8958528.
§Carnegie-Mellon University, School of Computer Science, Pittsburgh, PA 15213-3890.
We consider the problem of approximating a Boolean function f : {0,1}^n → {0,1} by the sign of an integer polynomial p of degree k. For us, a polynomial p(x) predicts the value of f(x)
if, whenever p(x) ≥ 0, f(x) = 1, and whenever p(x) < 0, f(x) = 0. A low-degree polynomial p is a good approximator for f if it predicts f at almost all points. Given a positive integer
k, and a Boolean function f, we ask, "how good is the best degree k approximation to f?"
We introduce a new lower bound technique which applies to any Boolean function. We show
that the lower bound technique yields tight bounds in the case f is parity. Minsky and Papert
[10] proved that a perceptron can not compute parity; our bounds indicate exactly how well
a perceptron can approximate it. As a consequence, we are able to give the first correct proof
that, for a random oracle A, PP^A is properly contained in PSPACE^A. We are also able to prove
the old AC0 exponential-size lower bounds in a new way. This allows us to prove the new result | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/916/4779797.html","timestamp":"2014-04-23T18:40:44Z","content_type":null,"content_length":"8614","record_id":"<urn:uuid:0aa3f1d7-80c9-46d6-ab9e-a9ef7f933182>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/WARC/CC-MAIN-20140423032003-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
HELP in differential calculus
April 2nd 2009, 02:03 PM #1
Apr 2009
hey, I have 2 questions that I need help with.
1) If U = tan^-1((x^3+y^3)/(x-y)), prove that x*du/dx + y*du/dy = Sin2U
2) d^2u/(dy dx) = d^2u/(dx dy) when U = log((x^2+y^2)/(xy))
Please help
How does "U" relate to "u"? What did you find when you took the various derivatives?
Please be complete. Thank you!
I think 'U' and 'u' are the same thing ??? Does that make any sense ?
1) If U = tan^-1((x^3+y^3)/(x-y)), prove that x*du/dx + y*du/dy = Sin2U
this has something to do iwth Partial derivatives
2) d^2u/(dy dx) = d^2u/(dx dy) when U = log((x^2+y^2)/(xy))
and this has something to do with Successive Partial Derivatives
Well, honestly i am totally pathetic at this calculus so I have no clue at all
Okay, sorry for going into the history of pointless things... I'm sure you don't really care.
THANK YOU EVA SO MUCH
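For later visitors to this thread, one standard route to question 1 (sketched assuming U is differentiable away from x = y) goes through Euler's theorem on homogeneous functions:

```latex
f := \tan U = \frac{x^3+y^3}{x-y}, \qquad f(tx,ty) = t^2\, f(x,y),
\text{so } f \text{ is homogeneous of degree 2 and Euler's theorem gives } x f_x + y f_y = 2f.
\text{Since } f_x = \sec^2 U \,\frac{\partial U}{\partial x} \text{ (similarly in } y\text{)},
\sec^2 U \Big( x\,\frac{\partial U}{\partial x} + y\,\frac{\partial U}{\partial y} \Big) = 2\tan U
\;\Longrightarrow\;
x\,\frac{\partial U}{\partial x} + y\,\frac{\partial U}{\partial y} = 2\sin U \cos U = \sin 2U.
```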
{"url":"http://mathhelpforum.com/calculus/81999-help-differential-calculus.html","timestamp":"2014-04-20T04:23:23Z","content_type":null,"content_length":"34897","record_id":"<urn:uuid:c0277739-c560-40fc-a53f-e926bdc0296b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
December 2
In part 1 we reviewed the basic first steps of making an 80/20 analysis, using customer profit data. Now, at long last, we'll go over how you can turn the data into information and action.
Use the results of an 80/20 analysis to help you see patterns you might not have noticed (or didn't want to notice) otherwise.
The most obvious would be that a small group of customers produces a huge portion of the profit. This is super important. I know I am repeating myself from part 1, but hey - it is important.
If you lose a small portion of this top customer business, you lose a big chunk of profit. Not just sales - profit.
On the other hand, if you treat them well, find out what you are doing right (best way to find out? - ask them!), and apply the time and resources to do more of it, any increase in activity with
these top customers adds a big chunk of profit. It's the leverage of 80/20 and it works both ways.
Next scan your prospect list and the bottom 80% of customers for the ones that could be in the top 20% and think about how to get them there.
T.S. had an excellent comment about negative ways 80/20 information is all too often used. I agree with him.
Don't go telling your customers that you have run any kind of analysis. They really shouldn't know how they compare with your other customers.
Your customers only need to know that you appreciate them by how well you take care of their needs and wants. The analysis is to remind you to do just that.
Don't use the numbers to apply silly labels and give cheesy plaques to the top customers while effectively letting the rest know that they are second-rate.
But do use this great information internally in order to allocate precious resources like time. There are plenty of ways to make sure you are using these resources where they will have the most
impact (leveraging the top 20%), without kissing off the bottom 80% of your customers.
So that will lead to part 3.
The 80/20 principle is a powerful tool that can be used in many ways. This is the time of year to employ the 80/20 principle to help focus your planning for 2009.
The simple applications of the 80/20 principle tell us that, typically:
80 percent of our sales come from 20 percent of our customers
80 percent of our profits come from 20 percent of the products we offer
80 percent of productivity comes from 20 percent of our efforts, etc.
Now these numbers might end up being 70/30 or 95/5, but the principle is remarkably consistent, and it provides us in fastener marketing with a strategy management consultant that is objective,
reliable, and that doesn't require payment or even so much as a cup of coffee.
The 80/20 principle is completely free and works 24/7. The trick is in finding ways to apply it so that we can maximize its effectiveness. As a matter of fact, you might say that 20 percent of the
ways we use the 80/20 principle will produce 80 percent of the results.
So in fastener marketing we can find many uses for the 80/20 principle. Let's start with one of the obvious ones - customer profitability.
If you have a decent database you should be able to print a list of all of your customers (whether you are doing this for your territory as a salesperson, or your district as a district manager, or
the whole company as the big cheese) and how much profit (notice I did not say sales) you have made from each. Try the period of 11/1/07 - 10/31/08. That's it. Keep it simple. 80/20 loves simple.
Now sort that list (paste it into excel if you need to) by total annual profit from each customer, with the greatest profit total at the top, and working down to the customer that contributed 64
cents of profit during those 12 months.
Now run a profit total for the whole list, let's say it is $100,000. Now you want to see how many of your customers give you 80% of that profit (in our example - $80,000).
You know what is going to happen, but you still have to do this. You have to see it. Go down the list until you accumulate $80,000 in profit. You will be surprised that you don't have to go very far
down the list before you hit that magic number. Now, the portion of customers won't be exactly 20%, but it will probably be a striking figure.
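If the customer/profit list comes out of your database as a table, the whole exercise is a few lines of script. A minimal sketch (the customer names and figures are made up; the totals mirror the $100,000 example above):

```python
def pareto_cut(profits, share=0.80):
    # Rank customers by profit (largest first) and walk down the list
    # until the running total reaches `share` of all profit.
    ranked = sorted(profits.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(p for _, p in ranked)
    top, running = [], 0.0
    for name, p in ranked:
        top.append(name)
        running += p
        if running >= share * total:
            break
    return top, running, total

profits = {"Acme": 42000, "Bolt Co": 31000, "Crane": 9000, "Dyno": 5000,
           "Ed's": 4000, "Fast Inc": 3500, "Gears": 3000, "Hexco": 2500}
top, running, total = pareto_cut(profits)
print("%d of %d customers give %.0f%% of profit: %s"
      % (len(top), len(profits), 100 * running / total, ", ".join(top)))
```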
Now, the first thing you should do is find some meaningful way to show your appreciation to those top few customers who are giving you such a huge portion of your profit. Don't call them and say,
"Hey, I ran an 80/20 analysis and you are in the top group, so thanks!" but figure out how to make sure the customer feels appreciated, and to make sure you are not taking them for granted.
That is a very important step, but it is only the beginning of how you can use these 80/20 results. So stay tuned for Part 2. | {"url":"http://fastenerfreaks.blogspot.com/2008_12_01_archive.html","timestamp":"2014-04-19T14:29:29Z","content_type":null,"content_length":"39398","record_id":"<urn:uuid:91092e24-05ce-43af-a9c5-3fd380ddfc3b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
C++ algorithm books
10-08-2002 #1
Hey everyone, I was looking for some good problem-solving books for C++ and came across "Algorithms in C++" by Robert Sedgewick. Anyone recommend this book or have any gripes about it? Also, anyone
know of any other great books for a semi-newbie that explores algorithms and problem solving? Thanks!
Personally I don't like Sedgewick's book at all. I would instead suggest these three books...
Introduction to data structures and algorithms with c++ by Glenn Rowe.
Data structures and algorithm analysis in c++ by Mark Weiss.
Data structures and problem solving with c++ by Mark Weiss.
Free the weed!! Class B to class C is not good enough!!
And the FAQ is here :- http://faq.cprogramming.com/cgi-bin/smartfaq.cgi
For semi-newbies:
C++ Data Structures book by D. Malik - Course Technology
Also, Schaums DS with C++ Hubbard- McGraw Hill
I have more advanced texts- for non newbies- but these should get you started.
Also, Sedgewick is Ok, I would not personally teach out of it. Rowe and Weiss book are Ok as well.
Mr. C: Author and Instructor
I think Sedgewick is not a bad book. However, whoever is responsible for coding the examples should be shot in the face. Actually, it says "with C++ consulting by Christopher J. Van Wyk" so
there you have it.
I'm a fan of Sedgewick's book, but not the code examples.
My best code is written with the delete key.
What exactly is the problem with Sedgewick's books? Since this is the only book at my bookstore dealing with algorithms, it would be nice to know something concrete that's wrong with it before searching
solely on the web. Also, Mister C - do the books that you recommend get into any problem solving techniques? Thanks all!
>What exactly is the problem with Sedgewicks books?
The code examples exhibit bad style. They obviously aren't meant for production code, but that is never mentioned anywhere...some of them also use void main.
My best code is written with the delete key.
Well, considering I plan on doing most of my own examples to learn instead of copying from the book, Sedgewick's book sounds just fine for me. Thanks Prelude! Amazing how often you help me
Stone_Coder (or anyone!), I just searched on the web and found some cheap versions of Weiss's books, but they were all old editions (like 1994 and 1996). Do you know if there's much difference
between these editions and the latest updates? The newest editions were a little pricey considering shipping has to be included...
Sorry for not getting back sooner. The Malik does a little bit (problem solving). The Hubbard does not. They both cover the wide range of data structure topics - and how to use them.
Most DS books assume you already know how to problem solve.
Mr. C: Author and Instructor
Thanks! I found a Sedgewick book as a PDF, but it's rather old and uses Pascal
I have his Algorithms in C++ book and it is decent, but for a novice to intermediate programmer. If you have a lot of experience under your belt, you could go with something a little more advanced.
Unless you consider two months as having lots of experience, I'm definitely a newbie!
The best part is that Sedgewick's book is far more readable than Knuth's, which is the only algorithms book more detailed that I have seen.
I found Introduction to Algorithms to be one of the best. Sedgewick's code is really too compressed. It's as if he's trying to fit it all in one page. Weiss's books are too light on analysis and they use C++/C code depending on which one you get. The analysis that is used is not explained really well. Introduction to Algorithms really goes to great lengths to make the mathematics self-contained.
>Sedgewick's code is really too compressed.
That's a good thing for a book like this. You can move through the code without getting lost after flipping 6 pages. Though that may just be me.
My best code is written with the delete key.
That's a good thing for a book like this. You can move through the code without getting lost after flipping 6 pages.
Well, 6 pages per algorithm is Weiss's book. Really, few of the algorithms in Introduction to Algorithms are more than a page.
I don't think it is a good introductory book, unless you are well versed with some form of programming or higher mathematics.
I'm not well versed in mathematics, but I can see someone having a tough time with it. It's not nearly as bad as Knuth's though.
{"url":"http://cboard.cprogramming.com/cplusplus-programming/26104-cplusplus-algorithm-books.html","timestamp":"2014-04-16T13:59:54Z","content_type":null,"content_length":"97621","record_id":"<urn:uuid:22c28f07-bada-408d-aece-17b1291da77d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Phase-shift keying
In telecommunication, the term phase-shift keying (PSK) has the following meanings:
1. In digital transmission, angle modulation in which the phase of the carrier is discretely varied in relation either to a reference phase or to the phase of the immediately preceding signal element, in accordance with data being transmitted.
2. In a communications system, the representing of characters, such as bits or quaternary digits, by a shift in the phase of an electromagnetic carrier wave with respect to a reference, by an amount
corresponding to the symbol being encoded.
Note 1: For example, when encoding bits, the phase shift could be 0° for encoding a "0," and 180° for encoding a "1," or the phase shift could be -90° for "0" and +90° for a "1," thus making the
representations for "0" and "1" a total of 180° apart.
Note 2: In PSK systems designed so that the carrier can assume only two different phase angles, each change of phase carries one bit of information, i.e., the bit rate[?] equals the modulation rate.
If the number of recognizable phase angles is increased to 4, then 2 bits of information can be encoded into each signal element[?]; likewise, 8 phase angles can encode 3 bits in each signal element.
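A tiny numeric illustration of the first convention in Note 1 (0° encodes a "0", 180° encodes a "1"); the sample rate and the function name are arbitrary choices for the sketch:

```python
import math

def bpsk(bits, samples_per_bit=8):
    # Phase 0 encodes a "0" and phase pi (180 degrees) encodes a "1",
    # following the first convention described in Note 1 above.
    wave = []
    for b in bits:
        phase = math.pi if b else 0.0
        wave.extend(math.cos(2 * math.pi * n / samples_per_bit + phase)
                    for n in range(samples_per_bit))
    return wave

print(["%+.2f" % s for s in bpsk([0, 1])])   # second symbol is the first, negated
```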
Synonyms biphase modulation, phase-shift signaling.
Source: from Federal Standard 1037C and from MIL-STD-188
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/ph/Phase-shift_keying","timestamp":"2014-04-19T14:45:00Z","content_type":null,"content_length":"14901","record_id":"<urn:uuid:0fdf7df3-8820-4468-a05f-cdfb86cb558c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analysis with an Introduction to Proof
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/analysis-introduction-proof-5th-lay-steven/bk/9780321747471","timestamp":"2014-04-16T20:01:02Z","content_type":null,"content_length":"29146","record_id":"<urn:uuid:2fe04f29-53bd-428f-bbde-14ea4dd702c9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Can One Test a Program's Average Performance?
December 12, 2013
The standard-library sort function. This function typically implements the Quicksort algorithm, which sorts an n-element sequence in O(n log n) time — on average.
Last week, I argued that testing a program's performance is harder than testing its functionality. Not only is it hard to verify that the performance is up to par, but it can be hard to define
exactly what "par" means.
I would like to continue by looking at the standard-library sort function. This function typically implements the Quicksort algorithm, which sorts an n-element sequence in O(n log n) time — on
average. Despite this average performance, input that is chosen unfortunately can cause a Quicksort implementation to run much more slowly than average; for example, in O(n^2) time. I chose the word
unfortunately on purpose, because it is not unusual for Quicksort implementations to use randomness to ensure that the quadratic-performance cases come along only very rarely.
Why is randomness important here? Quicksort starts by picking an element, called the pivot, of the array to be sorted. Quicksort then typically rearranges the elements of the sequence so that all
elements less than or equal to the pivot come first, followed by all of the elements greater than the pivot. This rearrangement can always be done in O(n) time. Finally, Quicksort calls itself
recursively to sort the elements of the two sections of the (now rearranged) array.
Accordingly, Quicksort's running time is no worse than proportional to the number of elements times the maximum recursion depth. By implication, Quicksort's performance depends on having the
recursion depth usually be no more than O(log n). This depth limit can be achieved so long as the pivot, on average, is not too close to the largest or smallest element.
How does Quicksort guarantee that the pivot is not too close to the endpoints? In general, it can't. Nevertheless, it can avoid performance problems most of the time by picking the pivot at random.
Doing so ensures that Quicksort's average performance is reasonable, even though once in a while the pivot might happen to be close enough to an endpoint to cause performance problems. Such
occasional problems aren't a big deal as long as they're rare. Right?
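A minimal sketch of the randomized pivot choice just described (illustrative Python, not the actual implementation behind any standard library's sort):

```python
import random

def quicksort(a):
    # Picking the pivot uniformly at random makes the O(n^2) recursion
    # depth unlikely for every input, but never impossible on a given run.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

print(quicksort([5, 3, 8, 1, 9, 2, 7]))
```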
Well, that depends. Suppose your job is to write performance tests for an implementation of Quicksort.
• How do you translate the vague "average performance" claim in the C++ standard into a requirement that it is possible to test at all?
• How you test Quicksort in a way that gives you any confidence in the results?
What makes average performance so hard to test is that the very notion has an element of probability in it. If a program is required to produce a particular result, then you can say with certainty
that the result of a particular test run is either right or wrong. In contrast, if you are testing a requirement on average performance, no single test can be said with certainty to be right or
wrong. The best you can hope for is that by running more and more tests, you can increase your confidence that the program is working correctly; there is always the possibility that further testing
may cause you to change your mind about the program's correctness.
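To make that concrete, here is a minimal sketch of a statistical timing harness. The trial count, the 1.96 normal-approximation factor, and the example workload are illustrative choices, not a rigorous acceptance criterion:

```python
import random, statistics, time

def mean_runtime(fn, make_input, trials=200):
    # Estimate the average running time with a rough 95% confidence
    # interval (normal approximation over independent trials).
    times = []
    for _ in range(trials):
        data = make_input()
        t0 = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - t0)
    mean = statistics.mean(times)
    half_width = 1.96 * statistics.stdev(times) / len(times) ** 0.5
    return mean, half_width

m, h = mean_runtime(sorted, lambda: [random.random() for _ in range(10_000)])
print("mean %.3f ms +/- %.3f ms" % (m * 1e3, h * 1e3))
```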
In short, if the performance requirements include claims about average execution time, testing those claims is apt to require some kind of statistical analysis. Such analysis is not always easy, but
certainly has a long tradition in engineering. As an example, consider American Airlines flight 191.
Flight 191 took off from O'Hare Airport on May 25, 1979. Just as the airplane was leaving the ground, the engine on the left wing seized up and separated from the wing. The engine was attached to the
wing by shear pins that were designed to break rather than damaging the wing. Nevertheless, because of faulty maintenance, the wing was damaged; that damage caused the airplane to go out of control
and crash, killing everyone aboard.
In reading about the ensuing investigation, I saw a discussion of how a different aircraft manufacturer tested its shear pins in order to ensure that — assuming that the aircraft is maintained
properly — the pins will allow the engine to leave the wing rather than damage it. It hasn't occurred to me before, but a major engineering problem in designing shear pins is that the purpose of a
shear pin is to break if too much force is applied to it. There is no way to test whether a pin meets that requirement without destroying it. It follows, therefore, that the pins that are actually
used in the airplane cannot be tested.
How can one possibly be confident in the safety of an airplane that is built this way? The answer is quite clever.
• The engine is attached to the wing with several shear pins in such a way that even if one of them fails to break, the engine will still separate from the wing rather than damage the wing.
• The shear pins are manufactured in batches of 100, all made at the same time in the same way.
• From each batch of 100 pins, ten pins are selected at random and tested, thereby destroying them. If all ten pins pass the tests, the other 90 are assumed to be good enough to use. If even a
single pin fails, the entire batch is discarded.
Obviously, this design involves not only clever mechanical engineering, but also sophisticated statistical reasoning. The limits on the pins must be chosen so that the probability of two randomly
chosen pins being out of limits is very small once the 10% sample of the pins has passed its tests. I imagine that this probability can be made even smaller by making the limits on the tested pins
narrower than the pins need to be in practice.
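The arithmetic behind that confidence is a hypergeometric tail. A small sketch of the batch-acceptance probability as a function of how many pins in a batch are actually out of spec (the sampling numbers follow the scheme described above):

```python
from math import comb

def batch_passes(bad, batch=100, tested=10):
    # Probability that all `tested` pins drawn without replacement are good
    # when `bad` of the `batch` pins are out of spec.
    return comb(batch - bad, tested) / comb(batch, tested)

for bad in (1, 5, 10, 20):
    print("%2d bad pins -> batch passes with probability %.3f"
          % (bad, batch_passes(bad)))
```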
I would not want to have to do this kind of statistical analysis in order to test the performance of a Quicksort implementation. Even if I were confident enough in my ability to get the statistics
right, there is always the possibility that a future change to the specifications or to the test procedure might render the statistics invalid. Moreover, there is one important difference between
algorithms such as Quicksort and mechanical devices such as shear pins, namely that algorithms are sometimes given difficult inputs on purpose. For example, Doug McIlroy wrote a paper in 1999 that
detailed how one can construct input to Quicksort that will force it to take O(n^2) operations to sort an n-element array. Does a Quicksort implementation that misbehaves in this way fail to meet its
specifications? If so, it's hard to see how we can use Quicksort at all.
One way to simplify such performance-testing problems is to use white-box testing, which is testing that takes advantage of knowledge of the program's implementation details. I'll discuss such
testing techniques in more detail next week. | {"url":"http://www.drdobbs.com/cpp/how-can-one-test-a-programs-average-perf/240164691?cid=SBX_ddj_related_mostpopular_default_mobile&itc=SBX_ddj_related_mostpopular_default_mobile","timestamp":"2014-04-17T06:57:40Z","content_type":null,"content_length":"97629","record_id":"<urn:uuid:8ae5622a-7060-4d88-9fba-1bcab9cd0587>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Artificial Intelligence: Problem Set 3
Assigned: Feb. 27 Due: Mar. 20
Consider a domain whose entities are people, books, and volumes. (A "volume" is a particular physical object, which is a copy of an abstract "book", like Moby Dick).
Let L be the first-order language containing the following predicates:
o(P,V) --- Predicate. Person P owns volume V
c(V,B) --- Predicate. Volume V is a copy of book B.
a(P,B) --- Predicate. Person P is the author of book B.
i(P,B) --- Predicate. Person P is the illustrator of book B.
h --- Constant. Howard Pyle.
s --- Constant. Sam.
j --- Constant. Joe.
Problem 1
Express the following statements in L (a worked sketch for one of them follows the list): (Note correction to sentence d.)
• a. Sam owns a copy of every book that Howard Pyle illustrated.
• b. Joe owns a copy of a book that Howard Pyle wrote.
• c. Howard Pyle illustrated every book that he wrote.
• d. Sam owns only illustrated books. Interpret this in the form "If Sam owns volume V and V is a copy of book B, then B has been illustrated by someone."
• e. None of the books that Joe has written have been illustrated by anyone.
• f. Sam does not own a copy of any book that Joe has written.
• g. There is a book B such that both Sam and Joe own a copy of B.
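As a sketch of the intended style (an illustration, not an official solution), statement (c), "Howard Pyle illustrated every book that he wrote," could be rendered as:

```latex
\forall B \,\big( a(h,B) \rightarrow i(h,B) \big)
```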
Problem 2
Using resolution, show that (g) can be proven from (a-c) and that (f) can be proven from (d,e). You must show the Skolemized form of each of the axioms and of the negated goals. You need not show the
intermediate steps of Skolemization. | {"url":"http://cs.nyu.edu/courses/spring01/G22.2560-001/hwk3.html","timestamp":"2014-04-18T03:01:08Z","content_type":null,"content_length":"2010","record_id":"<urn:uuid:d8bcd2ec-e706-4af6-9679-00b294c562e4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
MPPT hunting
In Hardware on May 21, 2013 at 00:01
Solar panels are funny power sources: for each panel, if you draw no power, the voltage will rise to 15..40 V (depending on the type of panel), and when you short them out, a current of 5..12 A will
flow (again, depending on type). My panels will do 30V @ 8A.
Note that in both cases just described, the power output will be zero: power = volts x amps, so when either one is zero, there’s no energy transfer! – to get power out of a solar panel, you have to
adjust those parameters somewhere in between. And what’s even worse, that optimal point depends on the amount of sunlight hitting the panels…
That’s where MPPT comes in, which stands for Maximum Power Point Tracking. Here’s a graph, copied from www.kitenergie.com, with thanks:
As you draw more current, there’s a “knee” at which the predominantly voltage-controlled output drops, until the panel is asked to supply more than it has, after which the output voltage drops very
Power is the product of V and A, which is equivalent to the surface of the area left of and under the current output point on the curve.
But how do you adjust the power demand to match that optimal point in the first place?
The trick is to vary the demand a bit (i.e. the current drawn) and then to closely watch what the voltage is doing. What we’re after is the slope of the line – or in mathematical terms, its
derivative. If it’s too flat, we should increase the load current, if it’s too steep, we should back off a bit. By oscillating, we can estimate the slope – and that’s exactly what my inverter seems
to be doing here (but only on down-slopes, as far as I can tell):
As the PV output changes due to the sun intensity and incidence angle changing, the SMA SB5000TL inverter adjusts the load it places on the panels to get the most juice out of ‘em.
Neat, eh?
Update – I just came across a related post at Dangerous Prototypes – synchronicity!
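A minimal perturb-and-observe sketch of the hunting behaviour described above. The toy panel curve, the step size, and the names are invented; real trackers also adapt the step size and sampling rate:

```python
def perturb_and_observe(panel_power, i0=1.0, step=0.25, iters=60):
    # Nudge the load current; keep the direction while output power rises,
    # reverse it when power falls. The oscillation estimates the slope.
    i, direction = i0, +1
    last_p = panel_power(i)
    for _ in range(iters):
        i += direction * step
        p = panel_power(i)
        if p < last_p:
            direction = -direction
        last_p = p
    return i, last_p

# Toy curve, very loosely shaped like a 30 V / 8 A panel with a knee
power = lambda i: i * 30.0 * max(0.0, 1 - (i / 9.5) ** 8)
print(perturb_and_observe(power))   # settles near the maximum power point
```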
1. This might be asking for the sake of asking, but the sketch above is only for a certain amount of insolation. Is there - as far as you know - a difference in the SHAPE of the MPP curve with changed insolation?
Would a JeeNode be able to do MPP on a solar cell using a suitable R/C/FET combo?
□ The shape does not change very much with insolation (only due to secondary effects). Of course, the short circuit current is linearly dependant on insolation :-). The output voltage has a
quite strong negative temperature depedance. MPP tracking is mostly done quite slowly (changes in insolation are also slow), a Jeenode has much more than the necessary processing power to do
2. Maybe an L/C/FET combo would be more effective. As far as I can see, a resistor could only waste power.
3. I’m a complete newbie at this but how do you make use of the MPPT info? How can you tune and control the output current draw and voltage? And how can you make optimal use of it?
4. Baruch, an MPPT charge controller (e.g., to charge batteries from PV) or a grid-tie inverter will work a bit like a switch-mode power supply. It changes the frequency or mark-space ratio or
whatever to control the amount of current, and hence the voltage, drawn from the PV panels for optimum power transfer to the batteries or grid.
Normally the device will hunt up and down the power curve, varying the current, looking for the optimum, automatically tracking changes as the sunlight varies. Sometimes it has to sweep the whole
range in case it’s got caught up on a local maximum – this sort of thing can happen if you have multiple panels and some of them are partially shaded. | {"url":"http://jeelabs.org/2013/05/21/mppt-hunting/","timestamp":"2014-04-17T06:41:59Z","content_type":null,"content_length":"31588","record_id":"<urn:uuid:b68ac524-db9d-43a4-8378-c7b89a13e0ea>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integral question (verifying my answer)
• 11 months ago
\[\int\limits_{}^{} t\sin(t)\cos(t)\,dt\] I did as follows: integration by parts with u=t, du=dt / dv=\sin(t)\cos(t)\,dt, v=\sin^2(t)/2, which gives \[(t\sin^{2}t)/2 - \int\limits_{}^{}(\sin^{2}t)/2\,dt\]
then I get (just for the integral) \[(1/4)\int\limits_{}^{}(1-\cos 2t)\,dt \rightarrow (1/4)(t-(\sin 2t)/2)\] So my final answer looks like this: \[(t\sin^{2}t)/2 - (1/4)(t-(\sin 2t)/2) + c\]
The answer key did it differently; I just wanted to know if someone could help me confirm whether or not my answer looks right.
Hmm your process was a little strange. If you start by applying the `Double Angle Formula for Sine` it makes this problem a bit easier in my opinion. You end up with a solution of,\[\large -\frac
{t}{4}\cos(2t)+\frac{1}{8}\sin(2t)+C\] But looking at your solution, applying some identities, I can now see that it is equivalent. So it looks like it worked out for you! :)
Yeah it was a little strange to me, I just remembered a problem where it used substitution within the integration by parts. I honestly kinda forgot how to use my substitution to prove a trig identity (3 years ago), but alright, thanks ^^ It gets kinda hard to notice whether or not an answer is right when different methods can apply ahaha!
Yes, the solution used double angle so I didn't come up with the same answer!
Ah I see c:
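For anyone checking along at home, a quick symbolic confirmation (assuming SymPy is installed):

```python
import sympy as sp

t = sp.symbols('t')
F = sp.integrate(t * sp.sin(t) * sp.cos(t), t)
print(F)                                                       # one antiderivative
print(sp.simplify(sp.diff(F, t) - t * sp.sin(t) * sp.cos(t)))  # -> 0
```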
{"url":"http://openstudy.com/updates/5175dd9ae4b0050fabb9e31b","timestamp":"2014-04-16T13:14:35Z","content_type":null,"content_length":"40475","record_id":"<urn:uuid:63f93da3-114f-456c-89d7-18a5bcea503d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00020-ip-10-147-4-33.ec2.internal.warc.gz"}
Perpetual Motion of The 21st Century?
January 30, 2012
Are quantum errors incorrigible? Discussion between Gil Kalai and Aram Harrow
Gil Kalai and Aram Harrow are world experts on mathematical frameworks for quantum computation. They hold opposing opinions on whether or not quantum computers are possible.
Today and in at least one succeeding post, Gil and Aram will discuss the possibility of building large-scale quantum computers.
Quantum computers provide a 21st Century field for the kind of debate first led by Albert Einstein about the reach of quantum theory. One thought experiment by which Einstein tried to contravene the
Uncertainty Principle can be described as having asserted that quantum theory implies the creation of perpetual motion machines, which are impossible machines. In a later attempt, after initial
puzzlement, Niels Bohr pointed out that Einstein himself had neglected to correct for gravity’s effect on time in general relativity.
Perpetual motion machines were the dream of many inventors over the centuries—and why not? Having a machine that could create useful work but consume no fuel would change the world. Alas advances in
our understanding of physics have ruled them out: there is indeed no free lunch. The designs at right look like birds-of-a-feather, but the rightmost was designed in 1960 by Hermann Bondi to
illustrate Bohr’s correction above.
The guest discussions here between Gil Kalai and Aram Harrow address the fundamental question:
Are quantum computers feasible? Or are their underlying models defeated by some fundamental physical laws?
Those like Royal Society co-founder John Wilkins who in 1670 wrote of perpetual motion machines did not know of the second law of thermodynamics. We, Dick and Ken, would like to think that if blogs
like GLL were around centuries ago there might have been a more penetrating discussion than even the Royal Society could foster. We are here now, and we are very honored that Gil and Aram wish to use
GLL as a location to discuss this interesting, important, and wonderful question. We believe in the win-win that either we will have wonderful quantum computers, or we will learn some new laws of
nature, particularly about information.
For a roadmap, Gil and Aram will alternate thesis-response in these posts, talking about quantum error-correction and fault tolerance. However, we also invite you, the reader, to take part in the
debate sparked by Gil's paper, "How quantum computers fail: Quantum codes, correlations in physical systems, and noise accumulation." Perhaps they and we will react to comments. We thank them greatly,
and have worked to make the issues even more accessible.
Guest Post: Gil Kalai
The discovery by Peter Shor of the famous quantum algorithm for fast integer factoring gave a strong reason to be skeptical about quantum computers (QC’s), along with an even stronger reason for
wanting to build them. Shor is also the pioneer of quantum error-correction and quantum fault-tolerance, which give good reasons to believe that QC’s can be built. Other researchers have focused on
this very issue, and the physics community is filled with work on many approaches to building practical QC’s.
In my (Gil’s) part of the world, Michael Ben-Or is a world leader in theoretical computer science with major contributions in cryptography, complexity, randomization, distributed algorithms, and
quantum computation. Among the famous notions associated with Michael’s work before he turned quantum are non-cryptographic fault-tolerance, multi-prover interactive proofs, and algebraic decision
trees. Dorit Aharonov is one of the great quantum computation researchers in the world and she has studied, among other things, fault tolerance, adiabatic computation, lattice problems, computation
of Jones polynomials, and quantum Hamiltonian complexity.
Aharonov and Ben-Or proved in the mid-1990s (along with other groups) the threshold theorem which allows fault tolerant quantum computation (FTQC) at least in theory. The following photo shows them
on the road in Jerusalem in 2005 with me at left, and on the right Robert Alicki, a famous quantum physicist from Gdansk, Poland, known for work on quantum dynamical systems.
Alicki is perhaps the only physicist engaged in long-term research skeptical towards quantum computers and error-correction. Over the years he has produced several papers and critiques under this
program, coming from several different directions: some based on thermodynamics, others based on various issues in modeling noisy quantum evolutions.
Conjectures on noisy QC’s and error-correction
I suppose readers here are familiar with the basic concepts of quantum computers: qubits, basis states as members of ${\{0,1\}^n}$, superposition, entanglement, interference. My comments in the first
round of discussions are based on several (related) papers of mine, mainly the one linked above (alternate link). A more technical paper is “When noise accumulates.” Here are slides from a related
lecture at Caltech’s Institute for Quantum Information, and an earlier, more-detailed, survey. The feasibility of building quantum computers that can out-perform digital computers is one of the most
fascinating and clear-cut scientific problems of our time. The main concern is that quantum systems are inherently noisy. Roughly what this means for QC’s is that the internal states of quantum
registers may vary unpredictably outside the range that allows the algorithm to continue.
First consider a single classical bit ${b}$ with some probability ${p < 1/2}$ of being flipped when read. For any ${\delta > 0}$ we can improve the odds of correct reading above ${1-\delta}$ by
making and sending enough separate copies ${b,b,b,\ldots,b}$. In case of any flips the reader will take the majority value, and this works provided the error events on the different bits are
independent. For strings of bits there are error correcting codes that achieve the same guarantee more efficiently than making copies, and that can also cope with limited kinds of correlated errors
such as “burst noise” which affects consecutive bits.
For quantum systems there are special obstacles, such as the inability to make exact copies of quantum states in general. Nevertheless, much of the theory of error-correction has been carried over,
and the famous threshold theorem shows that fault-tolerant quantum computation (FTQC) is possible if certain conditions are met. The most-emphasized condition sets a threshold on the absolute rate of error, one still orders of magnitude more stringent than what current technology achieves, but approachable. One issue raised here, however, is whether the errors are sufficiently independent for these schemes to work, or at least have correlations limited to what the schemes can handle. I will now go on to describe my conjectures regarding how noisy quantum computers really behave.
Conjecture 1 (No quantum error-correction): In every implementation of quantum error-correcting codes with one encoded qubit, the probability of not getting the intended qubit is at least some ${\delta > 0}$, independently of the number of qubits used for encoding.
Conjecture 1 does not obstruct classical error correction as described above. The rationale behind Conjecture 1 is that when you implement the encoding of a single qubit by ${n}$ qubits, ${f: x \rightarrow (x_1,x_2,\dots,x_n)}$, noise in the input ${x}$ amounts to having a mixture with undesired code words. The conjecture asserts that, for a realistic implementation of quantum error-correction, there is no way around it. Conjecture 1 reflects a strong conjectural interpretation of the principle that quantum systems are inherently noisy:
Conjecture 2 (The strong principle of noise): Quantum systems are inherently noisy with respect to every Hilbert space used in their description.
The next two conjectures concern noise among entangled qubits—proposed mathematical formulations for them are in the paper.
Conjecture 3: A noisy quantum computer is subject to noise in which error events for two substantially entangled qubits have a substantial positive correlation.
Conjecture 4: In any quantum computer at a highly entangled state there will be a strong effect of error synchronization.
Standard circuit or machine models of QC’s divide the computation into discrete cycles, between which one can identify “fresh noise” apart from the accumulated effect of previous noise. The threshold
theorem does entail that (when the noise rate is under the threshold) for FTQC to fail, these conjectures must hold for the fresh noise. A QC model in which fresh noise shows these effects differs
sharply from the assumptions underlying standard models. I proved that a strong form of Conjecture 3, where “entanglement” is replaced by a certain notion of “emergent entanglement,” implies
Conjecture 4.
Conjectured Limit on Entanglement
The papers argue a few other conjectures regarding how noisy quantum computers behave. One describes noisy quantum evolutions that do not enact quantum fault tolerance, which we skip here. The most
quantitative one is called Conjecture C in the technical paper on noise, C for censorship because it concerns what types of (highly entangled) quantum states cannot be reached at all by such noisy quantum computers.
Consider a QC with a set ${A}$ of ${n}$ qubits. Given a subset ${B}$ of ${m}$ qubits, consider the convex hull ${F}$ of all states that for some ${k}$ factor into a tensor product of a state on some
${k}$ of the qubits and a state on the other ${m-k}$ qubits. For a state ${\tau}$ on ${B}$, define ${ENT(\tau,B)}$ as the trace distance between ${\tau}$ and ${F}$. For a state ${\rho}$ of all the ${n}$ qubits, define ${\widetilde{ENT}(\rho)=\sum \{ENT(\rho|_B): B\subset A\}}$.
Conjecture C: There is a polynomial ${P(n)}$ (perhaps even a quadratic polynomial) such that for any QC on ${n}$ qubits, which describes a state ${\rho}$ (which need not be pure), ${\widetilde{ENT}(\rho)\le P(n)}$.
Here QC can be regarded as a quantum circuit given the initial state ${|0\rangle^{\otimes n}}$.
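As a small numerical illustration of the ingredient used here, the following sketch computes the trace distance appearing in the definition of ${ENT}$ for two hand-picked pairs of states; the hard part of the definition, the minimization over the convex hull ${F}$, is omitted, and the comparison states below are my own illustrative choices, not claimed minimizers:

    import numpy as np

    def trace_distance(rho, sigma):
        # D(rho, sigma) = (1/2) * sum of |eigenvalues| of the Hermitian matrix rho - sigma
        return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

    # Bell state (|00>+|11>)/sqrt(2) on two qubits
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    rho_bell = np.outer(phi, phi)

    print(trace_distance(rho_bell, np.eye(4) / 4))             # 0.75, vs the maximally mixed state
    print(trace_distance(rho_bell, np.diag([1., 0., 0., 0.])))  # ~0.707, vs the product state |00><00|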
Interpreting and Testing the Conjectures
The strong interpretation is that the conjectures hold globally, for any quantum dynamical system on which a QC can be based. The medium interpretation says they hold for processes currently observed
in nature, but human artifice can create systems in which they are false, thus allowing computationally superior QC’s to be built via FTQC. The weak interpretation is that they only make a sharp
distinction between two kinds of QC models, one supporting FTQC and the other not, and that the former kind can be built artificially and also does represent some quantum processes that occur in nature.
I tend to believe in the strong interpretation, namely, that the conjectures are always true. The weaker interpretations can be used to discuss (as we do below) specific proposals for implementing
quantum computation. There are quite a few suggestions on how to build quantum computers based on qubits and gates, and also some suggestions based on computationally equivalent but physically quite
different methods.
Nevertheless, I do not expect a common physical reason why my conjectures should apply for each proposed realization of a QC. Hence the conjectures should be examined, either based on detailed
modeling, or based on experimentation, on a case-by-case basis. Note that they are not about some mysterious breakdown that occurs when you try to scale quantum computers to a large number of qubits.
Conjecture 3 is about the two-qubit behavior of a quantum computer with any number of qubits, and it can be checked (as can the other conjectures) on quantum computers with a rather small number of qubits.
One prominent proposal under which the conjectures can be tested is measurement-based QC employing cluster states. Cluster states can be regarded as code words in a certain quantum error-correcting
code. Once you prepare such states, universal quantum-computing can be achieved by a certain measurement of the state. Conjecture 1 asserts that noisy quantum states created in the laboratory will
involve a mixture of the intended state with other cluster states.
Question 1: Will such noisy cluster states still support universal quantum-computing?
A second proposal is topological quantum computing. Non-abelian anyons that can support universal quantum-computing can also be regarded as codewords in a certain quantum error-correcting code.
Similar to before, the conjecture asserts that when we create such states in the laboratory (in a process that does not apply quantum fault-tolerance) we achieve a mixture of intended codewords with
unintended codewords.
Question 2: Will such noisy anyons be useful for universal quantum-computing?
For these two proposals the special physical gadgets are supposed to be constructible by “ordinary” experimental quantum physics that does not involve quantum fault-tolerance, so they are an
especially appealing testbed for my conjectures where all three interpretations can apply.
Why I Believe My Conjectures
Let me explain why I think that my conjectures are correct—also mindful of this nice post by Shmuel Weinberger on what “a conjecture” means for a mathematician. I regard it as implausible (see below)
that universal quantum computers are realistic, and I think that the issue of noise is indeed the main issue. The strong principle of noise underlying Conjecture 1 strikes me as the right way to
approach noise in quantum systems to begin with. The two-qubit conjecture proposes the simplest dividing line that I can think of between noise that allows fault tolerance and noise that does not.
The conjecture regarding error-synchronization also captures, in my opinion, a very basic obstacle to quantum fault-tolerance. There is an argument from first principles that since error-correction
is possible classically and Nature is really quantum, then error-correction must be possible quantumly. But it strikes me as conflating the settings after-the-fact. In any case, my conjectures allow
classical error-correction and fault tolerance. And, finally, as far as I can see, my conjectures on the behavior of noise do not violate any principle of quantum mechanics.
As an aside, let me briefly say why I tend to regard universal quantum computers as unrealistic. An explanation for why universal quantum computers are unrealistic may require some change in the physical theory of quantum decoherence. On the other hand, universal quantum computers will be physical devices that are able to simulate arbitrary quantum evolutions, where the word “simulate” is understood
in the strong sense that the computer will actually create an identical quantum state to the state created by the evolution it simulates, and the word “arbitrary” is understood in the strong sense
that it applies to every quantum evolution we can imagine as long as it obeys the rules of quantum mechanics. As such, quantum computers propose a major change in physical reality.
Aram Harrow: A Short Response
Although Peter Shor has already been featured on this blog for his famous factoring algorithm, I want to mention an arguably deeper contribution of his to quantum information. After demonstrating
that ${n}$-bit numbers could be factored in ${O(n^2)}$ time, Shor pointed out that this was possible even with noisy gates, as long as each gate’s noise was ${o(1/n^2)}$. (This observation is not totally obvious, and rests on the fact that quantum computers, unlike analog computers, cannot magnify small errors in their amplitudes.) Shor made this point to argue that factoring can be achieved with resources that are genuinely only polynomial, even when counting time, number of processors, energy and precision. When proposing new models of computation, it’s important not to fall into the trap of analog computing, where seemingly innocuous assumptions dramatically change the power of the model.
While requiring noise to scale as ${1/\mathsf{poly}(n)}$ might be theoretically reasonable, it’s not very encouraging if we hope to ever build a large-scale quantum computer. In the mid-1990s, many
disbelieved that quantum decoherence could ever be significantly reduced. Shor (and others) responded to this by developing the theory of quantum error correcting codes (QECC), which protect data in
a manner analogous to classical codes. This requires overcoming several difficulties, such as the no-cloning theorem (which prevents redundant encodings), the fact that measurements cause
disturbance, and the continuous range of possible errors.
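To give a feel for how these difficulties are overcome, here is a toy sketch: not Shor’s code itself, just the three-qubit bit-flip code, with an arbitrary rotation angle and encoded amplitudes of my own choosing. It shows how measuring stabilizers discretizes a continuous error while leaving the encoded data intact:

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])

    def on(op, k):
        # operator `op` acting on qubit k of 3 (qubit 0 is the leftmost factor)
        mats = [I2, I2, I2]
        mats[k] = op
        return np.kron(np.kron(mats[0], mats[1]), mats[2])

    # encoded qubit a|000> + b|111> in the three-qubit bit-flip code
    a, b = 0.6, 0.8
    psi = np.zeros(8, dtype=complex)
    psi[0], psi[7] = a, b

    # a *continuous* error: a small X-rotation on qubit 1
    theta = 0.3
    psi = (np.cos(theta) * np.eye(8) - 1j * np.sin(theta) * on(X, 1)) @ psi

    # measure the stabilizers Z0Z1 and Z1Z2; they commute with the encoded data
    for s0 in (+1, -1):
        for s1 in (+1, -1):
            P = 0.25 * (np.eye(8) + s0 * on(Z, 0) @ on(Z, 1)) \
                     @ (np.eye(8) + s1 * on(Z, 1) @ on(Z, 2))
            out = P @ psi
            prob = np.vdot(out, out).real
            if prob < 1e-12:
                continue
            out = out / np.sqrt(prob)
            if (s0, s1) == (-1, -1):      # syndrome says: qubit 1 was flipped
                out = on(X, 1) @ out
            # either way the corrected state is exactly a|000>+b|111> (up to phase)
            print((s0, s1), round(prob, 3),
                  np.isclose(abs(out[0]), a), np.isclose(abs(out[7]), b))

The syndrome measurement collapses the small rotation into “no flip” (with high probability) or “a definite flip of qubit 1” (with small probability), and in either branch the encoded amplitudes survive untouched.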
Later, Shor (and Aharonov and Ben-Or, and others) extended QECCs to protect dynamic computations, so that fault-tolerant quantum computing (FTQC) could be achieved in the presence of a sufficiently
low, but constant, rate of errors. To be sure, this makes assumptions such as independence that Gil is questioning.
QECC and FTQC are more than an answer to a technical objection; together they describe a potentially new phase of matter. In my opinion, they represent the deepest discovery in quantum mechanics
since Bell’s Theorem. And we have in part the criticism of the quantum computing skeptics to thank for these breakthroughs! I hope the conversation between Gil’s skepticism and the optimism of people
like me will also lead to useful results.
In a later post, I’ll respond in detail about why I believe that the emperor is fully dressed, and large-scale FTQC is possible, not only in theory, but realistically in the not-too-distant future.
But by way of preview, I’ll outline my arguments briefly here.
Response Road Map
1. Any argument that FTQC is impossible must also deal with the fact that classical computing is evidently possible. Just as we know that any ${\mathsf{P}}$ vs ${\mathsf{NP}}$ proof must avoid
working relative to every oracle, we can argue that any proof of quantum computing’s impossibility must somehow distinguish quantum computers from classical computers. This rules out most models
of maliciously correlated errors.
2. The key assumption of FTQC is (approximately) independent errors. Conversely, Gil’s skepticism is based on error models that may have low single-qubit error rates, but are highly correlated even
across large distances. While this possibility can’t be definitively ruled out until we build a working large-scale quantum computer, I’ll give both theoretical and experimental evidence that
such error models don’t occur in nature.
3. Current routes to building quantum computers, such as ion traps and superconductors, nevertheless suffer from correlated errors. I think these correlations aren’t too bad, but they definitely
exist. However, I’ll propose a thought-experiment implementation of a quantum computer, which is not meant to be practical, but where correlated errors are highly implausible.
Open Problems
What are your thoughts on this matter? Please try to be as clear as possible, and if you refer to specific issues raised here this will be especially good. Also, solve Questions 1 and 2.
[fixed intro's conflation of two Einstein-Bohr interchanges]
1. January 30, 2012 12:12 pm
I have to channel Scott Aaronson here, and say that the skeptics of quantum computing are really skeptics of quantum mechanics. In particular, the claim that there is a fundamental limit on
entanglement implies that quantum mechanics, as we currently believe it, is false: for instance, that there is some automatic process that decoheres the wave function at large scales, due to some
nonlinearity in Schrodinger’s equations.
The problem is that experimental searches for such nonlinearities have shown that they are either zero, or incredibly small. Hence the truism that quantum mechanics is, quantitatively, the
best-verified physical theory. And if you believe that QM is correct, then states of arbitrarily large entanglement can be created.
The question about correlated noise is less clear to me, but as Aram hints, a skeptic might find themselves in the position of postulating correlations over very large distances. What would be
the physical mechanism for this?
I am also a little confused by Gil’s motivation for his conjectures. It’s reasonable to take as a physical postulate (as Scott does) that NP-complete problems can’t be solved by a physical device
in polynomial time (and I might add polynomial energy, since in physics you can often trade energy for time). But I don’t see why the factoring problem deserves this status.
Thus I would be much more interested in engaging with the skeptics if, rather than computer science-style conjectures that certain algorithms are impossible, they made conjectures about the
underlying physics, proposing an alternative to QM that would explain the phenomena we have observed while making large-scale quantum computing impossible. I think it’s fair to say that, except
for some rather convoluted and poorly-motivated proposals like Penrose’s that nothing with much mass can be in a quantum state, there is no such alternative theory currently known to us.
Gil also says “Universal quantum computers will be physical devices that are able to simulate arbitrary quantum evolutions, where the word “simulate” is understood in the strong sense that the
computer will actually create an identical quantum state to the state created by the evolution it simulates… As such, quantum computers propose a major change in physical reality.” I don’t think
this is correct. Quantum computers will simulate quantum systems in exactly the same way that classical computers simulate classical ones, namely subject to some encoding and with some amount of overhead.
The weak version of Gil’s conjecture — that the laws of quantum mechanics allow for quantum computing, but that we can never build such a device — may be true. Building a quantum computer will
certainly require huge advances in our ability to control quantum states, and shield them from unwanted interactions. But we have built exotic states of matter before (e.g. semiconductors) and I
see no fundamental reason why we can’t make these advances.
In any case, as theorists it’s the strong version of the conjecture we really care about — do the laws of physics let us compute in BQP, or only BPP? If you believe the latter, you need to
propose some new physics, which can be tested in the laboratory.
□ January 30, 2012 2:03 pm
Thanks for your interesting post, Chris. I suppose some of your points will be discussed later. Here are some comments about a few of them:
“In particular, the claim that there is a fundamental limit on entanglement implies that quantum mechanics, as we currently believe it, is false:”
First, to be precise, we talk about limits on the entanglement achievable by quantum computers. Second, we both do believe in such limits: for example, both you and I believe that a quantum
computer on n qubits cannot achieve a state described by a random unitary operation on all the qubits. This does not make us skeptics of quantum mechanics, right?
Just to make it clear: I do not believe in any nonlinearities in the Schrodinger equation. I am not a skeptic of quantum mechanics, and if my conjectures are in conflict with QM I will happily admit that they are wrong. If you are interested in engaging in a conversation with somebody skeptical of QM you need to find somebody else.
QM is a mathematical language of noncommutative probability which allows us (in principle) to formulate further layers of laws of physics needed to describe our world. It does not allow us to derive all the laws and facts about our physical world. So my view is that QM itself accommodates the possibility of (universal) QCs and it also accommodates the possibility that QCs are
impossible. And the second possibility should be explored.
Correlation over very large distances will reflect entanglement over these large distances.
☆ January 30, 2012 2:45 pm
“…both you and I believe that a quantum computer on n qubits cannot achieve a state described by a random unitary operation on all the qubits”
I can’t speak for Cris, but let me clarify what I think most QC researchers would say to this. Yes, a quantum system *can* be prepared in a random state in the total Hilbert space. You
will have to wait an exponential amount of time to prepare this, but there is no problem in principle. In practice, you will need to encode the state to protect it from errors, but this
part is also true classically.
If you wish to limit yourself to what is achievable in polynomial time, then that’s okay too. You can prepare states using unitary t-designs for t ~ poly(n), so that with a polynomially
large number of copies of the state, the statistics you get from measuring them would be indistinguishable from the t-th moments of the Haar measure. Preparing these states takes a
polynomial amount of time on a quantum computer. So in a rather strong operational sense, these states are indeed random.
☆ January 30, 2012 6:20 pm
Gil, I’m still confused. You said:
“First , to be precise, we talk about limits for entanglements achievable by quantum computers.”
I take it by “quantum computer” you mean a physical system that can make a series of simple unitary operations. Such things certainly can generate arbitrary amounts of entanglement, and
do so efficiently.
“Second, we both do believe in such limits: for example both you and I believe that a quantum computer on n qubits cannot achieve a state described by a random unitary operation on all
the qubits. This does not make us skeptics of quantum mechanics, right?”
Well, as Steve says there are pseudorandom states that we can achieve efficiently, and that are highly entangled. So I still believe that the claim that we can’t achieve high degrees of
entanglement (even with a series of poly(n) unitary operators, each of which only couples a constant number of qubits) is inconsistent with QM.
☆ January 31, 2012 8:09 am
Gil, related to Steve’s reply, is it the case that according to your conjectures there are certain quantum states that simply cannot be (approximately) generated, even in exponential time,
and even for small values of n (maybe even n=2)?
☆ January 31, 2012 11:52 am
Closely related to Boaz’ excellent and very natural informatic question, “Are there certain quantum states that cannot be (approximately) generated?” is its thermodynamically dual
question “What is the entropy function of a quantum dynamical system in which certain states cannot be (approximately) generated?”
It was Gil himself who (implicitly) suggested this latter question, in his recent Theoretical Physics StackExchange question “Onsager’s Regression Hypothesis, Explained and Demonstrated” …
certainly Gil’s TPS question relates very naturally to core issues of this wonderful Kalai/Harrow debate.
These entropic issues are subtle at the classical level (via the Boltzmann entropy function), and they are even subtler at the quantum level (via the von Neumann entropy function). Thus
it is far from obvious (to me) what “Kalai entropy function” might describe the thermodynamical behavior of qubit systems for which Kalai’s Conjectures 1–4 hold true.
Prior to grappling with this tough class of problems, younger researchers in particular may wish to reflect upon David Goodstein’s celebrated introduction to his textbook States of Matter:
“Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics. Perhaps it will be wise to approach the subject cautiously.”
With due regard for the sobering subtlety and difficulty of these fundamental informatic/thermodynamic questions — and yet with regard too for their immense fundamental and practical
importance — the possible mathematical forms of a Kalai entropy function, and the physical interpretation of the thermodynamical potentials derived from it, is the kind of problem that many
folks (well, me anyway) would very much enjoy seeing discussed by Gil and Aram.
□ January 31, 2012 5:50 am
Christopher reminds us: The problem [with nonlinear dynamics] is that experimental searches for such nonlinearities have shown that they are either zero, or incredibly small.
Christopher, as with most “no-go” statements, it is prudent to parse the assumptions with care. Let’s use the language of geometric dynamics to do that parsing. Thus we appreciate:
(1) Experiments (to date) give us good reason to reject quantum dynamical theories that specify nonsymplectic flow on flat Kähler (i.e. Hilbert) state-spaces. And this is reasonable, because
the Second Law fails to hold in these systems.
(2) Experiments (to date) give us little reason to reject quantum dynamical theories that specify symplectic flow on nonflat Kähler (i.e. non-Hilbert) state-spaces. And this is reasonable, because the Second Law holds rigorously in these systems.
We thus are led to seek dynamical manifolds on which global conservation laws hold perfectly, the Second Law holds rigorously, and local quantum dynamics pulls-back naturally. Such manifolds
exist, and over the decades they have been called by various names, of which “product state” is presently among the most common.
Unsurprisingly, product state-spaces have become immensely popular among quantum researchers: the arxiv server presently holds more than 5000 “product state” preprints, and their number is
increasing rapidly with no upper bound yet apparent.
What is interesting in the context of the present GLL discussion is that product states constitute a concrete class of physical theories in which quantum mechanics is respected locally and
Gil Kalai’s Conjectures 1-4 are respected globally.
Thus, two physically interesting and mathematically natural questions (which I take to be the main topic of this GLL post) are:
(A) Are experimental state-spaces effectively product spaces?
(B) Are experimental state-spaces rigorously product spaces?
Pragmatically speaking, if we refer to the quantum literature, we see that Question A has been more-or-less settled: the short answer is “yes” — although obviously there is an immense body of
research remaining to be done, and that research is associated to practical problems having global importance and great urgency.
Question B remains open, and it turns out that the key issues are associated to the fundamental limits of informatic causality, spatial localization, and relativistic invariance … which are
of course precisely the limits where Hilbert-space quantum dynamics encounters still-unresolved difficulties. The simulationist perspective on these fundamental quantum dynamical issues is
(of course) evolving rapidly, and it is reasonable for younger researchers to foresee that later in the 21st century there will be GLL posts that are specifically concerned with these topics.
Regardless of one’s opinion regarding Question B — which is such a deep, difficult, unsettled, and contentious question that opinions are as much a matter of faith as of physics — it is
evident that Gil Kalai’s four conjectures help greatly to illuminate our practical understanding of Question A … and for setting forth these conjectures Gil surely is deserving of all of our
appreciation and thanks.
For all of the preceding reasons, it seems (to me) that this is one of the best GLL posts ever! :)
2. January 30, 2012 1:48 pm
I had a thought that Heisenberg’s Uncertainty Principle is the result of the laws of physics being digital in the sense of Fredkin and Zuse:
Consider the relation ${\Delta x \cdot \Delta p \ge h/4\pi}$.
In a digital universe, there is a finite amount of memory in the computer controlling the universe, so it makes sense that as delta x gets small enough, the computer controlling the universe
would run out of memory, forcing delta p to be bigger.
It is known that it is possible to derive the laws of quantum mechanics from Heisenberg’s Uncertainty Principle, so the digital physics hypothesis would explain why quantum mechanics has been
found to hold true. http://arxiv.org/abs/quant-ph/0201084
Has my idea been proposed before?
I think the digital universe hypothesis as an explanation for quantum mechanics makes the most sense and I also think it implies that quantum computing on a large scale is impossible, as the
“great computer in the sky” doesn’t have enough memory to perform it. What do you think?
□ January 30, 2012 2:15 pm
I think you could say the universe is digital in the sense of being made of qubits, but not of bits. Bits don’t work because otherwise n particles would require exp(n) bits to describe, and
also because of Bell’s theorem. But qubits are a perfectly fine way to discretize information.
☆ January 30, 2012 2:53 pm
No, I mean bits. I claim that the “great computer in the sky” (most likely some type of cellular automaton) wouldn’t be able to handle calculations for n particles in a digital universe
when n is large, say n=1,000, because exp(n) would be too large. So quantum mechanics would fail for large entanglements in a digital universe. This is why I think large scale quantum
computing wouldn’t work in our universe, because I think our universe is digital.
The laws of quantum mechanics would only work for interactions between a small number of particles. For instance, the Young double-slit experiment could easily be modeled in a digital universe by a cellular automaton.
3. January 30, 2012 2:58 pm
Correct me if I’m wrong, but aren’t you confusing two different thought experiments of Einstein? I hadn’t heard much of his criticism of BKS before, but that appears to be a decade earlier than the ‘Clock in the box’ idea, which Bohr countered with an appeal to general relativity. I would add that the idea that Einstein didn’t understand the consequences of his own theory isn’t really fair. The point is that just as there is a ‘position-momentum’ uncertainty, for a clock there is an ‘accuracy-mass’ uncertainty, which will turn up however you try to measure the mass.
□ January 30, 2012 3:58 pm
You are right—having remembered Bondi and the Solvay 1930 debate and “perpetual motion”, I expected the Bohr-K-S reference (which 1-1/2 pages earlier says “Bohr had stung Einstein in BKS with
one of Einstein’s great ideas”) to be the same thing. I’ve fixed it above, thanks. A recent description that’s probably “fairer” is this by Paul Davies, About Time.
On the overall discussion, my take is that whether highly entangled (non-random) states can be created is not the issue. It’s the effort required to create them, and whether this creates a double-bind: either they’re hard, or they’re easy enough that we may expect to find them plentiful in Nature, especially after long evolutions. This strikes me as a QC issue, not a QM one. I know Gil is saying some other things, and I have some other things too…in due time. Out of left field there’s also “quantum discord,” said to give correlations without entanglement.
□ January 30, 2012 7:37 pm
From Walter Isaacson, Einstein, p349 top re. 1930 Solvay: “It was one of the great ironies of scientific debate that, after a sleepless night, Bohr was able to hoist Einstein by his own
petard. The thought experiment had not taken into account Einstein’s own beautiful discovery, the theory of relativity…Einstein forgot this, but Bohr remembered.” Apparently no mention of the
BKS exchange.
4. January 30, 2012 3:14 pm
These conjectures would be new physical laws, but there is neither experimental evidence supporting them nor any known physical mechanism that would explain them. All relevant experiments suggest the opposite, and any new physical mechanism going beyond quantum mechanics would be bigger news than anything the LHC might ever find. Conjectures 3 and 4 would contradict relativity! The arguments seem to come down to: quantum computers can’t exist because then factoring would be easy.
Factoring is easy.
5. January 30, 2012 3:52 pm
Oh boy! Gil and Aram are two researchers whom I respect greatly … so this is one debate that (IMHO) both sides are sure to win.
How can both sides win the debate? Easily … even naturally!
Suppose that an omniscient oracle revealed to us that Hilbert-style state-spaces are the state-spaces of Nature. Still it might happen that a Kalai-style (non-Hilbert) state-space is (a) an
excellent approximation to Nature’s state-space, and (b) far easier to simulate than Hilbert space.
Such Kalai-style physics in quantum dynamics might (for example) resemble von Neumann’s artificial viscosity in fluid dynamics. In practical computations von Neumann viscosity makes Navier-Stokes
systems far easier to simulate, at the price of dynamical imprecision that is (for many purposes) entirely acceptable.
Since no such omniscient oracle is available, it seems entirely plausible (to me) that at the end of the 21st century, we will still be debating whether “Nature’s state-space is Hilbert yet
practical simulation state spaces are Kalai”, versus “Nature’s state-space is Kalai, yet for theoretical convenience the Hilbert approximation is exceedingly useful.”
Elevator Summary: One plausible outcome of the Kalai/Harrow debate is (a) yes quantum computers will be feasible in the 21st century, yet (b) they won’t work by physics we are entirely sure of,
or compute by algorithms that we presently conceive.
□ January 30, 2012 4:10 pm
To be concrete, in seminars we teach our engineering students practical methods for compromise: we pullback the old testament of Hilbert and Dirac onto the new testament of Onsager and Kalai.
☆ January 30, 2012 8:21 pm
Beautiful comparison chart. Meanwhile, I love the way Onsager’s gravestone, next to one listing many accomplishments of the deceased, says just “Nobel Laureate*” with the * added later by
his children, and “*Etc.” below.
☆ January 30, 2012 8:51 pm
Yes, there are many famous Onsager stories: students called Onsager’s graduate-level course “Sadistical Mechanics”, for example!
□ January 30, 2012 5:07 pm
Hmmm … one further way to describe a Harrow/Kalai middle ground is this: The gods of mathematics have blessed us with a superabundance of dynamical state-spaces for which (a) well-established global conservation laws hold exactly, (b) local dynamical flows pullback approximately, and (c) thermodynamical principles hold rigorously.
So for the present, perhaps the question “Is Hilbert-space quantum mechanics the true state-space of Nature?” is too hard for us to be entirely confident of near-term progress (although
surprises surely are welcome) … a more practical use of our energies may be to construct, survey and appreciate the (many!) exceedingly useful dynamical state-spaces with which (as a
burgeoning arxiv shows us) our 21st century has been so abundantly and unexpectedly blessed.
Needless to say, these middle-of-the-road views leave good reason to hope for a vigorous debate here on GLL between the “Kalaists” and the “Harrowites” … in the confident expectation that
both sides will win! :)
6. January 30, 2012 4:25 pm
Dear Sirs;
You can weather one more deluded. See my updated web site, section “perpetuum”
BOOK 3
7. January 30, 2012 4:42 pm
The perpetual motion analogy is a good one. Quantum computer researchers keep claiming progress, but it is like the progress of free-energy researchers. Yes, they may find some slight gain in
efficiency, but the research says nothing toward showing that the goal is attainable.
It is just not true that the skeptics of quantum computing are really skeptics of quantum mechanics. All of quantum mechanics can be true without quantum computers.
8. January 31, 2012 7:45 am
Gil, I understand from this you believe that physically realizable quantum computers offer no superpolynomial speedup over classical computers, and hence there is a polynomial time algorithm to
simulate them. Do you have a sense of what this algorithm is?
□ January 31, 2012 11:04 am
Boaz, as you see in the post we mainly consider quantum error correction and do not discuss computational issues. Do you ask if quantum computers that satisfy Conjectures 1-4 can be simulated classically? (The answer is that I don’t know.) Or is your question different?
☆ January 31, 2012 12:49 pm
Gil, I thought the whole point of these conjectures was to argue that the computational power of physically realizable quantum computers falls short of BQP. So, I’m just trying to
understand if you think that their power is exactly like classical computers (i.e., the uncontrolled noise helps simulate them classically, perhaps somewhat analogously to the situation
described in this paper), or you think that they are stronger than BPP but weaker than BQP. I guess the answer is that you don’t know.
I would just comment that if the latter condition is true, then qualitatively it doesn’t change the “conventional wisdom” on quantum computers: they offer super-polynomial speedups over classical computers on some, but not all problems. Whether integer factoring is one of those problems is of course very interesting, but in some sense a question that is not so fundamental.
☆ January 31, 2012 1:35 pm
Dear Boaz, As is also mentioned in Aram’s remark, the feasibility of quantum error correction is in several respects closer to various questions in physics and may be more relevant to
deep aspects of physics than the computational complexity issue. Probably my conjectures “should” push the computational power to BPP. We will discuss related things later.
☆ January 31, 2012 3:25 pm
Thanks Gil – I’m looking forward to your future posts. I know very little about quantum noise models and error correction, so am using my (probably flawed) intuition from classical
computation that in principle one should be able to reduce the noise to an arbitrarily low level which is some function of the budget you have to spend. I guess this is in sharp contrast
to your Conjecture 1, and I’m very curious to understand this.
I’m also wondering about your statement that you “do not expect a common physical reason why my conjectures should apply for each proposed realization of a QC”. To rephrase (combining
this with your statement that the conjectures should push the computational power to BPP) it seems you’re saying that a classical computer can simulate in polynomial time every real-world
quantum system, but there isn’t any unifying reason why this happens.
☆ February 3, 2012 10:30 am
Dear Boaz,
Regarding the computational complexity aspect. It is possible that my conjectures (including a conjecture about noisy quantum evolutions which is in the papers and not in the posts) push
the computational power of noisy quantum computers that satisfy them to BPP. This is largely because of the additional side effect of error-rate scaling up. But I am not sure about it.
And I certainly cannot prove it.
There are various subtle issues regarding the computational power of quantum systems: it is an unproved conjecture that entanglement is needed for computational speedup; it is not clear that decision problems capture the full computational power of quantum computers; it is not even known if distributions achievable by depth-2 quantum circuits give a computational advantage; it is not known that irreversible noisy quantum computation reduces to BPP, and as a matter of fact I believe that under the standard noise models noisy irreversible quantum computing allows
Still it is possible that a definite result can be proven, and perhaps the first step would be to prove that some combination of my conjectures reduces the computational power to BPP in the reversible case. This would already be very interesting.
If you just conjecture that quantum noise has crazy correlations without the effect on the rate, then I observed that log-depth quantum circuits survive. This falls well below BQP but
still allows factoring!
I don’t see a connection between the claim that the noise will satisfy my conjectures, but possibly for different physical reasons depending on the implementation, and your assertion that if the conjectures imply a reduction to BPP there will be no unifying reason for why, in this case, a classical computer can simulate the quantum system. (The conjectures themselves, if indeed sufficient to give a BPP reduction, will provide a unifying reason for all the systems satisfying them.)
Indeed a crucial aspect of the discussion is about the possibility of transferring several intuitions from classical computation to the quantum case. (Those are crucial issues about computation, although they do not necessarily belong to computational complexity.) We will come back to this.
9. January 31, 2012 7:56 am
The current issue of “Scientific American” has an article titled:
“Is Space Digital?”.
The article describes an experiment that might settle the question empirically.
10. January 31, 2012 9:00 am
There are still many interesting items on Chris’ first post. So I will comment slowly on those points that we do not plan to discuss later, and some subsequent remarks.
1) “The weak version of Gil’s conjecture — that the laws of quantum mechanics allow for quantum computing, but that we can never build such a device — may be true.”
Chris, this is more or less the strong interpretation/version of the conjectures, not one of the weaker ones. (Of course, my conjectures refer to specific properties of noisy quantum systems, and
not just to a prophecy if QC will be built or not.)
2) “I take it by “quantum computer” you mean a physical system that can make a series of simple unitary operations. Such things certainly can generate arbitrary amounts of entanglement, and do so efficiently.”
Yes, I referred to a “realistic noisy quantum computer,” namely a physical system that can make a series of simple unitary operations under realistic noise assumptions. Indeed this is the crucial question: whether a realistic noise model allows an arbitrary amount of entanglement (in the sense of Conjecture C, say). Standard noise models (and of course the noiseless model) do allow it.
3) I somehow disagree with Steve’s statement that the state approximating the action of a random unitary operator on all our qubits can be built in principle from local operators (but this may be a semantic matter). We have limitations on the states we can realistically generate by a computational principle.
Similarly the standard models of noise also give some limitation to states that can be generated. These limitations are consequences of further assumptions which are not part of the “laws of
quantum physics”. For the standard noise models these limitations restrict the states that we can generate but not the computational power. So to Boaz’s question, even standard noise models
restrict the type of states we can generate (also for a small number of qubits).
…Unless you mean by “we can generate” also what can be generated with exponentially small probability.
…Or unless you refer to encoded states, in certain senses. (What do you mean?)
(I am traveling so I will return to Boaz’s question, and other items on Chris’s, and questions later in the discussion in a couple of days).
□ January 31, 2012 12:09 pm
Gil, one of my difficulties with your conjectures is that physics (even QM) doesn’t have noise per se: it has lots of physical processes, and interactions with the environment, some of which
we can control in practice and some of which we can’t (and this boundary is constantly moving as engineering improves).
I think you agree that in a noiseless situation, we can generate states that are arbitrarily entangled. We also know how to defend against noise that is independent. So you are making a
conjecture that in every real system, there are highly-correlated sources of noise that will prevent us from making highly-entangled states and doing long-term quantum computation. Why in the
world would this be true? Where does this noise come from? What physical processes generate it?
Syntactically, your conjecture seem to be a bit like this: “We know that the laws of hydrodynamics could, in principle, allow for heavier-than-air flight. However, turbulence is very
complicated, unpredictable, and hard to control. Since heavier-than-air flight is highly implausible, we conjecture that in any realistic system, correlated turbulence conspires to reduce the
lift of an airplane so that it cannot fly for long distances.”
Forgive me for poking fun, but doesn’t that conjecture have a similar flavor?
☆ January 31, 2012 12:43 pm
Christopher, I have taken the liberty of lightly amending your example as follows:
“We know that the laws of plasma dynamics could, in principle, allow for practical fusion reactors. However, plasma turbulence is very complicated, unpredictable, and hard to control.
In practice, correlated instability modes conspire to reduce confinement times such that fusion energy-gain is far less feasible than originally conceived.”
And this amended example happens to be true.
The great 20th century Soviet plasma physicist Lev Artsimovich was fond of assuring the public: “Fusion will be there when society needs it.” Well, now in the 21st century we need clean
power urgently … and fusion power is not here. This is a concrete and sobering lesson in how easy it is for a large scientific community (with undoubtedly the best of intentions) to
seriously underestimate the dynamical subtleties of noise, instability, and chaos.
As an emerging STEM discipline, quantum systems engineering is about much more than computing processes, and seeks to solve problems more urgent than decryption. To the extent that Gil’s
conjectures 1–4 remind us of how easy it is to underestimate the effects of noise, instability, and chaos, these conjectures are providing valuable service.
☆ January 31, 2012 5:16 pm
Chris, I think that the heavier-than-air flight example is perfectly reasonable. There are various such examples, some went this way and some went that way, and some are still undecided. It is hard to draw definite conclusions from analogies of this kind.
I think, if I may, that Conjecture 1 is interesting, as it says things about states that can be experimentally created or that people are working on creating. You need not accept it as a conjecture about how things always are. You can simply think about it as a conjecture on how things are for processes that do not enact fault tolerance. This is relevant because there is no reason to assume that current experimental processes involve a QFT mechanism. I think this is a different way to think about noise models for specific states of interest.
You asked “where does this noise come from?” The proposed answer is that unless you interfere with the process, this is how the noise will look (independent noise needs to be suppressed in order to create these states).
The issue of your first paragraph that physics does not have noise per se is interesting. I am quite interested what experts think about it.
11. January 31, 2012 4:00 pm
As my name was mentioned by Gil, I think I should present my view on the discussed issue.
First of all, I like the title of this debate corresponding to my last preprint on this topic – “Quantum memory as a perpetuum mobile of the second kind” – because I believe that the impossibility of fault-tolerant quantum computations (FTQC) should follow from the existing, perhaps refined, laws of thermodynamics. However, we do not have a universal proof of the second law of thermodynamics, but rather a large and convincing body of partial results for classes of idealized physical systems. Similarly, we can only analyze the existing evidence for two types of quantum computation models (I think that all schemes of quantum computation are isomorphic, including their problems):
I) Standard theory of quantum circuits with “threshold theorems”.
Here, the phenomenological models of noise are drastically oversimplified. Everyone who tries to (properly) derive quantum master equations for open quantum systems can see that the terms describing the irreversible effects of the environment depend not only on the environment’s parameters and the coupling of the system to the environment, but also on the system’s Hamiltonian. The more complicated (more “entangling”) the self-evolution (“algorithm”) of the system, the more involved the response of the environment (“noise”). This is also true for quantum circuits described in terms of time-dependent Hamiltonians, and can be seen as a physical counterpart to Gil’s conjectures. Notice that this effect is missing for classical systems. To describe the influence of a heat bath it is sufficient to add a friction force and a white-noise Langevin force, both independent of the system’s Hamiltonian.
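To make that classical contrast concrete, here is a minimal Euler-Maruyama sketch (all parameter values are illustrative choices): the friction and white-noise terms below never reference the potential, yet equipartition comes out the same for two quite different system Hamiltonians:

    import numpy as np

    rng = np.random.default_rng(0)

    def langevin_step(x, v, force, dt=1e-3, gamma=1.0, T=1.0):
        # Euler-Maruyama step of dv = force(x)*dt - gamma*v*dt + sqrt(2*gamma*T*dt)*xi
        # (unit mass, k_B = 1); the friction and noise terms are the same whatever
        # force() is -- the classical bath "does not care" about the Hamiltonian
        v = v + (force(x) - gamma * v) * dt \
              + np.sqrt(2 * gamma * T * dt) * rng.standard_normal()
        x = x + v * dt
        return x, v

    # same bath, two different system Hamiltonians (potentials)
    for force in (lambda x: -x, lambda x: -x**3 + x):
        x, v, acc, steps = 1.0, 0.0, 0.0, 200000
        for _ in range(steps):
            x, v = langevin_step(x, v, force)
            acc += v * v
        print(round(0.5 * acc / steps, 3))   # ~ T/2 = 0.5 by equipartition, either way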
II) Self-correcting systems (e.g. topological)
For these models (because of the absence of “wave propagation”) one can rigorously derive Markovian master equations describing thermal noise and rigorously analyze the relaxation processes. For
example, one can prove that:
1) There exist encoded qubits which are stable in the sense that their life-times grow exponentially with the size of the system (4D-Kitaev model, and others). The mechanism of stability is
essentially classical – “macroscopic” free energy barriers between different states, below certain critical temperature.
2) For some models one can transversally realize some of the basic gates on encoded qubits.
However, one cannot even design a quantum memory using the above stability properties. Namely, the (transversal) gates realized as continuous operations driven by certain local Hamiltonians
interfere with the protecting mechanism and produce irreversibility. I think that there is a universal (thermodynamical?) conflict between stability of encoded information and reversibility of
the gates. It does not harm classical computations which are based on irreversible gates but is deadly for the quantum ones which need unitary gates.
Summarizing, there is no hard evidence, neither theoretical nor experimental, in favor of FTQC; on the contrary, the analysis of existing models suggests its infeasibility. Nevertheless, the criticism of FTQC is strongly suppressed by the community. One can find the wildest speculations published in the most prestigious journals, while even well-motivated criticism is usually rejected. The King is naked!
Robert Alicki
□ January 31, 2012 4:54 pm
Thanks! Here’s a link to your perpetuum mobile preprint, which I somehow missed while going through your arXiv list to add links to Gil’s original draft. This was before Dick suggested the
PMM analogy.
12. January 31, 2012 4:49 pm
Robert Alicki asserts: “Criticism of FTQC is strongly suppressed by the community.”
This assertion calls to mind numerous parallel assertions along the lines of “plasma instability modes $\Leftrightarrow$ fusion reactor community”; “epigenomic regulation $\Leftrightarrow$
genomic community”; “tomography $\Leftrightarrow$ radiology community”; “Onsager theory $\Leftrightarrow$ thermodynamics community”; “density functional theory $\Leftrightarrow$ quantum chemistry
community”; “KAM theory $\Leftrightarrow$ ergodic dynamics community”; “sociobiology $\Leftrightarrow$ sociology community”.
One wonders whether substituting “under-appreciated” for “strongly suppressed” might foster a more productive mind-set for the QM/QC/QIT community?
Certainly we have the inspiring personal examples, and the wonderfully innovative mathematical toolsets, of Kolmogorov, Onsager, Wilson, Arnol’d (etc.) to assure us that beautiful mathematics and
physics reside within noise, instability, renormalization, and chaos … and we can be confident that further rich discoveries and wonderful practical applications await our persistent inquiry.
Moreover, it is entirely conceivable (and to my mind, even likely) that the societal benefits that already are associated to our QM/QC/QIT-inspired understanding of noise, instability, chaos, and
simulation algorithms (etc.) are already comparable to (and may even exceed) the foreseeable benefits that quantum computing technologies are ever likely to provide.
Thus, one productive path forward is not a confrontational opposition between quantum computing skeptics and nonskeptics, but rather a mutually inspiring partnership.
13. January 31, 2012 6:16 pm
To the question “Is a quantum computer feasible?” – Didn’t they build a clunky machine which said 15 = 5 x 3?
□ January 31, 2012 7:34 pm
Please let me be among the first to acknowledge and congratulate you, your colleagues, and the D-Wave Corporation, on the achievements that are reported in the recent arxiv preprint
“Experimental determination of Ramsey numbers with quantum annealing” (arXiv:1201.1842). Perhaps the era of “quantum computers that are feasible in the 21st century, that work by physics we
aren’t entirely sure of, and compute by algorithms that we don’t presently conceive” … is arriving sooner than many expected? In any case, congratulations!
15. January 31, 2012 7:39 pm
Thanks John. BTW I side with Gil in the debate going on here, but for different (but related) reasons. The ideas underlying the circuit model of quantum computation are not good ideas, and I
don’t think useful quantum computers based on circuit model ideas will ever be built. But this doesn’t mean that using quantum mechanics to make better computers won’t work.
□ January 31, 2012 8:07 pm
Geordie, I agree with much of what you say — and I appreciate vigor in exposition too — but have a concern that blanket assertions like “the ideas underlying the circuit model of quantum
computation are not good ideas” may obstruct / distract / confuse students (in particular) from appreciating that all the attendees at this scientific banquet are bringing good ideas to the table.
□ January 31, 2012 8:10 pm
“The ideas underlying the circuit model of quantum computation are not good ideas, and I don’t think useful quantum computers based on circuit model ideas will ever be built.”
Can you substantiate?
16. January 31, 2012 9:52 pm
@John & Marcin: let me give you four simple examples of why the circuit model is not a good model for building quantum computers.
(1) During a circuit model computation, the Hamiltonian for most of the qubits is designed to be zero. This means that for these qubits, the noise term in the Hamiltonian dominates their dynamics. For the majority of the qubits for the majority of a computation, the dominant term in the Hamiltonian is the noise term *by design*. This is a bad idea. The underlying reason for the inherent stability of the adiabatic model against noise is that in the adiabatic model the Hamiltonian of the computation is always, or nearly always, dominant over noise.
(2) The gate model tacitly assumes that decoherence is basis independent. This is not the way real / open quantum systems behave. In open quantum systems, there is a preferred basis — the energy eigenbasis. The definition of things like T_1 and T_2 is relative to the energy basis. While in a circuit model algorithm you are expected to be able to create arbitrary superpositions of states in any basis (usually written explicitly in the readout basis but you can also write these in the energy basis), superpositions of energy eigenstates decohere extremely quickly (this is what you measure when you measure T_2, for example). However this does not mean that superpositions in other bases decohere quickly. For example, if you have an entangled ground state and the energy gap to the first excited state is much larger than the other energy scales (like temperature), the entanglement in that state is an equilibrium property of the system, and persists for a long, long time — even if the system would decohere very quickly if you attempted to create a superposition of the ground and first excited states a la circuit model.
(3) The circuit model requires high frequency, high precision signals to perform gates, and you need to be able to apply a large number of these. This is a huge practical issue. If all you care about is the ultimate computing power of the universe (maybe) you don’t need to worry about this. But this one will forever prevent, say, superconducting circuit model quantum computers from being built. There are no known ways of getting millions of fully programmable microwave lines into a superconducting chip.
(4) Quantum error correction protocols vastly blow up the physical resource requirements into territory that makes absolutely no sense. While again it may be possible in principle to do this, in practice it’s simply a bad use of precious resources when there are other more effective ways to use quantum mechanics to build better computers. The amount of additional control circuitry and associated peripherals you need to implement even simple quantum error correction is really ludicrous. Sometimes people think that the main cost is simply the number of qubits — say multiplying the number of qubits by 7^3 or something. But that is not nearly the whole cost. You also need all of the high frequency, conditional control + active circuitry + all of the circuits to couple the qubits + etc. etc. etc. in order to make the whole thing work. Could you do this? Yes, if what you mean is “in principle”. In practice? No. This type of system is like a ladder to the moon. Do any of the laws of physics prevent us from building a ladder to the moon? I don’t think so. Will we ever build one? No, because there are much better ways to get to the moon.
□ February 1, 2012 11:05 am
Geordie, as I read your post I found myself nodding my head and repeating the punch-line of a centuries-old Judaic joke: “You know, Geordie is absolutely right!”
And therefore, we all can be confident (or at least hopeful) that the Great Truths that your post has voiced, and that D-Wave’s achievements have demonstrated, will be matched by Great Truths
in posts-to-come by older-school QM / QC / QIT researchers.
By the process of this debate, we may all expect (or at least be hopeful) that the “dual” in “dualing quantum narratives” will transmogrify from the adversarial dualing of formal debate to
the creative duality of formal mathematics.
This is why, for purposes of catalyzing creative enterprises, plenty of folks (and I am one of them) have a particularly high regard for the complexity class that contains Calvinball. :)
17. February 1, 2012 6:23 am
Gil’s conjectures appear in danger of falling into the trap of forbidding classical computation. In particular, conjecture 4 seems to be demonstrably false (the conjectures are somewhat
ill-defined as written here, so perhaps Gil means something different than I am taking them to mean).
To see this, note that we can choose a basis on the Hilbert space such that subsystems are defined non-locally. We can do this in such a way as to assign maximally entangled states in the new separation into subsystems to classical basis states. For example, we can choose the 4 EPR states as corresponding to each of the 4 possible 2-bit strings, so that classical errors still correspond to 1-local errors. Similarly, we can choose a partitioning into subsystems of an n-qubit string such that all classical strings are taken to represent the superposition of that string + its negation, with the phase determined by the first bit of the string. Such a representation yields maximally entangled states (with respect to that basis) which are only subject to errors (which remain 1-local) via classical bit-flips. Thus conjecture 4 is demonstrably false when applied to such a partitioning.
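A minimal numerical sketch of the two-qubit case (my own illustration of the relabeling idea, with arbitrary helper names, not code from any of the papers under discussion): in the Bell basis, a single-qubit bit flip merely permutes the four basis labels, i.e. it acts as a 1-local classical error on the 2-bit label.

    import numpy as np

    # The four Bell states, labeled by 2-bit strings as in the relabeling above.
    def ket(bits):
        v = np.zeros(4)
        v[int(bits, 2)] = 1.0
        return v

    s2 = np.sqrt(2.0)
    bell = {
        '00': (ket('00') + ket('11')) / s2,   # Phi+
        '01': (ket('00') - ket('11')) / s2,   # Phi-
        '10': (ket('01') + ket('10')) / s2,   # Psi+
        '11': (ket('01') - ket('10')) / s2,   # Psi-
    }

    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    X1 = np.kron(X, np.eye(2))                # bit flip on the first qubit only

    # A 1-local error permutes the entangled basis (up to a phase): it simply
    # flips the first bit of the label.
    for label, v in bell.items():
        for label2, u in bell.items():
            if abs(abs(np.vdot(u, X1 @ v)) - 1.0) < 1e-12:
                print(label, '->', label2)

Running this prints 00 -> 10, 01 -> 11, 10 -> 00, 11 -> 01: the physical bit flip looks exactly like a classical 1-local error on the entangled labels.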
However, to exclude such partitionings is to exclude classical computation with entangled energy eigenstates, which is contradicted by virtually all classical computers we have. The coupling Hamiltonian which makes materials stick together also almost universally produces entangled ground states. Indeed, electrons in solids are virtually always in extremely entangled states with respect either to the atomic energy eigenstates or to spatial modes.
The reason why classical computing works, and quantum computing is usually hard, is due to the fragility of superpositions with respect to dephasing noise relative to the robustness of energy
eigenstates. However the assumption that this corresponds to unentangled states is demonstrably false: many molecules exist with extremely entangled states even at high temperature. A simple
example of this is the ground state of a Heisenberg antiferromagnet. Even for two spin qubits this has a maximally entangled ground state. Further, the coupling in such systems is very often
extremely strong compared to its coupling to the environment.
□ February 1, 2012 4:44 pm
Dear Joe,
Thanks a lot for the comment. The issues
1) Do my conjectures (or perhaps even any similar attempt) cause classical computers to fail?
2) Why is it that classical error correction (and computing) is so easy and common while quantum error correction (and computing) is so hard?
will be central to our continued discussion.
For a rigorous and most recent formulation of the conjectures you may look at my paper http://gilkalai.files.wordpress.com/2009/12/qi.pdf (“When noise accumulates”). (Unfortunately the numbering of the conjectures has changed, but they will be easy to trace.)
Your example regarding Conjecture 4 (of this post) is **very interesting** and I will do my best to understand it. If true it shows at the very least that something is very wrong with the
formulation of that conjecture. However, please do explain it in more detail and also please look at the formal conjecture and, in particular, at the meaning of “highly entangled state”.
Also, what do you refer to as “maximally entangled states (with respect to that basis)”?
In your last paragraph, I am not sure that when you talk about “extremely entangled states” you mean what I mean. For example, in what sense are the molecules you mention “extremely
entangled”? This is also very interesting.
“The reason why classical computing works, and quantum computing is usually hard, is due to the fragility of superpositions with respect to dephasing noise relative to the robustness of
energy eigenstates.”
I suppose we would like to have an explanation which is not based on the distinction between bit-flip errors and phase errors. This distinction is based on some symmetry breaking which itself should be explained.
☆ February 1, 2012 11:34 pm
Hi Gil,
Thanks for your response. I think it is only fair if I read the paper you mention to be sure we are talking about the same thing, so I will try to do so later today.
What I was trying to get at is that entanglement is only defined once we pick a particular partitioning of our system into subsystems. Without such a partitioning you merely have at most a superposition, and even this depends on your choice of basis. This may seem trivial, but there is often more than one natural partitioning of the system, and whether or not you have entanglement is merely a matter of what you choose to label as subsystems.
An obvious example of this is a single photon put into a superposition of two modes. This is often used in linear optics implementations to encode a single qubit (and hence is a pure
state of the single qubit, hence no entanglement). However you can also consider the occupancy number of the two spatial modes as defining the subsystems, in which case you can easily
have a maximally entangled pair, and indeed it is rather simple, experimentally, to make rather complicated entangled states in this representation using only a single photon and many
modes. This is simply an interferometer, and is much easier to build than a full quantum computer, but as per Scott’s recent paper is probably not efficiently classically simulable.
My point here is that entanglement is not necessarily the hard part of quantum computing (in this case the problem is performing non-linear gates), and so I believe making conjectures which have entanglement as a key part is destined to fail, since entanglement itself is not an intrinsic property of nature. It is something we impose on nature by choosing a representation for our information (in terms of choosing whether and how we partition into subsystems), and there is often more than one reasonable way to do this. Indeed, decoherence isn’t even the current limiting factor in several implementations, including many optical implementations, so on its own it doesn’t seem to be a full explanation for why we have not yet made more progress.
However, I just want to add that I agree with your last point. Phase flips are relevant for some systems, but it is only part of the picture, and doesn’t really tell the whole story.
There are many systems where such an error model does not apply.
Lastly, I’d like to say that I find the 1st question you mention (“Do my conjectures (or perhaps even any similar attempt) cause classical computers to fail?”) to be very interesting.
Indeed, it makes me wonder whether it might be possible to prove something along these lines. This may seem a fool’s errand, since generally classical computing is deemed to be immune from phase errors, but this isn’t really true. This is something that seems obvious when you treat a classical computer classically; however, it is fundamentally false. Whether or not we may like to believe it, classical computers are still governed by quantum mechanics, and sufficiently strong dephasing noise will simply freeze them via the quantum Zeno effect, and even at lower levels will lead to incorrect transitions between states (I won’t say bit-flip errors, because most classical computers represent bits with systems that can be in far more than 2 states).
In any case, I have found the debate so far very interesting (if somewhat infuriating, as I find it hard to understand what could possibly lead someone knowledgeable in the area to
conjecture that building a quantum computer is impossible), and will follow the series with much interest.
☆ February 2, 2012 7:37 am
” I have found the debate so far very interesting (if somewhat infuriating, as I find it hard to understand what could possibly lead someone knowledgeable in the area to conjecture that
building a quantum computer is impossible), ”
In my case the answer is simple: 35 years of experience in the theory of quantum open systems, which deals exactly with quantum noise, decoherence, stability, etc.
☆ February 2, 2012 2:12 pm
Dear Joe, maybe the following remark on Conjecture 4 could be helpful.
Conjecture 4 refers to highly entangled states of n qubits. It should certainly apply to states which occur in quantum codes, or in basic quantum algorithms. More formally, we want the state to demonstrate 2-particle interaction for every (or for most) pair of qubits. To define it formally we cannot just talk about 2-qubit entanglement, so the way to do it is to demand that if you measure any n−2 of the qubits and look at the outcome, the expected entanglement of the pure joint state of the remaining 2 qubits will be large. (The papers give mathematical formulations.)
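A small numerical sketch of this operational definition (my own toy illustration; the function name and the 4-qubit examples are arbitrary choices, not taken from the papers): measure all but two qubits in the computational basis and average the entanglement entropy of the leftover two-qubit pure state over the outcomes. For a GHZ state the average is zero; for a generic random state it is bounded away from zero.

    import numpy as np
    from itertools import product

    def expected_pair_entanglement(psi, n, keep=(0, 1)):
        # Measure every qubit except `keep` in the computational basis and
        # average the entanglement entropy of the remaining 2-qubit pure state.
        psi = psi.reshape([2] * n)
        rest = [q for q in range(n) if q not in keep]
        total = 0.0
        for outcome in product([0, 1], repeat=n - 2):
            idx = [slice(None)] * n
            for q, b in zip(rest, outcome):
                idx[q] = b
            phi = psi[tuple(idx)].reshape(4)      # unnormalized post-measurement state
            p = np.vdot(phi, phi).real            # probability of this outcome
            if p < 1e-12:
                continue
            M = (phi / np.sqrt(p)).reshape(2, 2)
            rho = M @ M.conj().T                  # reduced state of the first kept qubit
            ev = np.linalg.eigvalsh(rho)
            total += p * sum(-e * np.log2(e) for e in ev if e > 1e-12)
        return total

    n = 4
    ghz = np.zeros(2 ** n, dtype=complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)
    rng = np.random.default_rng(1)
    rand = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
    rand /= np.linalg.norm(rand)
    print('GHZ   :', expected_pair_entanglement(ghz, n))   # 0.0
    print('random:', expected_pair_entanglement(rand, n))  # typically well above 0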
☆ February 3, 2012 12:25 am
Hi Gil,
So when you say “highly entangled” you are referring only to states of maximal Schmidt rank? These are the only states which exhibit entanglement between every pair when the remaining qubits are measured. If you want to make that entanglement significant, then you also need the amplitudes of the branches to be approximately equal, but I guess this is less important. What’s particularly strange is that such states are more robust against noise than states of low Schmidt rank such as the GHZ state, in the sense that entanglement can remain even in the presence of quite significant noise (see quant-ph/0004051 for example).
☆ February 3, 2012 8:36 am
Hi Joe, so does your counterexample still survive with this interpretation of “highly entangled”? (The various notions you mention are not sufficiently fresh in my mind to tell on the spot if this is the same definition. But if it is, it is certainly a better way to formulate it.)
☆ February 3, 2012 10:35 am
Hi Gil,
Well, I didn’t have your exact definition in mind when I made the comment. To be honest, I’m not sure that there is any stabilizer state with Schmidt rank n−1, though if there were, this would provide a counterexample. Linear graph states have Schmidt rank floor(n/2). Under Z errors these form a complete basis; X errors are equivalent to at most pairs of Z errors, and Y errors to at most triplets of Z errors. Labeling these with computational basis states, so that the bitstring records the number of Zs applied at each vertex (i.e. 0 or 1), we would then have an entangled state where any error leading to an orthogonal state would correspond to a set of bit-flip errors with weight at most three times the weight of the original error.
However, you may object that such a state does not have Schmidt rank n-1, which is equivalent to your definition when considering qubits as the local system. As I mention above, I’m not
sure that it is possible to pick a graph state as an example. However if you do pick the maximally connected graph and replace all controlled-Z gates in the constructive definition (in
which qubits are prepared in the plus state and then have controlled-Z gates performed wherever there is an edge in the corresponding graph) with weaker controlled-phase gates (say
associated with the angle theta), then you should get a state with Schmidt rank n-1. Again, local Z operators applied to this form a complete basis. X operators are equivalent to Z
rotations on neighbouring qubits through the angle theta associated with the controlled-phase gate applied, and hence, if theta is small, are essentially equivalent to slightly increased
uncorrelated noise in the neighbouring qubits. In any case, associating these weighted graph state basis states with the computational basis states as before (i.e. via the combination of
Zs required to produce a given basis state) again leads to a situation where correlations in the noise in one basis lead to correlations in the noise in the other basis and uncorrelated
noise in one basis corresponds to only slightly correlated noise in the other.
□ February 3, 2012 2:42 pm
Dear Joe,
Your suggestions are interesting. But I don’t think that your new tensor structure idea is relevant to the conjectures.
My conjectures depend on the computational basis, which is required for the definition of the qubits and gates. In particular, they depend on dependent errors for gated qubits.
The setting for Conjecture 3 is this:
We have a noisy quantum computer where we assume (as in the standard model of noise) that when we create a pair of entangled qubits by a gate, the errors are of arbitrary form. Let’s restrict ourselves further to the case that there are gates acting on one and two qubits, and that the 2-qubit gates induce a substantial positive correlation between the events that the two involved qubits are corrupted.
Quantum fault tolerance allows you to start with such noisy entangled pairs and create pairs of entangled qubits with almost independent errors.
We want to identify crucial properties of noisy systems for which QFT fail.
QFT can fail for several reasons, such as:
1) the computer program does not enact quantum fault tolerance,
2) the error rate is above the threshold of the threshold theorem,
3) crazy properties of the noise.
And the way we propose to detect such a failure is that the property “positively correlated noise for entangled qubits” carries over also to pairs of entangled qubits which are created along the
computation. This is the context of Conjecture 3.
The rationale of Conjecture 4 is that for highly entangled states, Conjecture 3 (or a somewhat stronger form) will say that every pair of errors is substantially positively correlated, and this implies error synchronization.
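One heuristic way to see this implication, in rough notation of my own (the papers give the precise statements): let $X_i$ be the indicator that qubit $i$ is corrupted, with $\Pr[X_i=1]=p$ and $\mathrm{Cov}(X_i,X_j)\ge c\,p(1-p)$ for every pair. Then

$\mathrm{Var}\bigl(\sum_{i=1}^{n} X_i\bigr) \;\ge\; n\,p(1-p) + n(n-1)\,c\,p(1-p) \;=\; \Omega(c\,n^2),$

so the number of corrupted qubits fluctuates on the scale $n\sqrt{c}$ rather than $\sqrt{n}$: errors arrive in macroscopic bursts, which is what “error synchronization” expresses.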
Of course, if our system has gates which create pairs of entangled qubits with independent errors to start with, then we cannot expect the conjecture to hold. The conjecture is an amendment of the standard model of noisy quantum computers.
In your construction you replaced the tensor power structure by another tensor power structure, so now the states are entangled to start with and you don’t need noisy 2-qubit gates at all to create entanglement. This does not seem relevant to us. (And I don’t think it violates the mathematical formulation of the conjectures.) Again, we start with a quantum computer where the only way to create entanglement is via gates acting on 2 or more qubits.
Anyway, it will be best to check your idea for Conjecture 3 on a small number of qubits.
☆ February 4, 2012 8:44 am
Hi Gil,
My point was that there is often more than one natural basis, and often there are two or more natural choices for the basis and subsystems where the basis states are entangled with
respect to one another.
If for example I have a set of strongly exchange coupled electron spins, do I consider spin states as the basis or do I consider eigenstates of the Hamiltonian? Both seem perfectly
reasonable, both have been used in QC implementations, and pretty much any assignment of local subsystems to the Hamiltonian eigenstates will lead to each basis being entangled with
respect to the choice of local systems for the other.
18. February 2, 2012 3:27 am
Dear Colleagues,
one cannot understand the problems with FTQC in terms of flip or phase errors, or even their generalizations to clusters of such errors for a fixed number of qubits. Imagine a thermalization process
of a quantum system with complicated Hamiltonian, i.e. with highly entangled eigenstates.
To drive such a system to an equilibrium (Gibbs) state, which is a mixture of those eigenstates, one needs a highly entangled (correlated) noise which contains error operators acting on the whole
Hilbert space and not local flip or phase errors. The situation is completely different for classical systems, like those described by Ising Hamiltonians. Here the Hamiltonian eigenstates are
product vectors and the Gibbs state can be easily generated by local flips (phase errors are irrelevant as they do not change the eigenvectors). This is the reason why the Metropolis algorithm is so successful in classical statistical mechanics but does not work in the quantum case. Only the combination of a truly quantum (entangling) computer’s dynamics with an essentially classical model of errors can produce “miracles” like the threshold theorems in FTQC.
□ February 6, 2012 1:32 pm
This is an interesting theory, even apart from the FTQC question. I think it’s a possibility, but I believe more in the possibility that interactions affecting a few terms at a time will
still eventually drive the system to equilibrium. This is clearly possible with *some* sequence of two-qubit interactions; consider the quantum Metropolis algorithm of Temme et al.
Another possibility is that systems are metastable and take exponentially long to reach their ground state.
What I think is the more relevant possibility for quantum computing is that everything is far out of equilibrium, and work is continually performed to pump away errors and keep it that way.
Is there a proposed mechanism that would produce highly entangled noise?
□ February 6, 2012 2:31 pm
Is there a proposed mechanism that would produce highly entangled noise?
Real-world external reservoirs commonly exhibit relaxation times ranging from seconds to minutes to hours, and at the same time generically couple multiple quantum gates via Preskill-style $k$-qubit interactions, and thus qualify sensu stricto as entangled noise.
Well-known examples of non-Markovian multi-qubit noise mechanisms include thermal magnetic noise (in conductors), spin magnetic noise (in nuclear spin reservoirs), electrostatic noise (in
insulators), patch effect noise (in conductors), to say nothing of (typically) slower noise processes associated to ionic transport (across permeable interfaces) and dislocation motion (in
lattices). And yes, in our own QSE prototypes we commonly encounter all of these noise mechanisms (and more).
These “schmutzy” interactions generically are associated to Hamilton flows that are (in Robert Alicki’s phrase) “not local flip or phase errors,” and that are nonstationary on time-scales of
seconds to hours (or longer), so that these reservoirs remember for a long time the QC states that they have “seen” in the past. Moreover, even if we imagine a quantum computer fabricated of
trapped ions floating in a dark cold optical cavity, still that optical cavity would be observed by (for example) detector/emitter diodes that (even when unpowered) couple their “schmutzy”
conduction bands to the cavity modes, thus generating precisely the Preskill-style $k$-qubit noise that is most fatal to scalable error correction (as presently conceived).
Now, do these mechanisms represent fundamental obstructions to QC, or do they represent practical obstructions … or are they best regarded as enticing new opportunities for fundamental
research in both physics and mathematics, and also for eminently practical new enterprises? Now those are four interesting questions, to which “plausibly”, “surely”, “definitely”, and
“assuredly” are (to my mind) four reasonable answers.
19. February 2, 2012 8:49 am
Just a quick remark
The “debate” format makes it seem like we have a conflict between two competing theories: one that says scalable QC is possible and one that says it isn’t. But that’s not the situation at all:
this is a conflict between (a) a theory, and (b) attempts to poke holes in that theory, without proposing a replacement. This can be seen most clearly in the exchange between Gil and Boaz above.
In response to Boaz’s extremely-pertinent question, of whether, if FTQC is impossible, that means there’s an efficient classical simulation of realistic quantum systems (and if so, what is it?),
Gil basically says that he doesn’t know and prefers to focus on his Conjectures 1-4 casting doubt on FTQC. This sort of response might be fine in a legal defense (“if not my client, who DID
commit the murder? who knows? maybe aliens, maybe the boogeyman. I don’t have to explain it, I just have to cast enough doubt on the prosecution’s case!”). But it’s more problematic in science,
where we want to know what the world is LIKE, and where the modus operandi is to accept a model provisionally until it’s replaced by a better model. Right now, there’s a detailed picture of what the world could be like such that QC is possible, but no serious competing picture of what the world could be like such that it isn’t. Obviously, that situation could change at any time, and while I
don’t expect such a change, I would welcome it as the scientific thrill of my life.
□ February 2, 2012 9:09 am
Dear all,
A quick answer. I agree with what Scott writes in the first paragraph about the asymmetry. This is a discussion between a theory and attempts to poke holes in that theory.
Just to make it clear: the theory we discuss is *not* quantum mechanics, but the postulate that universal quantum computers are possible, and its wonderful theoretic consequences.
I promised Boaz to give some more detailed answer to his good questions about computational complexity and about simulating classically realistic quantum systems. Please stay tuned.
I realize that to say “I don’t know” is not so scientific, and indeed I commend Scott for never saying or even thinking anything like that.
“Obviously, that situation could change at any time, and while I don’t expect such a change, I would welcome it as the scientific thrill of my life.”
$100,000? 10%? Something?
☆ February 2, 2012 9:57 am
Gil, pleading ignorance is fine, but if you want to be consistent about it, it seems to me that you should take the one explanatory theory we have and then be equally ignorant in every
direction. I.e., “who can say what the complexity of simulating physics really is? Maybe it’s more than BQP. Maybe it’s less than BQP. Maybe it’s incomparable.” Note that strictly
speaking, all three possibilities are perfectly consistent with QM (e.g., maybe there are powerful oracle unitaries available, or gigantic Hilbert spaces). What’s strange to me is
focussing on one of these possibilities over the others.
☆ February 2, 2012 10:18 am
BTW: sure, no problem. I hereby offer $100,000 for a demonstration, convincing to me, that scalable QC is impossible in the physical world.
☆ February 2, 2012 12:12 pm
That’s like offering $100,000 to anyone who can prove that Bigfoot doesn’t exist.
☆ February 2, 2012 12:44 pm
Craig: No, I don’t think so. Whether Bigfoot exists is a question about the contingent history of evolution on Earth. By contrast, whether scalable QC is possible is a question about the
laws of physics. It’s entirely conceivable that future developments in physics would conflict with scalable QC in the same way relativity conflicts with faster-than-light communication
and the Second Law conflicts with perpetuum mobiles. It’s such a development in physics that I’m offering $100k for.
☆ February 2, 2012 1:26 pm
The meta-assertion “ ‘Scalable QC is possible’ is a question about the laws of physics” can be parsed in differing yet equally valid ways. Consider for example the parallel constructions
‘One-meter tokamak reactors are possible’ and ‘Alcubierre warp-drives are possible.’ A theoretician might regard all three statements as true (in certain well-posed circumstances), an
experimental physicist might regard all three statements as posing wonderful challenges, and a systems engineer might regard all three statements as sufficiently unrealistic as to be
effectively false — and some folks (me for one) would argue that in our present state of knowledge, all three opinions are reasonable.
☆ February 3, 2012 3:13 pm
You quote Leonid Levin in your PhD thesis. If you are looking for “future developments in physics would conflict with scalable QC in the same way relativity conflicts with
faster-than-light communication and the Second Law conflicts with perpetuum mobiles”, then his criticism of scalable QC seems to me to be the answer you are looking for.
The current laws of physics only hold for <= 12 decimal places as Levin said. Scalable QC goes out of this range, so it is in conflict with the current laws of physics. Of course, nobody
has canonized this as a law of physics that has been taught in advanced courses in physics as far as I know, but it is a common theme in all physics experiments. Think about it. Why
haven't scientists gotten past this problem of limited precision if finite precision wasn't a law of physics?
A possible physical explanation of this limit is that the universe is discrete not continuous, but it doesn't really matter why, just that this is the way things are, just as Einstein's
relativity and the Second Law are just the way things are, as far as scientists say they are.
Similarly, it is possible to fold paper in half 12 times but probably not 13 times. http://pomonahistorical.org/12times.htm
The fact that one can extrapolate a theory to ridiculous conclusions does not mean the ridiculous conclusions are true. It also doesn't mean that the ridiculous conclusions are science
either. It just means that they are ridiculous conclusions. Scalable quantum computing is a ridiculous conclusion, based on current science, just as Bigfoot existing is a ridiculous
conclusion, based on current science (except for a TV show I saw on it recently where scientists said it might be true).
☆ February 3, 2012 8:16 pm
YAWN. I’ve answered that “argument” in many places—see for example page 6 of Multilinear Formulas and Skepticism of Quantum Computing. We can easily create states, in the lab, today,
right now, that have amplitudes of 2^(-10000) or even smaller: to do so, just prepare 20000 independent qubits in the state (|0>+|1>)/sqrt(2)! That already makes it completely obvious
that amplitudes can’t behave like energies or momenta or other observable quantities that we measure to a few digits of precision; they have to behave more like probabilities. (If you
flip a coin 10,000 times, the laws of probability don’t start breaking down because the probabilities take too many digits to write down!)
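Spelled out, for the record: the state $\bigl(\frac{|0\rangle+|1\rangle}{\sqrt{2}}\bigr)^{\otimes 20000}$ assigns every computational-basis string $x \in \{0,1\}^{20000}$ the amplitude $(1/\sqrt{2})^{20000} = 2^{-10000}$, even though it is prepared by 20000 independent one-qubit operations.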
So then Levin is forced to say that, well, no, it’s not the number of decimal places in the amplitudes that he really meant to talk about, but some other property of the quantum state
that somehow involves its complexity or entangledness. So what is that property, exactly? That question was precisely the starting point of my paper, which tried to address the question
as sympathetically as possible but (I think, in retrospect) came up mostly empty. Anyway, the paper is full of simple observations about what I called the “Sure/Shor separator” problem;
it might be helpful for skeptics to understand those observations if they don’t want to retread ground from 10 years ago.
Look, I like and respect Levin a lot, but for ideological reasons, there’s no limit whatsoever to how howlingly-wrong he can be when it comes to quantum computing. There’s a reason why QC
skeptics like Gil Kalai don’t bother to repeat the precision “argument”: because, presumably, they know full well that it doesn’t survive a moment’s thought!
☆ February 4, 2012 7:50 pm
I looked at your paper, particularly page 6.
To make things more precise than Levin’s number of decimal places argument, I’ll state my hypothesis that scalable quantum computing should fail to yield any gain over classical
computing, since this has never been demonstrated in a laboratory. This separates Sure states from Shor states as you defined them.
☆ February 5, 2012 6:29 am
Craig, first of all, “gain over classical computing” is a property of computations, not of states. That already tells me you haven’t understood my paper at all. But more importantly,
saying “QC should fail to yield any gain over classical computing” isn’t even an attempt to answer the Sure/Shor separator question: it’s just a restatement of the question! The question
is: why does QC fail to yield a gain? Where does the breakdown happen? What physical property of all realistic quantum states would be violated by the states that occur in Shor’s
algorithm? (And please don’t tell me the tautological property of “being useful for getting a computational speedup”—I know you think that, but presumably Nature doesn’t know or care what
you’re using the state for, it just cares about some physical attribute of the state itself?) Do you recognize the urgency of these questions? Do you understand that the burden is on the
QC skeptics to answer them?
☆ February 5, 2012 6:08 pm
The breakdown occurs because of an overflow error that you get when the complexity of a problem is too high, just as if you were to try to write the program for the latest version of
Windows on a personal computer built in the late 1970s.
Nature just doesn’t have enough memory for exponential computations. There is no experimental evidence to suggest otherwise.
I posted earlier why I thought this is true – because we live in a digital universe.
But it doesn’t really matter why. Just that the evidence points to this being true.
You have to be careful about extrapolation. Just because a theory is successful in making some crazy predictions doesn’t mean that the theory will be successful in making some crazier predictions.
It’s better to look for another theory that makes only the crazy predictions, but not the crazier predictions.
☆ February 5, 2012 6:30 pm
Here’s another question which is easier, but I’d like to know your answer: When I was a kid, I used to experiment with the copy machine in my father’s office. I would make a copy of a
picture, then make a copy of the copy of the picture, then make a copy of the copy of the copy of the picture, etc.
I found that after about 6 copies, the original picture was unrecognizable. I assume that copiers are better today than they were then, but still I bet after a finite number of copies,
the original picture would be unrecognizable.
Yet, in theory, this still shouldn’t be. Do you think it is possible to build a copy machine in which after any n copies, the original picture will be recognizable? And what is the
difference between this phenomenon and quantum computing?
☆ February 5, 2012 6:45 pm
Craig, you might be confusing quantum computers with analog computers.
This was an early source of confusion, which was addressed in the mid-1990s with the invention of quantum error correcting codes. These proved that quantum errors could be “digitized”
just like classical errors. However, I think those insights are not widely known or understood.
In the case of the photocopier, the answer is that a page of text written in English might slightly deteriorate with each copy if you simply rephotocopy each time. However, if you scan it, use OCR, and print a new copy from it, then if this procedure works once it’ll work an unlimited number of times. (More accurately, a number of times that is exponential in the amount of error-correcting resources we use at each step.)
☆ February 5, 2012 6:41 pm
Craig wrote:
Do you think it is possible to build a copy machine in which after any n copies, the original picture will be recognizable?
Maybe have the copier embed a nearly-invisible error-correcting code into the copy, so that nearly no further deterioration will happen on later copies? Which sounds rather analogous to what they’re trying to do with quantum computing.
☆ February 5, 2012 10:39 pm
I’m not confusing the two. This is a separate question about photocopiers. Also, OCR only works for text. I’m talking about copies of pictures here, for which OCR doesn’t work.
Ørjan Johansen,
Easier said than done. If it’s nearly invisible, the copier will not be able to read it. The copier can only read visible stuff. And if the copier embeds visible stuff into the copy, it’s
not a copy. The copy machine must make a copy at each step and read the copy at the next step. Error correcting codes aren’t going to help us here.
If you cannot create a copy machine that creates a perfect copy, then I don’t see how you are going to make large scale quantum computing work, as it seems to me that large scale quantum
computing is more difficult.
The relevant analogy here is that just as one can create perfect copies of text, one can do small-scale quantum computing. And just as one cannot create perfect copies of pictures, one
cannot do large-scale quantum computing.
☆ February 5, 2012 10:43 pm
What you say would be correct if quantum computers were analog computers.
However, quantum errors can be digitized, just like classical errors.
And quantum information can be encoded in error-correcting codes, just like classical information.
See this paper and the citing articles.
☆ February 5, 2012 11:35 pm
But quantum computers are analog computers. The weights of each of the qubits are analog. You can do error-correcting codes for flipped qubits, but what if the weights have errors in them
and the errors grow after each unitary operation? Then the quantum computation develops errors which cannot be corrected, similar to the errors which develop in copying a picture.
This doesn’t matter for equally weighted states – Sure states – since the errors have equal probability. But this does matter for Shor states.
☆ February 6, 2012 12:13 am
Have you read the 1995 Shor article? Or subsequent work on QECC? (Any textbook on quantum computing would also suffice.)
What you said was the prevailing view before QECC was invented.
But QECC directly addressed this concern 17 years ago. Have you found a flaw in all those papers? Or just not noticed them?
Not to pull rank, but before asserting that experts are wrong, it is polite to attempt to understand why they are claiming what they do, and not to first ask them to educate you about the
basics of the field.
In this case, here is a sketch of the reason why errors can be discretized.
Let V be the subspace of (C^2)^{\otimes n} corresponding to a quantum error-correcting code on n qubits. If it can correct one error, then V is orthogonal to each of the subspaces that result from applying sigma_i^{(j)} to V, where i ranges over {X,Y,Z} and j over 1, …, n. Also, these subspaces should be orthogonal to each other. Thus if one error occurs (meaning one Pauli), then a measurement can figure out which one happened, and can undo it.
What if each qubit is independently rotated in an unknown direction by up to an angle epsilon? Then the amount of weight in the single-error (correctable) sector is something like n
epsilon, and for small enough values of epsilon, the probability of an uncorrectable error resulting from the measurement will be O(n^2 epsilon^2).
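In symbols, a rough version of this bookkeeping (my own simplified rendering, assuming each qubit $j$ is independently rotated by an angle $\theta_j \le \epsilon$ about some axis $\hat n_j$):

$\bigotimes_{j=1}^{n}\bigl(\cos\theta_j\, I + i\sin\theta_j\, \hat n_j\cdot\vec\sigma_j\bigr)|\psi\rangle \;=\; \sum_{S\subseteq\{1,\dots,n\}} c_S\, P_S\,|\psi\rangle, \qquad |c_S|\le\epsilon^{|S|},$

where $P_S$ is a Pauli error supported on the set $S$. The syndrome measurement projects onto a definite error sector, landing in a weight-$k$ sector with probability at most $\binom{n}{k}\epsilon^{2k}$; for a distance-3 code only the sectors with $k\ge 2$ are uncorrectable.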
☆ February 6, 2012 3:53 am
Dear Aram, let me reply to the question you put to Craig. I have read all the papers you mentioned.
I spent a few years (together with Michal Horodecki, who is a young and brilliant guy, as you know) trying to understand the proofs of the threshold theorems. We failed, but at least we understood the assumptions. A mathematical theorem is only relevant for physics if its assumptions can be derived from physical principles. All theorems in quantum information are about simplified models and not about the fundamental theory (which in this case should be the quantum electrodynamics of electrons and nuclei). Often a simple model is good enough to grasp the essence of the phenomenon. But for the FTQC theory, which deals with extremely subtle effects, all those neglected “tails” or “irrelevant contributions” can matter. I think that physicists like John Preskill understand this problem very well, and I appreciate very much their efforts to give a solid physical basis for FTQC. But up to now all those results are only partially successful. They changed my initial attitude from “this is pure nonsense, forget it” to “this is most likely a wrong idea, but an interesting challenge concerning the very fundamental problems of statistical mechanics and thermodynamics.”
At present there is strong evidence supporting the picture of a QC as a kind of analog computer. For example, complex quantum systems behave identically to classical chaotic systems, which can be quantified using a version of the Kolmogorov-Sinai dynamical entropy.
Classical Newtonian physics, before the discovery of chaos, would also suggest the feasibility of extremely powerful classical analog computers based on continuous variables.
☆ February 6, 2012 1:43 pm
Robert, I take your objections seriously. Sorry for responding to Craig’s first; his were just more annoyingly uninformed.
But I have to take issue with the very last point you made, which is that we needed chaos to tell us that Newtonian dynamics could not produce super-strong analog computers. In fact, if
you posit an infinitesimal but constant noise rate (like 10^-20), then analog computing pretty clearly fails, even without knowing anything about chaos. In other words, even if we had
never heard of chaos, we’d be able to notice our inability to prove a threshold theorem for analog computing that could cope with i.i.d. noise.
But the threshold theorem of quantum computers uses essentially the same assumptions as does the threshold theorem for digital classical computers. Yes, quantum systems can, in some
cases, act chaotically, just as classical systems can. But I don’t see how you can say that this behavior is generic. Are superconductors chaotic? BECs? Lasers?
And a classical RAM chip is still described by quantum mechanics: if it is chaotic, how does it store a bit so well?
☆ February 7, 2012 9:24 am
Dear Aram, you have raised a number of very interesting questions, some of them concerning still quite controversial issues. To address them honestly one would have to write a book, but as the discussion becomes more and more interesting and substantial, I will try to sketch my point of view. Sorry for repeating myself, but certain issues are notoriously confusing.
I believe more in the possibility that interactions affecting a few terms at a time will still eventually drive the system to equilibrium. This is clearly possible with *some* sequence of
two-qubit interactions; consider the quantum Metropolis algorithm of Temme et al.
Yes, but this is a quantum algorithm performed on a QC and cannot be effectively implemented classically, while classical Metropolis works very well for classical systems. The difference between quantum and classical is that flips of individual spins commute and are good error operators for classical systems. Therefore, the result of sampling does not depend on the order of the applied errors (flips). For quantum systems the error operators do not commute, even for such “almost classical” models as Kitaev’s (R.A. et al., J. Phys. A 42, 2008). Therefore, for quantum models we need additional averaging over the permutations in the sequence of errors, which could be done efficiently only on a QC.
BTW, the fact that quantum error operators depend not only on the coupling to the bath but also on the Hamiltonian is appreciated only by a handful of experts. In hundreds of papers and dozens of books the authors simply add a convenient Lindblad generator to an arbitrary Hamiltonian. Some of them also claim that one can arbitrarily “design” Lindblad generators, e.g. to drive the system to a requested state.
Another possibility is that systems are metastable and take exponentially long to reach their ground state.
Why should it help FTQC?
What I think is the more relevant possibility for quantum computing is that everything is far out of equilibrium, and work is continually performed to pump away errors and keep it that way.
This is a very popular but wrong intuition. Imagine a system coupled to a bath at (almost) zero temperature which is driven to its (almost) ground state. The possibly high initial entropy is reduced to (almost) zero, but nevertheless the system ends in a unique final state, i.e. it does not contain any information about the initial input. What destroys information is not high entropy alone, but also the always-positive entropy production. Adding a non-equilibrium device like a refrigerator can reduce entropy, but at the price of increased entropy production.
But I have to take issue with the very last point you made, which is that we needed chaos to tell us that Newtonian dynamics could not produce super-strong analog computers. In fact, if
you posit an infinitesimal but constant noise rate (like 10^-20), then analog computing pretty clearly fails, even without knowing anything about chaos. In other words, even if we had
never heard of chaos, we’d be able to notice our inability to prove a threshold theorem for analog computing that could cope with i.i.d noise.
What you propose is exactly one of the tests for classical and quantum chaos. If you add a bit of noise to a classical chaotic system you see that the entropy of the state grows linearly with time, and the slope is practically independent of the noise magnitude. This slope is the Kolmogorov-Sinai entropy! Similar behavior is observed for many examples of quantum systems with chaotic classical counterparts, but also for quantum systems with randomly chosen Hamiltonians (R.A. et al., PRL 77, 1996; J. Phys. A, 2007, and refs. therein). So it is quite justified to say that this is generic behavior.
Again, a popular but completely wrong intuition is that quantum systems are intrinsically more stable than classical ones because the Schroedinger equation is linear, in contrast to the nonlinear Newton equation.
Are superconductors chaotic? BECs? Lasers? And a classical RAM chip is still described by quantum mechanics: if it is chaotic, how does it store a bit so well?
Firstly, a few remarks about quantum states, macroscopic quantum states and classical states, a notoriously confusing issue. Notice that all systems are quantum; only the states can be classified as above. “Classical systems” is a short-hand term for quantum systems in classical states.
1) A one- or few-phonon state of a macroscopic diamond is a quantum state, but not a macroscopic quantum one. One can entangle such states for two diamonds (realized experimentally), but one cannot “entangle diamonds”.
2) A coherent state of 1000 phonons is a classical state.
3) A superposition of such a state with another one with 2000 phonons is a macroscopic quantum state (never realized).
4) Superconducting qubit states are quantum states, similarly to 1). They are superpositions of a ground state and a single-excited Cooper pair state (see arXiv:1012.0140; my previous preprints on this topic questioning the quantum character of Josephson qubits were wrong).
5) Superconducting states of macroscopic samples and BEC states are classical states. The corresponding “condensate wave functions” are classical objects (e.g., they satisfy the nonlinear Ginzburg-Landau or Gross-Pitaevskii equations). They cannot be entangled (any more than waves on Lake Ontario can be entangled with waves on Lake Sniardwy), and they are useless for QI/QC. Amazingly, there is PACS 03.75.Gg “Entanglement and decoherence in Bose-Einstein condensates”.
Large superconductors, lasers and BECs, which are classical, can display chaos in some range of parameters (some randomly found refs: Xu et al., Chaos in superconducting tunnel junctions, J. App. Phys. 12, 1995; J. Ohtsubo, Semiconductor Lasers: Stability, Instability and Chaos, Springer 2010; K. Zhang et al., Hamiltonian chaos in a coupled BEC-optomechanical-cavity system, PRA 81, 2010). But those systems are also open systems operating in the classical regime. The environment not only selects classical states as the only relatively stable ones, but also adds friction to their dynamics. Friction can suppress chaos and “digitalize” classical systems, which can occupy discrete local minima of the free energy separated from each other by macroscopic barriers. This is exactly the regime in which all classical computers operate. The price paid for stability is the work necessary to overcome friction, which is dissipated into the system and makes classical gates irreversible. This does not harm classical computations but, I think, is deadly for quantum ones.
☆ February 6, 2012 1:50 pm
Craig, if you’ve found a flaw in the threshold theorem in the case of independent control errors, as you suggest, then you really should write it up!
If correct, it’ll make a big splash.
Here’s an example of a proof of the threshold theorem:
If it’s really wrong (and not just that they missed a factor of 2, and so have to change the numerical value of the threshold), and you find something everyone has missed, then I promise your
work will be very influential.
☆ February 6, 2012 10:48 am
The quantum error correcting codes papers are good mathematics, but don’t answer the following question:
Quantum computing which factors integers works as follows: You have an initial state and you apply a polynomial number of unitary matrices to the initial state. In the real world, these unitary matrices each have a small error. But after a polynomial number of unitary matrices are multiplied together, this small error turns into a very large exponential-size error.
It’s just like copy machines making copies of copies of copies, etc.
Error-correcting codes won’t help us here, for the same reason I observed as a kid that after 6 copies of a picture, the picture is no longer recognizable. The error-correcting codes are part of the system. They don’t fundamentally improve the system, which is prone to error by definition.
When I took a course in Numerical Linear Algebra, one of the biggest no-no’s I learned was multiplying matrices together when you don’t have to. Yet, this is the basis of quantum
computing, not just once but many times!
☆ February 6, 2012 11:46 am
Craig posts: “Error correcting codes are part of the system. They don’t fundamentally improve the system.”
Craig, skeptics and nonskeptics alike appreciate that one of the greatest and most surprising discoveries of QC/QIT has been that this seemingly plausible assertion is not correct for a
large class of quantum gate errors. Yet on the other hand, no error-correction method encompasses all classes of errors, and in this respect too QC skeptics and nonskeptics alike
appreciate that John Preskill’s GLL post “Sufficient condition on noise correlations for scalable quantum computing” (with its attached notes and references) has been perhaps the single
most outstanding contribution to this GLL discussion.
For me, a one-sentence question describing a key open issue is: “Is Nature sufficiently ingenious and profligate in her noise-like dynamics, that neither she nor we need ever evolve
dynamical trajectories on a covering Hilbert state-space of dimensionality sufficient for more-than-P computation?”
At the present time, we don’t know enough even to be confident that this one-sentence question has a definite one-word answer (“yes” versus “no”) … it may well be that definitive answers
await better-formed questions.
This uncertainty is of course very good news for young researchers. :)
☆ February 6, 2012 2:23 pm
I never said I found any flaw in the threshold theorem. I said I found a flaw in the paradigm for quantum computing, as far as I understand it. How is it possible that you can multiply a bunch of n unitary matrices together, each having error epsilon, without getting an exponential error of (1+epsilon)^n in the final matrix?
I never said I was an expert in large-scale quantum computing, but I know a silly theory when I hear it. And I think this explains why the theory is silly.
☆ February 6, 2012 2:26 pm
They’re unitary, so the error is at most n * epsilon.
See Thm 2.2 of http://www.jstor.org/stable/55010 .
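A quick numerical sanity check of that bound (my own sketch; the dimension, gate count, and noise model are arbitrary choices): perturb each of n unitaries by at most epsilon in operator norm and compare the error of the full product with the sum of the per-step errors.

    import numpy as np
    from scipy.linalg import expm
    from scipy.stats import unitary_group

    rng = np.random.default_rng(0)
    n, d, eps = 50, 8, 1e-3          # gate count, dimension, per-gate error

    def perturb(U):
        # Multiply U by exp(i*eps*H) with ||H|| = 1, so that ||U' - U|| <= eps.
        H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        H = (H + H.conj().T) / 2
        H /= np.linalg.norm(H, 2)
        return expm(1j * eps * H) @ U

    ideal, noisy, budget = np.eye(d), np.eye(d), 0.0
    for _ in range(n):
        U = unitary_group.rvs(d, random_state=rng)
        V = perturb(U)
        ideal, noisy = U @ ideal, V @ noisy
        budget += np.linalg.norm(V - U, 2)

    # Telescoping: ||V_n...V_1 - U_n...U_1|| <= sum_i ||V_i - U_i||, because
    # unitaries have operator norm 1; the error grows linearly, not exponentially.
    print(np.linalg.norm(noisy - ideal, 2), '<=', budget)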
☆ February 6, 2012 2:39 pm
Thank you very much, aram.
Well then in the words of the famous Gilda Radner, “Never mind!”
Seriously, I appreciate you correcting me.
☆ February 6, 2012 8:59 pm
Thinking about this a little more, n*epsilon is still a large error, of the order of n. Wouldn’t this still prohibit quantum computation for large n?
Quantum error correcting codes still cannot help us here.
☆ February 6, 2012 9:06 pm
Yup, so you need FTQC to bring the per-gate error rate down to something like 1/(10n). Using conventional proposals for this, the overhead is poly(log(n)), unfortunately with bad constants (like millions or billions).
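For reference, the rough concatenation bookkeeping behind that poly(log(n)) figure (standard, with constants suppressed): if one level of encoding maps a physical error rate $p$ to $A p^2$, then after $k$ levels

$p_k = (A\,p)^{2^k}/A,$

so for $p$ below the threshold $1/A$, taking $k = O(\log\log(n/\delta))$ levels drives the logical error rate below $\delta/n$, at a per-gate overhead of $c^k = \mathrm{poly}(\log(n/\delta))$ for some constant circuit blow-up $c$ per level.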
☆ February 6, 2012 9:42 pm
Thank you aram,
Now I see why this is a difficult problem.
I cannot prove this, but I think that large-scale quantum computers will not work because we live in a digital universe with a limited amount of memory space. Large-scale quantum
computers will generate an overflow error.
This is the only explanation I can see why scalable quantum computers won’t work. I just cannot believe that we live in a universe where it is possible to factor integers easily but
impossible to solve NP-hard problems easily.
☆ February 9, 2012 6:58 pm
It looks like I was right when I said, “That’s like offering $100,000 to anyone who can prove that Bigfoot doesn’t exist.”
Look at comment #97 here by Scott:
“Drew #95: ‘Your’ money is in my bank account, retirement fund, etc. :-), and it’ll stay there until someone offers a reason why quantum computing is impossible in principle—meaning, not in my lifetime, not in the next hundred years, but forever.”
This was a response to comment #95, that proposed a solution to Scott’s problem.
Unless I’m misunderstanding this statement, this offer of $100K appears to be just a publicity stunt.
□ February 2, 2012 1:49 pm
“In response to Boaz’s extremely-pertinent question, of whether, if FTQC is impossible, that means there’s an efficient classical simulation of realistic quantum systems (and if so, what is
it?), Gil basically says that he doesn’t know”
Hi Scott,
I said (in response to my understanding of Boaz’s question) that I don’t know if imposing Conjectures 1-4 on a model for noisy quantum computers suffices to reduce their power to BPP. (I also said that while I cannot give a complete answer, I know something about this question; it is related to interesting questions in computational complexity, my papers discuss them, and I will relate to them a bit later.)
A related question is if there are other ways to get a speed-up over BPP even if fault-tolerant quantum computing based on error-correcting codes is impossible. It is also interesting. (We need to find the right formal way to say “quantum fault tolerance based on error-correcting codes is impossible.” You can regard my approach as trying to say this formally.)
Another question is: if computationally superior quantum computation is impossible, and thus BPP is the complexity class describing the decision languages that nature can answer, does this mean that there is an efficient classical simulation of realistic quantum systems? And if the answer is yes, can one kindly describe how this classical simulation goes?
This is an interesting question (although we need to work a little to put it on formal ground). It was amply discussed on your blog, and I even participated. We may come back to this question later.
I must admit that I don’t know how to describe how to simulate even physical processes for which we do expect classical simulations to exist, and also that it is quite hard work, which in many cases remains to be done, to describe how to simulate realistic quantum systems with quantum computers. So, overall, I do not understand your complaints.
“who can say what the complexity of simulating physics really is? Maybe it’s more than BQP. Maybe it’s less than BQP. Maybe it’s incomparable.” Note that strictly speaking, all three
possibilities are perfectly consistent with QM (e.g., maybe there are powerful oracle unitaries available, or gigantic Hilbert spaces). What’s strange to me is focussing on one of these
possibilities over the others.
In this discussion we focus on quantum error correction and not so much on computational complexity. This may seem strange to you, but in my view there are good reasons for that. The computational complexity questions are very interesting as well. We may discuss them too. Again, I don’t understand what your complaint is.
Your $100,000 pledge is very generous and nice.
☆ February 2, 2012 5:57 pm
The following is in no way intended to be a criticism against Scott, just a general comment:
Pledging money is not generous and nice. It’s the following through on a pledge that is generous and nice. See Genesis בְּרֵאשִׁית 23.
20. February 2, 2012 10:54 am
A hugely enjoyable aspect of this debate is the opportunities it affords us for quoting inspiring passages, citing seminal articles … and especially, telling funny stories.
For inspiration it’s hard to beat Scott’s post (above), in which the following inspiring passage appears (here lightly universalized):
“There’s a detailed picture of [what] the world could be like such that QC is possible, but no serious competing picture of what the world could be like such that it isn’t. That situation could change at any time, and we would welcome it as the scientific adventure of our lives.”
What is a suitable starting point (for students especially) to embark upon this adventure? Everyone has their own ideas, and so let me commend one possible starting point (of many) that is the
focus of two recent arxiv articles: Sarovar and Milburn’s Continuous quantum error correction (arXiv:quant-ph/0501049) and Oreshkov and Brun’s Continuous quantum error correction for
non-Markovian decoherence (arXiv:0705.2342).
How might these ideas in these articles be further developed? For inspiration we can turn to ergodic theory in general and the Kolmogorov-Arnold-Moser (KAM) Theorem in particular. This particular
research roadmap is inspired by the following parallels between ergodic theory and FTQC. For many centuries it was postulated that (save for a set of measure zero) Hamiltonian dynamical systems
generically were ergodic; similarly for several decades it was widely expected that quantum noise destroys any realistic possibility of QC. However, beginning in the 1950s numerical experiments
(very unexpectedly) observed quasi-stable trajectories for a special class of Hamiltonian dynamical systems; similarly for a special class of noise mechanisms, the unexpected existence of
error-correction methods has been demonstrated.
The great achievement of KAM theory was to prove quasi-stability rigorously and generally. It is natural to wonder whether a similarly rigorous KAM-type theory might exist for (continuous) FTQC.
Specifically, if we translate the continuous error-correction dynamics of Sarovar, Milburn, Oreshkov, and Brun (and many others) into the symplectic language of KAM theory, can we rigorously
prove (or disprove) the existence of continuous error-correcting quantum dynamics upon computational manifolds having dimensionality sufficiently great for universal quantum computation?
There are plausible arguments for-and-against such a “quantum super-KAM theorem.” A plausible argument “for” is the promising numerical results of Sarovar, Milburn, Oreshkov, and Brun (etc.). A
plausible argument “against” is (what I take to be) a geometrized version of the arguments of Geordie Rose and Gil Kalai, that we should expect quantum super-KAM mechanisms to fail generically in
QC, because the dimensionality of the computational space increases exponentially while the dimensionality of the error-correcting generators increases only polynomially.
Like anyone who works problems associated to these issues, I have my own set of experiences and ideas relating to the various obstructions (both numerical and theoretical) to proving / disproving
quantum super-KAM theorems — obstructions that are generically associated to the historically tough issues of spatial localization, renormalization, and causality (topics for which quantum field
theory in particular exhibits various interlocking pathologies). But a broader and more universal observation is that nowadays the math-and-physics tools required to “get started” in quantum
super-KAM research are sufficiently simple as to be exhibited to students on one page, and yet this post-Dirac theoretical toolset is sufficiently deep and broad as to amply provision 21st
century students for what Scott aptly calls “the scientific adventure of our lives.” Which is all good! :)
As for a funny story, here’s one that Ingo Müller tells about his thesis advisor, the prominent thermodynamicist Clifford Truesdell, who fought a lifelong (and ultimately losing) battle against
what he regarded as the heresy of “Onsagerism” (so perhaps Geordie Rose especially will appreciate this story):
Truesdell was a consummate theoretician. He showed nothing but disdain for experiments, be they conducted in the laboratory or on the computer.
So, when Truesdell visited me in Berlin, on the first day he came and said “Ingo, can I ask you for a favour?” And I, eager to please my visitor and one-time mentor, said “Of course,
Clifford. What can I do for you?”
Truesdell: “Please don’t show me your lab.”
So by the end of the 21st century, will FTQC be regarded as gospel or heresy? Or might it be regarded (per Truesdell) as theoretical gospel and experimental heresy? Right now, no-one knows … and
for sure, we will have plenty of fun finding out, both in our equations and in our laboratories. :)
21. February 2, 2012 12:16 pm
Is Aram, the other “debater”, writing a dissertation in Greek, as a reply?
□ February 5, 2012 8:24 pm
Ah, internet time…
22. February 2, 2012 2:09 pm
Dear all, let me make one comment on Aram’s short response (it is related to computational complexity issues).
Aram wrote: “Conversely, Gil’s skepticism is based on error models that may have low single-qubit error rates, but are highly correlated even across large distances.”
This is true, but there is more to it. When we have “bad noise” (or detrimental noise, as I call it) that obeys Conjecture 1 (or Conjectures 2 or 3), there can be a very large discrepancy between the “single-qubit error rate” and the error rate in terms of trace distance (of the joint state of all qubits) per (small) time unit. The trace-distance rate seems the correct and relevant measure. As a result, Conjectures 1, 2, and 3 suggest that for highly entangled states the single-qubit error rate itself will scale up with the number of qubits. This will be the most devastating aspect of correlated errors.
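Here is a minimal toy illustration of this gap (a plain-NumPy sketch of mine; collective dephasing stands in for just one kind of correlated noise): for an n-qubit cat state the single-qubit marginals do not move at all, while the trace distance of the joint state grows roughly like $n^2\sigma^2/4$.

    import numpy as np

    def ghz(n):
        # the n-qubit cat state (|0...0> + |1...1>)/sqrt(2)
        v = np.zeros(2**n, dtype=complex)
        v[0] = v[-1] = 1/np.sqrt(2)
        return v

    def collective_dephase(rho, n, sigma):
        # average over a random collective rotation exp(-i*theta*Sz),
        # theta ~ N(0, sigma^2); the Gaussian average damps each coherence
        # rho[a,b] by exp(-sigma^2 (m_a - m_b)^2 / 2), m = total-Z/2
        ones = np.array([bin(k).count("1") for k in range(2**n)])
        m = (n - 2*ones) / 2.0
        dm = m[:, None] - m[None, :]
        return rho * np.exp(-0.5 * sigma**2 * dm**2)

    def trace_distance(a, b):
        return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

    def marginal(rho, n, j):
        # reduced density matrix of qubit j (partial trace over the rest)
        out = np.zeros((2, 2), dtype=complex)
        for a in range(2**n):
            for b in range(2**n):
                if (a & ~(1 << j)) == (b & ~(1 << j)):  # agree elsewhere
                    out[(a >> j) & 1, (b >> j) & 1] += rho[a, b]
        return out

    sigma = 0.05                        # noise strength per time unit
    for n in [2, 4, 6, 8]:
        psi = ghz(n)
        rho0 = np.outer(psi, psi.conj())
        rho1 = collective_dephase(rho0, n, sigma)
        print(n,
              trace_distance(rho0, rho1),   # joint state: ~ n^2 sigma^2 / 4
              trace_distance(marginal(rho0, n, 0), marginal(rho1, n, 0)))  # qubit: 0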
23. February 3, 2012 4:20 pm
A computational complexity question
I gave a longish response to Boaz regarding computational complexity issues. It occurs to me that there is a potentially very nice problem that we can pose.
Consider Conjecture 1. It says that we cannot create an (essentially) noiseless qubit, where by “qubit” we refer not just to the “raw” qubits of our quantum computer but also to “protected” qubits created on some sub-Hilbert space. (Note that in nature we cannot single out what the “raw qubits” are.)
Suppose we make only this noisy-qubit assumption:
All qubits defined by our quantum computer are (substantially) noisy.
What will be the remaining computational power? Will such an assumption be enough to push the computational power down to BPP? The question is not fully formal, but it looks to me that it can be formalized. It seems interesting regardless of quantum-computer skepticism.
Actually, Scott asked in his early youth about what he called Sure/Shor separators: quantum states such that if we cannot reach them, we cannot factor. This would not be a Sure/Shor separator in Scott's strict sense, but it has the same spirit.
24. February 3, 2012 5:06 pm
Gil postulates: “All qubits defined by our quantum computer are (substantially) noisy.”
Even in clinical medicine, techniques that (substantially) protect quantum order from noise are routinely practiced (for example, Vasos et al. “Storage of nuclear magnetization as long-lived
singlet order in low magnetic field” (2010)). So this GLL discussion is partly about whether these eminently practicable — even clinical — coherence protection techniques can be scaled (both in
fundamental physical principle and in feasible engineering practice) to extremely high orders of coherence sustained throughout intricate computational dynamics.
Seemingly, this is another case in which the boundaries between (impossible) $\Leftrightarrow$ (infeasible) $\Leftrightarrow$ (practicable) are less mathematically and physically natural than we would like.
□ February 3, 2012 6:12 pm
Dear John,
“Substantial” here is in the computer-science sense, namely greater than a constant, say $10^{-9}$.
QEC allows creating manipulable qubits with error going to 0 as the number of qubits used in the encoding increases. The computational-complexity question is whether imposing at least epsilon noise on every qubit (raw or encoded) reduces the computational power to BPP.
25. February 3, 2012 9:05 pm
Thank you, Gil, for the response. I guess in a blog, and especially in a comment, it's hard to make things fully formal, so I can't say I fully understand this “noisy qubit assumption” (I guess because I'm not completely sure what raw vs. protected qubits are). Still, I think I get the larger picture.
It seems that perhaps several issues are conflated together. There is the big question of whether quantum mechanics can allow any kind of super-polynomial speedup over classical computation, the next question of whether it allows one to build a device capturing the full power of BQP, and the third question of whether that device can be built by performing a gate-by-gate computation combined with quantum error-correcting codes. Your conjectures are first and foremost about the last question, though it may be that, if true, they end up answering the second and first ones. Is that a correct understanding?
What still somewhat confuses me is the following. You could have simply conjectured that for some fundamental reason, the noise rate will always stay above, say, 45% or whatever the number is for the threshold theorem to fail. But you seem to assume that it would be physically possible to reduce the expected number of errors to an arbitrarily low amount, but that somehow the correlations will stay high. If indeed it were possible to reduce the noise to an arbitrarily low level epsilon by spending f(epsilon) resources, do you have a conjecture as to what f(epsilon) is? In particular, by your comment that log depth suffices for factoring, does this mean that even if f is exponential in 1/epsilon, we could still factor n-bit numbers with poly(n) resources?
□ February 4, 2012 4:44 am
Dear All,
Although you consistently ignore all arguments based on physics, let me try again.
Why MUST the threshold theorems be meaningless for real physical systems?
Because the crucial parameter they use is the error per gate, which in all physical models is proportional to the square of the COUPLING to the environment. On the other hand, the only known protection mechanism, valid for classical models as well as for quantum ones (topological models), depends on the TEMPERATURE.
Consider a piece of ferromagnet which encodes a bit in its magnetization (up or down). You can reduce the coupling to the environment a million or a billion times, but as long as the temperature is above the critical Curie temperature, the spontaneous magnetization disappears and the encoded information is lost. The same is true for quantum topological models (e.g., the 4D Kitaev model). So it is not the strength of the coupling that is relevant, but the ratio between processes increasing and decreasing energy, which is given by the Boltzmann factor
$\exp(-E/kT)$.
Obviously, the coupling also cannot be too large, in order to avoid “dressing” processes which could transform the computer's qubits into complicated “polarons” containing some of the environment's degrees of freedom.
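For illustration: detailed balance fixes the ratio of energy-raising to energy-lowering rates at

$\displaystyle \Gamma_{\uparrow}/\Gamma_{\downarrow} = e^{-\Delta E/kT},$

so for a barrier of $\Delta E = 1$ eV at room temperature ($kT \approx 0.025$ eV) the suppression is $e^{-40} \approx 4 \times 10^{-18}$ per attempt, whatever the coupling strength; weakening the coupling rescales both rates but leaves their ratio, and hence the equilibrium stability of the stored bit, unchanged.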
☆ February 4, 2012 9:11 am
Dear Robert,
As a fellow physicist I must take issue with some of the claims in this comment. First of all, as you must be aware, most implementations of quantum computing use systems which are far from thermal equilibrium with the environment (sometimes even internally). The role of fault-tolerance is to keep them from thermalising. The coupling to the environment is crucial, since this is what governs the time scale for thermalisation. The error correction extends the thermalisation timescale to be significantly longer than the computation. You seem to totally neglect this.
The statement that topological models are the only known protection mechanism for both classical and quantum computation is clearly false. You also seem to be implying that topological fault-tolerance is necessarily only passive error correction (by way of the claimed dependence on temperature alone), but this is somewhat misleading, as there are active schemes for error-correction and fault-tolerance based on topological codes.
Lastly, the talk of a critical temperature for passive correction is not an objection to fault-tolerant quantum computation, but rather seems to be an admission that it is possible. The
existence of a critical temperature for passive schemes is analogous to the error threshold in active correction. One expects a phase transition!
☆ February 4, 2012 9:18 am
Let me also quickly add that if all quantum systems thermalised in sub-exponential time (in the size of the system), then we would have a way to solve a much stronger class of problems
efficiently with physical systems: Finding ground states of local Hamiltonians is QMA-complete, and even finding ground states for the Ising model (no transverse field or anything messy)
is NP-complete. Indeed, the adiabatic model *tries* to reach thermal states of a target Hamiltonian quickly.
☆ February 4, 2012 10:55 am
Dear Joe,
Thank you very much; finally, something about physics and not only abstract structures derived from complexity theory.
1) “if all quantum systems thermalised in sub-exponential time (in the size of the system), then we would have a way to solve a much stronger class of problems efficiently with physical
systems. Finding ground states of local Hamiltonians is QMA-complete,”
Firstly, systems thermalize to finite-temperature states and not to ground states (the third law of thermodynamics). To get a ground state you need not only fast relaxation but also fast cooling, which is forbidden by thermodynamics.
Secondly, even if you have a ground state, its measurement in the computational basis is a problem. You do not have FT measurements, certainly not for observables which do not commute with the Hamiltonian. Measurement errors accumulate and give an exponentially small (in the size) probability of the right answer.
2) “The statement that topological models are the only known protection mechanism for both classical and quantum computation is clearly false”
Not only topological, of course; I use examples of quantum topological models because they are simple enough to be rigorously analyzed. However, the mechanism based on free-energy barriers between the states encoding information, barriers growing with the size of the system, seems to be universal. Please show me a single example of another mechanism proved to work.
3) The difference between active and passive protection is not as big as one might think. Imagine a circuit model with all the QEC machinery working simply as a memory. Then it should be equivalent to an open quantum system driven by a periodic Hamiltonian (error-correction rounds). Such systems are very similar to systems with constant Hamiltonians (see my paper with Daniel Lidar and Paolo Zanardi, Phys. Rev. A 73, 052311 (2006)), and therefore the mechanisms of protection should be essentially the same.
4) The fact that the systems are far from equilibrium does not help; the laws of thermodynamics must be obeyed, and in particular the entropy production must be positive.
The probability of deviations from this principle decreases exponentially with the size of the system times the time (“fluctuation theorems”), and hence also does not help. One should remember that information is destroyed not only by large entropy, which can be removed from the system by cooling, but by the always non-negative entropy production. The latter is minus the rate of change of the relative entropy (relative entropy being a measure of state distinguishability). In a non-equilibrium environment (e.g., many baths with different temperatures) the entropy production is larger than in an equilibrium one (a single heat bath) due to transport processes.
□ February 4, 2012 10:09 pm
Your conjectures are first and foremost about the last question, though it may be that, if true, they end up answering the second and first ones. Is that a correct understanding?
Dear Boaz, yes, this is correct. But note that the question about quantum codes extends beyond the gate/qubit model, and also note that the proposed conjectures are not asymptotic, unlike your three formulations. Let me come back to the other interesting question you raised later. (Probably next weekend.)
26. February 4, 2012 11:22 am
I hope Robert Alicki realizes that I have always taken his criticism seriously. In particular, my paper with Ng proving a quantum accuracy threshold theorem for a system coupled to an oscillator
bath at nonzero temperature [Phys. Rev. A 79, 032318 (2009)] was partly motivated by one of Alicki’s objections to previously known threshold theorems.
I have also found Gil Kalai’s skepticism stimulating, and I enjoyed our discussions during Gil’s visit to Caltech last year. Gil’s visit provided the impetus for me to work out some sufficient
conditions for scalable quantum computing, something I had been meaning to do anyway. The results are a bit much to explain in a blog post, but I have posted my notes at
for those who want to see more details.
Gil says that while he is skeptical of quantum computing he is not skeptical of quantum mechanics. Therefore, I presume he takes for granted that a quantum computer (the “system”) and its
environment (the “bath”) can be described by some Hamiltonian, and that the joint evolution of the system and bath is determined by solving the time-dependent Schroedinger equation. If Gil’s
conjectures are true, then, no matter how we engineer the system, the Hamiltonian and/or the initial quantum state of the joint system must impose noise correlations that overwhelm fault-tolerant
quantum protocols as the system scales up.
In this setting, it is interesting to ask what conditions on the initial state and on the Hamiltonian suffice for scalable large scale quantum computing. The main result in my notes is the proof
of a theorem that establishes such a condition. Actually, it extends to a more general setting a result proved in a paper with Aharonov and Kitaev [Phys. Rev. Lett. 96, 050504].
As others have suggested in their comments, the key is that noise acting collectively on many qubits simultaneously should be adequately suppressed. I consider a Hamiltonian coupling the system
to the bath, which includes terms acting collectively on many system qubits. Loosely speaking, the sufficient condition is that the terms in the Hamiltonian that act on k system qubits have an
operator norm that decays exponentially with k, and that for each fixed k, the operator norm decays quickly enough as the qubits separate in space. Under these conditions, correlations in the
noise are weak enough for fault-tolerant quantum computing to succeed.
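In symbols, and only schematically (the notes state the precise condition): writing the system-bath coupling as $H_{SB} = \sum_{k \ge 1} \sum_{|S| = k} H_S$, where $H_S$ acts on the set $S$ of system qubits (and arbitrarily on the bath), the bounds have the shape

$\displaystyle \|H_S\| \le \epsilon_0\, \alpha^{-|S|} f(\mathrm{diam}\, S), \qquad \alpha > 1,$

with $f$ decaying quickly enough in the spatial diameter of $S$; then the effective noise strength per qubit is small enough for the fault-tolerance analysis to go through.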
Of course, I do not mean to suggest that these conditions are necessary. Part of the point of the paper with Ng cited above is that we can still prove a threshold theorem in cases where these
norm conditions on the Hamiltonian are violated, by making additional assumptions about the initial state of the bath. In the theorem proved there, it is a Gaussian (e.g. thermal) state, with
spatial correlations that decay quickly enough.
In any case, Gil believes that Hamiltonian noise models like the one discussed in my notes are not physically realizable; that could well be, though I would like to understand better his reasons
for holding that opinion. And it is a big leap from there to the statement that large-scale quantum computing is impossible.
Skepticism about the feasibility of fault-tolerant quantum computing is reasonable, and welcome; it raises the stakes as the scientific/engineering community strives to extend the reach of
quantum computing. Gil’s conjectures also highlight the importance of attaining firm experimental evidence for quasiparticles obeying nonabelian statistics in laboratory systems, since quantum
states supporting anyons are highly entangled many-particle states similar to the code states of quantum error-correcting codes.
Since I just typed up my (sloppy) handwritten notes yesterday, the typed notes may contain errors and obscurities. I would welcome comments, questions, corrections, and criticisms, though I may
ignore them anyway. (This is my foray into Open Science, in homage to Michael Nielsen.)
□ February 5, 2012 11:48 am
I agree that the issues of correlated noise are subtle and important, and I'm glad that John and his collaborators are addressing them — although I see no particular motivation for the
conjecture that these correlations conspire to prevent large-scale QC, as per my snarky comparison with heavier-than-air flight, which Wim van Dam independently thought of on the Pontiff.
But Gil, your conjectured limit on entanglement (conjecture C) does seem to me to be inconsistent with QM. I am also confused by the fact that some of your conjectures are phrased as “for any
(noisy) quantum computer…” What’s a “noisy quantum computer”? You don’t mean a circuit of polynomial size with the usual model of gate and phase errors, since then we would already know it to
be false.
So what class of physical processes constitute noisy QCs in your definition? How do they differ from general physical systems, which you agree are governed by QM, and which certainly can
become highly entangled? You refer to processes that do or do not “enact fault-tolerance”, but that is just a physical process like any other. So I still don’t understand what your
conjectures really mean physically, or whether they are well-defined.
☆ February 9, 2012 6:31 am
“But Gil, your conjectured limit on entanglement (conjecture C) does seem to me to be inconsistent with QM.”
OK, let's think about it together. What is the difference between:
(*) A state with very high entanglement (according to the measure I use) cannot be created or approximated by a (realistic model of a) noisy quantum computer.
(**) A state which represents a random unitary operator on n qubits cannot be created or approximated by a (realistic model of a) noisy quantum computer.
I think neither (*) nor (**) violates QM.
(Regarding your example, the entanglement measure I use should give something linear or quadratic in n, but tell me the precise state you refer to and I will double-check.)
□ February 5, 2012 11:55 am
Gil, looking again at conjecture C, why can’t I have a bundle of n fiber-optic cables, and send half of a pair of entangled photons down each one? Doesn’t this already give (close to) a
maximally-entangled state on n qubits, violating your conjecture? And isn’t this already feasible with current technology? You can say that photons are hard to store, but then I can use a
Bose-Einstein condensate, etc. etc… What am I missing?
☆ February 5, 2012 12:57 pm
Chris, from both a fundamental math-and-physics point-of-view, and from a practical quantum systems engineering point-of-view, the following superficially similar propositions are in fact
very different:
Case 1: launch entangled photons into low-efficiency detectors, versus
Case 2: launch entangled photons into high-efficiency detectors.
In the context of Case 1, it is an excellent approximation to regard the photon-source currents as physically and informatically independent of the photon-detector currents. This hugely
simplifies the subsequent analysis, which is why the celebrated Feynman Lectures (to cite one textbook of many) embrace the approximation of source-sink independence implicitly.
But in the Case 2 limit of near-perfect detector efficiency, the photon-source currents are (of course) correlated nearly perfectly with photon-detector currents (via Schwinger-style
Maxwell field equations, naturally) — wholly contrary to the physical intuitions that the Feynman Lectures (again as one textbook of many) encourage students to cultivate.
The physical regime of strong source-sink correlations (aka cavity electrodynamics) dominates the physics of (to cite just one example) Aaronson-Arkhipov experiments for computing the
permanent. And here there is no substitute for grappling head-on with exceedingly difficult field-theoretic issues associated to spatial localization, renormalization, and causality (as
Robert Alicki’s GLL posts already have been rightly emphasizing).
Needless to say, it is precisely in these same areas of spatial localization, renormalization, and informatic causality that standard QM encounters significant difficulties (and even
outright pathologies). The point being, that for better or worse (definitely better IMHO), the research strategy of focusing one’s attention upon QC/QIT offers no refuge from these
still-unmet fundamental challenges in mathematics and physics … if anything, these fundamental challenges are sharpened … which is good. :)
Elevator summary: Math-and-physics students in particular should be aware of a Great Truth of QM/QC/QIT, that overmuch regard for the simplified intuitions of even the respected textbooks
(the Feynman Lectures in particular) can subtly yet needlessly ensnare one’s physical and mathematical investigations in a disheartening Groundhog Day.
27. February 4, 2012 1:36 pm
John Preskill notes: “It is interesting to ask what conditions on the initial state and on the Hamiltonian suffice for scalable large scale quantum computing.”
John, I hope you don't mind “fan mail” for both your articles and for your outstanding on-line lecture notes, which are deserving of appreciation and thanks similarly to the work of many other GLL posters. Yet isn't it both mathematically natural and physically illuminating to add to your list “conditions on the state-space geometry”?
Speaking as one of the growing cohort of 21st-century STEM professionals who are (to borrow a phrase from Samuel Clemens) “filled with the easy confidence that geometry inspires,” it may perhaps be the case that a narrow focus upon QC as a mainly algebraic discipline, exclusively concerned with initial conditions, Hamiltonians, and POVMs, comprises a mathematical point-of-view that is sufficiently restrictive (yet unnecessarily restrictive) as to afford little opportunity, for young researchers in particular, to escape our collective “QC Groundhog Day.”
28. February 5, 2012 12:49 pm
No, the burden is not on the QC skeptics. The QC advocates are claiming something that has never been observed, and that is contrary to longstanding understandings about how the world works. The
burden is on the QC advocates.
□ February 5, 2012 2:09 pm
No, if you accept quantum mechanics, the burden is on you to explain why a computer couldn’t be built that takes advantage of the phenomena of superposition, interference, and entanglement
that have been the entire core of quantum mechanics, verified over and over, since 1926.
Believing quantum mechanics but not accepting the possibility of QC is somewhat like believing Newtonian physics but not accepting the possibility of humans traveling to Mars. (“Sure, they’ve
gone to the moon, but Mars? That’s just nutty! It’s never been observed, and is contrary to longstanding understandings about where human beings live.”) You might (or might not) feel blithely
secure that your position won’t be empirically challenged for some time. But if you’re not at least troubled by the tension and thinking about how to resolve it (as Gil is), then your
position is already as feeble as can possibly be.
What is it about the laws of physics that prevents humans from going to Mars? What would go wrong if you tried to send someone there? What is it that could possibly explain why humans can go
to the Moon, but not Mars? We don’t literally have to send someone to Mars before we can ask these questions! And the burden is on you to answer them. Take your time…
☆ February 5, 2012 3:54 pm
Obviously the burden is on those who present such extraordinary claims as: “the larger the system, the more efficiently one can beat decoherence.”
In the present “publish or perish” era, only a very few and rather older guys, like Gil (sorry!) and me, can and want to spend some time searching for holes in the proposed designs of FTQC, as our predecessors did with proposals of perpetuum mobile two centuries ago.
The analogy to space travel is not appropriate. The right question is:
What is it about the laws of physics that prevents the spontaneous creation of a Boeing 747 in a scrap-heap?
☆ February 6, 2012 12:08 am
Robert, I see no reason whatsoever why a QC should be considered as a priori implausible as a 747 being assembled in a scrap-heap. While your previous comments revealed nothing but
contempt for complexity theory, this really is largely a complexity issue: if, for example, QCs would solve NP-complete problems or the halting problem in linear time, then I might also
suspect that something as-yet unknown about physics “must” prevent them from working (though even then, I wouldn’t share your certainty). But factoring? Solving Pell’s equation? The idea
that elegant uses of quantum interference would let you nab those special number-theoretic problems, but NOT NP-complete problems, is weird enough to be true.
☆ February 6, 2012 10:58 am
“While your previous comments revealed nothing but contempt for complexity theory, this really is largely a complexity issue:”
Scott, I think you're going too far here – while the complexity-theory perspective is surely relevant and Robert Alicki's somewhat bitter tone is a little puzzling, I think he made it clear enough that in his view the issue with quantum fault tolerance is of a *thermodynamical* nature and concerns the error models used to prove threshold theorems. Simply saying that BQP is unlikely to be very powerful, so QC doesn't look completely implausible, doesn't address any of the physical issues Alicki raised.
□ February 5, 2012 11:01 pm
I accept quantum mechanics, but not quantum computing. It is just not true that quantum computing follows from the quantum mechanics of 1926. I guess you tried to show that in your paper and
failed. So you claim that somehow the burden is on someone else to show the opposite.
The Mars analogy is ridiculous. The feasibility of a Mars trip is an easy extrapolation from the Moon trip. But there is no demonstrated feasibility of quantum computing.
☆ February 6, 2012 12:30 am
You obviously didn’t read my paper and have no idea what’s in it. I did exactly the opposite of what you say there: I tried and failed to substantiate the skeptical position, by finding
some sort of criterion that would separate the quantum states that are already confirmed experimentally from the ones needed for Shor’s algorithm. If such a separating criterion existed,
it of course would immediately suggest something for the experimentalists to try: namely, see whether or not Nature respects that criterion! On the other hand, if no such separating
criterion exists, then my Moon vs. Mars analogy is a perfectly valid one.
So, can you propose such a separating criterion? If not, then does your inability to explain where the extrapolation from 1926 quantum mechanics to Shor’s algorithm breaks down bother or
trouble you in the least?
☆ February 6, 2012 1:02 am
Scott, you couldn’t prove what you wanted to prove, but you claimed that you were right anyway because you could not prove the opposite. Gil has proposed hypotheses that separate quantum
mechanics from quantum computing. You just don’t accept them.
QC has not gotten to the Moon and is just waiting for Mars funding. QC has not gotten off the ground. It just seems like a lot of wishful thinking to me. I don’t see any evidence for it.
☆ February 6, 2012 7:43 am
“Gil has proposed hypotheses that separate quantum mechanics from quantum computing.”
That's the first thing you've said in this entire discussion that hints at what you might actually believe! So then, do you agree with Gil's hypotheses? Or with some of them but not others?
You’ll notice that Gil calls his hypotheses “conjectures”: not only can he adduce little evidence for them (not surprisingly, given the strangeness of the error models they would
require), they’re not precisely stated, and even if they held in some form, it’s far from clear whether or not they would actually kill QC! If you read Gil’s paper, he’s honest enough to
admit that many of his conjectures seem “merely” to reduce QC to logarithmic-depth—i.e., still enough to implement Shor’s factoring algorithm!
The point of my paper was to explain why separating quantum mechanics from quantum computing is such a hard problem. I’ve said many times that if someone solves that problem, it’ll be the
scientific thrill of my life (and I even accepted Gil’s challenge to put money behind that statement). Certainly, it would be much more exciting than the “mere” building of a quantum
computer, since it would require overturning so much of what scientists (and not just QC researchers, but chemists, nuclear physicists, high-energy physicists, etc., few of whom think
there’s any problem of principle with QC) currently believe about the way quantum mechanics works.
I agree with you that in my analogy, QC hasn’t yet gotten to the moon. I’d say it’s still in the stage of shooting model rockets into the air — i.e., maybe the 1920s or so. Then, as now,
you had skeptics loudly proclaiming the impossibility of the goal—yet while those skeptics considered themselves “conservatives,” they were really radicals, in that (without, of course,
ever admitting it) they rejected what was already understood about the laws of physics. In that case, the time to vindication of the “conservative” viewpoint was ~40 years.
☆ February 6, 2012 9:17 am
Scott asserts: “Not just QC researchers, but chemists, nuclear physicists, high-energy physicists, etc., few of whom think there’s any problem of principle with QC.”
The complement of the preceding Great Truth is something like:
“Scarcely any chemists, nuclear physicists, high-energy physicists, etc., believe there are no problems of principle with QC.”
And this too would be a Great Truth, both in a pragmatic yet important sense (chemists and nuclear physicists seldom compute on linear state-spaces, for the practical reason that there is
little need to do so) and also in a fundamental sense (the struggle to unify relativity with QM has convinced many fundamental physics researchers that both theories are incomplete).
So will we humans ever create practicable computational technologies that require exponential-dimension Hilbert spaces for their accurate operation? That is one of the key points being
discussed here on GLL … and it is still at-issue.
And on that happy day — be it soon or be it distant — that we humans create new theoretical frameworks that unify principles of relativity and QM, will exponential-dimension QC-supporting
Hilbert spaces be part of that description? That too is one of the key points being discussed here on GLL … and it too is still at-issue.
With diverse Great Truths being preached, and numerous practical and fundamental problems increasingly appreciated as being wide open, it’s becoming a terrific era to conceive novel
mathematical tools, conduct ground-breaking scientific research, and launch transformational enterprises in QM / QC / QIT. :)
☆ February 7, 2012 5:56 pm
Scott says that understanding the impossibility of quantum computing is a hard problem. To me it seems easy. QC is implausible and there is no evidence for it.
☆ February 7, 2012 8:49 pm
ROTFL! Sure, it’s easy to understand the impossibility of quantum computing, in exactly the same way it’s easy to understand how the earth can be resting on a giant turtle. The key is not
to ask what the turtle’s standing on, and likewise, not to ask what the flaw is in our current understanding of quantum mechanics that makes QC impossible. All sorts of scientific
problems can be quickly cleared up this way, once we learn to stop asking annoying followup questions and embrace doofosity!
☆ February 7, 2012 10:05 pm
Comparing the impossibility of quantum computing to the earth resting on a turtle is a bad analogy: no large-scale QC machine has ever been built, and no turtle holding up the earth has ever been found. So it's more natural to compare the possibility of quantum computing to the earth resting on a turtle.
I said before that I'm not an expert but a skeptic. I have a question. Let's look at Grover's algorithm. The precision required at each step for the entries of the unitary matrix is $\epsilon = 1/2^{n/2}$. In the real world, how is it possible to get this precision?
I hope you realize that this is not the same thing as 10K photons polarized at 45 degrees, as you talk about in your paper. In that example, the $1/2^{n/2}$ is just something you need for normalization. In the Grover example, the $1/2^{n/2}$ is more than this.
☆ February 7, 2012 10:09 pm
You don’t need this much precision. If you approximate the whole matrix to operator-norm error 0.1 then it’ll work fine.
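To unpack that a bit, here is a rough sanity-check sketch (mine, in NumPy/SciPy; a random Hermitian generator is a generic stand-in for implementation error). Errors add at most linearly in operator norm across the $T \approx (\pi/4)\,2^{n/2}$ iterations, so per-step precision $\delta$ costs at most about $T\delta$ overall; what is needed is polynomially many bits of precision per step, not matrix entries specified to $2^{n/2}$ bits.

    import numpy as np
    from scipy.linalg import expm

    def grover_success(n, delta, seed=0):
        # Grover search on N = 2^n items (marked item 0), with every
        # iteration replaced by a unitary within operator-norm
        # distance ~delta of the ideal one (generic random perturbation).
        rng = np.random.default_rng(seed)
        N = 2**n
        s = np.full(N, 1/np.sqrt(N))                # uniform superposition
        oracle = np.eye(N); oracle[0, 0] = -1       # phase-flip the marked item
        G = (2*np.outer(s, s) - np.eye(N)) @ oracle # one ideal Grover iteration
        T = int(np.pi/4 * np.sqrt(N))               # ~optimal number of iterations
        psi = s.astype(complex)
        for _ in range(T):
            A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
            H = (A + A.conj().T) / 2                # random Hermitian generator
            H *= delta / np.linalg.norm(H, 2)       # ||H||_op = delta
            psi = expm(1j*H) @ (G @ psi)            # perturbed step: ||U - G|| <= delta
        return abs(psi[0])**2                       # probability of the marked item

    for delta in [0.0, 1e-3, 1e-2, 1e-1]:
        print(delta, grover_success(8, delta))      # degrades only like T*delta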
☆ February 7, 2012 10:11 pm
Craig is right. Scott is the one with the goofy turtle proposal. I accept all of the known science, and QC is the wildly speculative proposal. Scott, your position is like asking for proof
that time travel is impossible.
☆ February 7, 2012 10:12 pm
Thanks, Aram.
Wow. That’s a surprise.
☆ February 7, 2012 10:16 pm
I assume that this is the same for Shor's algorithm. If this is the case, then perhaps it is possible to adjust Shor's algorithm so that you could factor in polynomial time on a classical computer? Somehow I don't believe this, but I'm curious what is preventing someone from doing this.
☆ February 7, 2012 10:20 pm
I don't know what your proposed modification is, but if you try it, you might be able to see what goes wrong. Or it'll work, and that'll be great too.
☆ February 7, 2012 10:34 pm
I don't have any proposed modification. It's just that if you have a large matrix of entries with not much precision relative to the size of the matrix, my intuition tells me that you should be able to dramatically reduce the size of the matrix and get a similar result.
But then again, the final state must be an exponential-size vector to be meaningful, so perhaps not.
☆ March 5, 2012 1:02 am
I agree with you. We are in the realm here of rhetoric and semiotics, it seems to me. The physics I learned in grad school (Ph.D. 1979) just isn't connecting to the QC concept. And how is a 3 GHz processor a “classical” device? All the engineering is based on quantum physics of the solid state. We have quantum computers, and indeed it's a quantum world. How about the zero-phonon line? That's “quantum entanglement” of the entire crystal. It happens routinely. The QC is part of the trend started by the Aspect experiment – “parlor trick” physics, as I think of it. That is, experiments to tease out and illustrate the intuitive extremes of QM implications. The sodium doublet just isn't good enough for some people.
But when we come to the point of engineering the terms in a configuration-interaction molecular-orbital calculation, I lose track. I'm just not seeing anything that explains to me how this is proposed to be done.
☆ March 5, 2012 9:04 am
The error of quantum computerology is stated clearly in Scott Aaronson's thesis, http://www.scottaaronson.com/thesis.pdf: He claims, “Either the Extended Church-Turing Thesis is false, or quantum mechanics must be modified, or the factoring problem is solvable in classical polynomial time.”
There is a fourth alternative that he never considers: That quantum mechanics is correct, the Extended Church-Turing Thesis is true, and the factoring problem is not solvable in classical
polynomial time… but that quantum complexity theory gives a bad model of reality. On paper, the cost of maintaining an n qubit machine may be polynomial, but in reality, the cost of
maintaining an n qubit machine should be exponential, since the amount of information that an n qubit machine stores is exponential.
In this world, there are no free lunches. Quantum computerology doesn't take this into account. The fact that quantum computers haven't been built, though many have tried, confirms this.
Also, since this post on this blog, I’ve looked into different interpretations of quantum mechanics. I’ve found them all to be correct in their own way, each illustrating different
aspects of the theory. Most importantly, they each illustrate that QM is counterintuitive, but still not magic. However, quantum computing is magic.
☆ March 5, 2012 12:50 pm
Craig wrote:
On paper, the cost of maintaining an n qubit machine may be polynomial, but in reality, the cost of maintaining an n qubit machine should be exponential, since the amount of
information that an n qubit machine stores is exponential.
If that were true, Craig, that would be an important insight, worthy of a publication venue more glorious than comment #100 on a month-old blog post.
But if you developed this idea further, you’d realize how deeply wrong it is. Or if not, then the referees would tell you.
☆ March 5, 2012 1:44 pm
What would their reasoning be?
☆ March 5, 2012 2:25 pm
Craig, the referees might suggest that your intuitions be “dualized, then formalized.” That is, (1) dualize the intuition by observing that FTQCs can efficiently error-correct quantum noise processes and can efficiently simulate quantum noise processes. Then (2) formalize the intuition with the help of articles like Riera, Gogolin, and Eisert's recent “Thermalization in nature and on a quantum computer” (arXiv:1102.2389).
Then analyze a Kalai-esque question (that now is well-posed): “Is it at all plausible that the set of thermalizing noise mechanisms that FTQCs are known to efficiently error-correct is
partially or wholly disjoint from the set of thermalizing noise mechanisms that FTQCs are known to efficiently simulate?” Or in the opposite sense: “Can noise-free QCs efficiently
simulate FTQCs that are exposed to thermalizing noise?”
Before reading (arXiv:1102.2389v3), it would have been evident (to me) that the likely answers were “no” and “yes” … now it’s not so clear! :)
The dualizing intuition is that the more nearly thermalizing noise approximates a Markovian process, the easier that noise is to error correct, yet the harder its dynamics are to simulate
(although working through the details properly would be an immense amount of labor, needless to say).
In any case, there does exist an accessible body of literature, and an interesting class of well-posed problems, that are reasonably suited to QM/QC/QIT students who are desirous of improving their intuitions and technical skills, and that broadly relate to “the costs of maintaining an $n$-qubit FTQC.”
☆ March 5, 2012 5:06 pm
None of these papers gives any theoretical or experimental reasons for believing that the obstacles to quantum computing can be overcome. Craig's hypothesis might be wrong, but I would
like to see the paper that proves him wrong.
☆ March 6, 2012 10:37 pm
Maybe a better mega-engineering analogy would be the space elevator, as popularized in Arthur C. Clarke's novel. In that case, every objection can be answered, it seems, yet there are good
reasons to believe it will never be built, and never could be built. I also have to think of Archimedes’ boast, “Give me a place to stand and I will move the earth.” We might say, “Give
me a Hadamard gate …”
☆ March 6, 2012 11:14 pm
Aram said, “But if you developed this idea further, you'd realize how deeply wrong it is.”
I would like to know where exactly the idea would go wrong if I tried to develop it further. By the way, I did not originate this idea.
29. February 5, 2012 4:10 pm
The burden is on [QC skeptics] to answer [questions about] the laws of physics. Take your time …
Thank you for this terrific post, Scott! Because thinking about the laws of physics, and applying these laws in service of urgent practical and/or humanitarian challenges, surely is no “burden”,
but rather is a sobering privilege that is intimately united with the great “adventure of our lives.”
Cristopher Moore’s excellent GLL post about photon entanglement offers us one good starting-point (among many). Let us imagine that Alice and Bob stand at opposite ends of a low-loss multi-fiber
Aaronson-Arkhipov interferometer, and they seek to verify the predictions of standard QM at the highest feasible levels of photon entanglement.
Of course, as Alice installs photon sources, and Bob installs photon detectors, each of high-and-higher efficiency, the cavity-QED analysis of their apparatus becomes more-and-more intricate, to
the point that the formerly sharp distinction between photon sources and detectors becomes wholly obscure.
With a view toward simplifying this navel-viewing maze of cavity-QED complexities, Alice and Bob therefore symmetrize their Aaronson-Arkhipov apparatus, by installing at the end of each fiber one
trapped ion that is observed by one continuous Lindblad process, with each trapped ion serving simultaneously as a photon source and a photon sink. Thus each trapped ion ${i}$ continuously
observes current ${j_i(t)}$, and the experimental record is simply the hierarchy of stationary-process correlation functions
$\displaystyle \big\{\,\langle\,j_i(0)\,\rangle, \langle \,j_i(0)\,j_k(\tau)\,\rangle, \langle\,j_i(0)\,j_k(\tau)\,j_n(\tau')\,\rangle,\,...\big\}$
Now, it so happens that Alice and Bob's friend Carl is skilled in the computational art of simulating thermodynamic correlation functions (typically in a biomedical, solid-state, or quantum chemistry context). Moreover, Carl is filled with the blithe confidence that modern geometric (thermo)dynamics inspires, and so Carl offers to bet one pizza that he can simulate Alice and Bob's experimental data with a level of accuracy that is sufficient to pass any feasibly computable statistical validation test that Alice and Bob's experimental data can pass.
“How can you make such a wager?” ask Alice and Bob. “My pizza is reasonably safe …” asserts Carl:
“Both Hamiltonian dynamics and Lindbladian processes naturally pullback onto product state-spaces of any desired rank. Moreover the basis of that product state-space can be chosen so that all
the common conservation laws of physics are satisfied identically and independent of rank, and similarly resonant line-widths can be as narrow as required (independent of rank). Because all
thermodynamic principles and dynamical conservation laws are satisfied identically, and because all mathematical operations are natural, no particular mathematical creativity or physical
insight on my part is required to program the simulation.”
“In fact (aside from choices that depend mainly upon mathematical technique rather than inspiration) my main simulation design choice is the state-space rank that is required to model the correlation functions, and up to the present era no experiment ever conducted, in any laboratory, has required a simulation rank that is more-than-polynomial in the number of sources / detectors.”
“That is why I (Carl) am confident that you (Alice and Bob) are going to end up buying me a pizza!” :)
As a quantum systems engineer, Carl appreciates the simplifying power of the mathematical naturality that is associated to modern geometric thermodynamics and dynamics, and yet Carl is utterly
indifferent to the fundamental question of whether the state-space of Nature really is a Hilbert space: for Carl the main payoff of QM/QC/QIT research is the immensely powerful practical insight
that Nature’s state-space is effectively a non-Hilbert product state-space — for Carl and his fellow engineers, the economic worth and the thrilling enterprises that are associated to this
insight suffice to amply justify all the QM/QC/QIT research ever conducted.
Alice and Bob are left to wonder, however: “Gosh, maybe the state-space of Nature isn't a Hilbert space.” And it is clear that Alice and Bob (and many physics colleagues) will have a busy time of it, investigating this question … and perhaps winning back a pizza from their engineering colleague Carl! :)
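A toy version of Carl's rank wager (my own sketch in plain NumPy, with a generic weakly entangled state standing in for laboratory data): truncating to the top-$r$ Schmidt components already captures such a state to high fidelity at small rank, whereas a maximally entangled state would need the full rank.

    import numpy as np

    def schmidt_truncate(psi, dA, dB, r):
        # keep the top-r Schmidt (singular) components: a rank-r
        # "product state-space" ansatz for the bipartite state
        U, s, Vh = np.linalg.svd(psi.reshape(dA, dB), full_matrices=False)
        M = (U[:, :r] * s[:r]) @ Vh[:r, :]
        return (M / np.linalg.norm(M)).reshape(-1)

    rng = np.random.default_rng(1)
    dA = dB = 2**6
    # a weakly entangled state: a product state plus a small generic admixture
    a = rng.normal(size=dA) + 1j*rng.normal(size=dA)
    b = rng.normal(size=dB) + 1j*rng.normal(size=dB)
    psi = np.kron(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    noise = rng.normal(size=dA*dB) + 1j*rng.normal(size=dA*dB)
    psi = psi + 0.15 * noise / np.linalg.norm(noise)
    psi = psi / np.linalg.norm(psi)

    for r in [1, 2, 4, 8, 16]:
        fid = abs(np.vdot(schmidt_truncate(psi, dA, dB, r), psi))**2
        print(r, round(fid, 4))   # climbs toward 1 already at small rank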
30. February 5, 2012 7:50 pm
Dear all, there are several questions to me by Boaz and Chris which I will be happy to relate to, and also some other comments. This will have to wait for one week. There are several issues we plan to discuss in the next posts, and I have also prepared a list of some further issues that were raised. (But again – next weekend.)
Let me make one “meta” comment, which is simply to repeat what I said here on the blog in June http://rjlipton.wordpress.com/2011/06/24/polls-and-predictions-and-pnp/#comment-12296 , and on several occasions earlier: Overall, I don't think that my work and other skeptical works should change people's a priori beliefs on the feasibility of quantum computers. But I do think that the issues that I research and that we discuss here are interesting and important, and I am very glad to have the opportunity to discuss them.
□ February 6, 2012 4:51 am
One more thing. I am very thankful to John Sidles for his beautiful comments.
31. February 7, 2012 9:47 am
Sorry, but my reply was somehow misplaced, and the separation between questions and answers disappeared. I am very clumsy even with classical computers, so I will try again.
Dear Aram, you have raised a number of very interesting questions, some of them concerning still quite controversial issues. To address them honestly one would have to write a book, but as the discussion becomes more and more interesting and substantial, I will try to sketch my point of view. Sorry for repeating myself, but certain issues are notoriously confusing.
“I believe more in the possibility that interactions affecting a few terms at a time will still eventually drive the system to equilibrium. This is clearly possible with *some* sequence of
two-qubit interactions; consider the quantum Metropolis algorithm of Temme et al.”
Yes, but this is a quantum algorithm performed on a QC and cannot be effectively implemented classically, while classical Metropolis works very well for classical systems. The difference between quantum and classical is that flips of individual spins commute and are good error operators for classical systems. Therefore, the result of sampling does not depend on the order of the applied errors (flips). For quantum systems the error operators do not commute, even for such “almost classical” models as Kitaev's (R.A. et al., J. Phys. A 42, 2008). Therefore, for quantum models we need additional averaging over the permutations in the sequence of errors, which could be done efficiently only on a QC.
BTW, the fact that quantum error operators depend not only on the coupling to the bath but also on the Hamiltonian is appreciated only by a handful of experts. In hundreds of papers and dozens of books the authors simply add a convenient Lindblad generator to an arbitrary Hamiltonian. Some of them also claim that one can arbitrarily “design” Lindblad generators, e.g., to drive the system to the requested state.
“Another possibility is that systems are metastable and take exponentially long to reach their ground state.”
Why should it help FTQC?
“What I think is the more relevant possibility for quantum computing is that everything is far out of equilibrium, and work is continually performed to pump away errors and keep it that way.”
This is a very popular but wrong intuition. Imagine a system coupled to a bath at (almost) zero temperature which is driven to its (almost) ground state. The possibly high initial entropy is reduced to (almost) zero, but nevertheless the system ends in a unique final state, i.e., it does not contain any information about the initial input. What destroys information is not high entropy alone, but the always positive entropy production. Adding a non-equilibrium device like a refrigerator can reduce entropy, but at the price of increased entropy production.
“But I have to take issue with the very last point you made, which is that we needed chaos to tell us that Newtonian dynamics could not produce super-strong analog computers. In fact, if you
posit an infinitesimal but constant noise rate (like 10^-20), then analog computing pretty clearly fails, even without knowing anything about chaos. In other words, even if we had never heard of
chaos, we’d be able to notice our inability to prove a threshold theorem for analog computing that could cope with i.i.d noise.”
What you propose is exactly one of the tests for classical and quantum chaos. If you add a bit of noise to a classical chaotic system, you see that the entropy of the state grows linearly with time, and the slope is practically independent of the noise magnitude. This slope is the Kolmogorov-Sinai entropy! Similar behavior is observed for many examples of quantum systems with chaotic classical counterparts, but also for quantum systems with randomly chosen Hamiltonians (R.A. et al., PRL 77, 1996; J. Phys. A, 2007, and refs therein). So it is quite justified to say that this is a generic behavior.
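A toy numerical version (my illustration; the doubling map, whose KS entropy is $\ln 2$, stands in for a generic chaotic system): the coarse-grained entropy of a noisy ensemble grows by about $\ln 2$ per step, regardless of how small the initial spread is; shrinking the “noise” only delays the onset.

    import numpy as np

    def coarse_entropy(x, bins=4096):
        # Shannon entropy of the ensemble, coarse-grained over `bins` cells
        p, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
        p = p[p > 0] / len(x)
        return -np.sum(p * np.log(p))

    def entropy_history(width, steps=22, npts=200000, seed=0):
        # ensemble spread over `width` (the "noise magnitude"),
        # iterated under the doubling map x -> 2x mod 1 (KS entropy = ln 2)
        rng = np.random.default_rng(seed)
        x = 0.2 + width * rng.random(npts)
        out = []
        for _ in range(steps):
            out.append(coarse_entropy(x))
            x = (2.0 * x) % 1.0
        return np.array(out)

    for w in [1e-3, 1e-6, 1e-9]:
        e = entropy_history(w)
        # in the growth regime the increments are ~ ln 2 = 0.693 per step,
        # independent of w; a smaller w only delays the onset of growth
        print(w, np.round(np.diff(e), 2))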
Again, a popular but completely wrong intuition is that quantum systems are intrinsically more stable than classical ones because the Schroedinger equation is linear, in contrast to the nonlinear Newton equations.
“Are superconductors chaotic? BECs? Lasers? And a classical RAM chip is still described by quantum mechanics: if it is chaotic, how does it store a bit so well?”
Firstly, a few remarks about quantum states, macroscopic quantum states, and classical states, a notoriously confusing issue. Notice that all systems are quantum; only the states can be classified as above, and “classical systems” is a shorthand term for quantum systems in classical states.
1) A one- or few-phonon state of a macroscopic diamond is a quantum state, but not a macroscopic quantum state. One can entangle such states for two diamonds (realized experimentally), but one cannot “entangle diamonds”.
2) A coherent state of 1000 phonons is a classical state.
3) A superposition of such a state with another one with 2000 phonons is a macroscopic quantum state (never realized).
4) Superconducting qubit states are quantum states, similarly to 1). They are superpositions of a ground state and a single-excited-Cooper-pair state (see arXiv:1012.0140; my previous preprints on this topic questioning the quantum character of Josephson qubits were wrong).
5) Superconducting states of macroscopic samples and BEC states are classical states. The corresponding “condensate wave functions” are classical objects (e.g., they satisfy the nonlinear Ginzburg-Landau or Gross-Pitaevskii equations). They cannot be entangled (any more than waves on Lake Ontario can be entangled with waves on Lake Sniardwy), and they are useless for QI/QC. Amazingly, there is a PACS category, 03.75.Gg, “Entanglement and decoherence in Bose-Einstein condensates.”
Large superconductors, lasers, and BECs, which are classical, can display chaos in some range of parameters (some randomly found refs: Xu et al., “Chaos in superconducting tunnel junctions,” J. Appl. Phys. 12, 1995; J. Ohtsubo, Semiconductor Lasers: Stability, Instability and Chaos, Springer 2010; K. Zhang et al., “Hamiltonian chaos in a coupled BEC-optomechanical-cavity system,” PRA 81, 2010). But those systems are also open systems operating in the classical regime. The environment not only selects classical states as the only relatively stable ones, but also adds friction to their dynamics. Friction can suppress chaos and “digitalize” classical systems, which can then occupy discrete local minima of the free energy separated from each other by macroscopic barriers. This is exactly the regime in which all classical computers operate. The price paid for stability is the work necessary to overcome friction, which is dissipated into the system and makes classical gates irreversible. This does not harm classical computations but, I think, is deadly for quantum ones.
□ February 7, 2012 12:50 pm
Robert, your points are interesting. I hope to return to them soon, perhaps in a format less deeply buried in a comment thread.
□ February 7, 2012 2:05 pm
Actually, I can probably express what I want to say concisely here.
I agree with many of your points, but don't think that their conclusions spell doom for FTQC. For example, yes, quantum Kraus operators don't commute, and we have only limited control over the Lindblad terms, but I don't think these are fundamental difficulties.
And yes, a FTQC will have to produce lots of entropy, but it’s ok as long as it dumps it somewhere else. Just like my desktop computer expels a lot of heat via its exhaust fan. Maybe this is
bad for the environment, but the computer still works.
I think your point about friction is crucial. I agree that it is necessary for QECC. But I think “friction” just means error correction, and there the repetition code is only one code among many. Error correction in the Shor code is also a sort of friction, and this kind will protect both amplitude and phase information. While this has never previously been observed in nature, neither had large-scale classical computers until pretty recently, and in both cases the theory is simple enough that we can predict its behavior if we were to build it.
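To make the friction picture concrete in the simplest case, here is a classical Monte Carlo sketch (a toy of mine; it treats only bit flips, which is exactly why the Shor code is needed to protect phase information as well): one syndrome round suppresses i.i.d. flips of rate p to a logical rate of about 3p^2.

    import numpy as np

    def logical_error_rate(p, shots=200000, seed=0):
        # one round of the 3-bit repetition code against i.i.d. bit flips;
        # the syndrome measurement is the "friction": it pumps the error
        # entropy out of the data and into the (discarded) syndrome bits
        rng = np.random.default_rng(seed)
        flips = rng.random((shots, 3)) < p      # which bits got flipped
        s1 = flips[:, 0] ^ flips[:, 1]          # parity check on bits 0,1
        s2 = flips[:, 1] ^ flips[:, 2]          # parity check on bits 1,2
        corr = np.zeros_like(flips)             # correction from the syndrome
        corr[:, 0] = s1 & ~s2
        corr[:, 1] = s1 & s2
        corr[:, 2] = ~s1 & s2
        residual = flips ^ corr                 # always 000 or 111 afterwards
        return residual.all(axis=1).mean()      # 111 = a logical error

    for p in [0.05, 0.1, 0.2]:
        print(p, logical_error_rate(p), 3*p**2 - 2*p**3)  # matches 3p^2 - 2p^3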
☆ February 8, 2012 5:15 am
Dear Aram, I am happy that we were able to reduce a vast “battlefield” to a rather small “boxing ring”.
Two comments:
“For examples, yes, quantum Kraus operators don’t commute, and we have only limited control over the Lindblad terms, but I don’t think these are fundamental difficulties.”
I think it is fundamental. Even very small inconsistencies in the “wishful-thinking” Lindblad or Kraus terms violate thermodynamics (it was noticed a long time ago that the Bloch equation for spin-1/2 with external control did so). These effects accumulate and produce a big “Maxwell demon” which can yield miracles.
“And yes, a FTQC will have to produce lots of entropy, but it’s ok as long as it dumps it somewhere else.”
So, you believe that one can always dump it somewhere else; I believe that the information-bearing degrees of freedom always get their share.
☆ February 8, 2012 9:53 am
Hi Aram, Robert,
I don’t know much about physics, and in particular have no idea what Lindblad terms or Kraus operators are (though they sound like things I wouldn’t want to meet in a dark alley).
I was wondering if it is possible to explain, in a simplified manner to laymen like me, what your argument is. If this is not possible or too much work, then feel free to ignore the question.
In particular, I'm wondering about the following question: suppose that I had a PC with a sealed case and no fan. Then presumably I would only be able to run it for T computational cycles before it shuts down. I imagine, perhaps wrongly, that this T is not an absolute constant, but rather something having to do with the size of the PC, and perhaps the amount of money I paid for it.
So I guess my question is whether in quantum computing also, if I didn’t want a machine that runs forever, but only one that can handle T computational cycles, shouldn’t I be able to
build one by spending f(T) dollars, for some function f()? This is of course a rephrasing of the question I asked Gil before, to which he promised an answer, but perhaps Robert/Aram have
a different answer.
☆ February 8, 2012 10:25 am
Hi Boaz, I don’t know if Aram agrees with me, but my simple explanation is the following:
A quantum computer uses not only the probabilities that certain states are occupied (for example |0> or |1>), but also specifically quantum coherences described by certain complex numbers. Noise influences those probabilities, which finally can lead to a situation where all states are equally probable – the maximum-entropy state. But this increase of entropy is associated with heat exchange with an environment and can be reduced by cooling. Unfortunately, noise also kills the coherences. This is a process which does not change energy but only entropy, and therefore, I think, cannot be reduced by any type of cooling.
So you have to spend an exponential amount of money and buy an exponential number of quantum computers; perhaps one of them will give you a solution to your hard problem.
☆ February 8, 2012 11:44 am
if there are no fresh qubits and you have a constant rate of i.i.d. noise, then you get log depth (which, btw, suffices for Shor’s algorithm). That’s from this paper: http://arxiv.org/abs
I agree with Robert about everything except the part where he says cooling is impossible; I would say cooling is impossible without bringing in fresh low-temperature states, or using up
an initially given supply of these.
☆ February 8, 2012 11:12 am
Robert Alicki,
Do you mean that in order to prevent n qubits from decohering into their maximum-entropy state, where everything is equally probable, one needs on the order of 2^n energy, one unit of energy for each of the 2^n weight-probability-amplitudes?
☆ February 8, 2012 11:28 am
Of course not; I mean something completely different. You can easily suppress the entropy increase due to the change of populations (probabilities), because this entropy is related to energy (heat), but you cannot do anything effective about the entropy increase due to pure decoherence (dephasing). Therefore, you do not have any exponential speedup associated with quantumness (coherences) – for hard problems you need exponential resources.
☆ February 8, 2012 12:13 pm
It is useful to formulate the question mathematically. Then we can focus our attention on which assumptions in the mathematical arguments ought to be questioned.
From a mathematical viewpoint, Robert Alicki’s statement that dephasing cannot be prevented by cooling is incorrect. The quantum accuracy threshold theorems show that if we have access to
fresh qubits and the noise is sufficiently weak and sufficiently weakly correlated, then scalable quantum computing is possible. Cooling is required if we want to reuse qubits because we
have only a limited supply available. Of course, cooling is also needed for fault-tolerant classical computing to work. Therefore, if we want to make a distinction between classical and
quantum fault tolerance, we should take cooling for granted and consider where else the assumptions in the quantum threshold theorem could be wrong.
Robert objects that the noise models assumed in threshold theorems are unphysical. But some of his objections apply only to Markovian noise. There are also threshold theorems that apply
to non-Markovian noise models, and I have not understood why Robert objects to these noise models on fundamental grounds.
In my earlier comment I gave an example: a Hamiltonian noise model such that a threshold theorem can be proven if the operator norms of the terms in the Hamiltonian obey a suitable
criterion. Conceptually, this model has the nice feature that we don't need to assume anything about the initial state of the bath beyond the assumption that it is possible to cool
individual qubits (prepare a qubit in a state close to the standard “spin-up” state). This model may not be realistic, but why is it unacceptable as a matter of principle? It would be
helpful if Robert or Gil could exhibit families of Hamiltonians which allow fault-tolerant classical computing but disallow fault-tolerant quantum computing. Such examples would help to
clarify where the disagreement lies.
Robert also mentions that during the course of a long computation the bath may be driven to a highly adversarial state “which can yield miracles.” We tried to address this worry in the
paper I cited in my earlier comment, where we proved a threshold theorem for Gaussian noise. In that case we assumed a rather special type of noise model, in which the bath is a (linear)
system of harmonic oscillators, and the initial state of the bath is Gaussian (for example a thermal state at some specified temperature). The noise can be completely characterized by the
two-point correlation functions of the bath, and we showed that scalable quantum computing is possible if the correlation functions obey suitable conditions. During the computation the
bath might be driven far from its initial state, but our argument shows that it does not become sufficiently “adversarial” to overcome fault-tolerant protocols.
Here, too, the noise model is not fully realistic, but what is wrong with it as a matter of principle? Is it that we must include nonlinearities in the bath dynamics? Is it that the
initial state of the bath will not be Gaussian? Is it that the correlation functions will not obey the conditions we assumed? Is it something else I am overlooking?
Knowing precisely where the objection lies would make it easier for me to think about the problem. Actually, I tried to show that the threshold proof could be extended to the case where
the bath is weakly nonlinear, but I ran into difficulties and got stuck. Would a result like that impress the skeptics? I have also tried to weaken the conditions on the correlation
functions by making additional physically motivated assumptions, but that approach, too, has not yet succeeded. Is that an important direction to pursue further?
☆ February 8, 2012 1:15 pm
The problem with mathematical formulation is that we do not know exactly how far we can go with idealizations in our models. Just now I am working with my colleagues on models of cooling
using “quantum refrigerators”. One can find very realistic-looking models of systems which can be cooled to 0 K in finite time, which contradicts the third law of thermodynamics. Nature does not
allow such a violation, because those systems are always accompanied by others which cannot be cooled so fast.
The first step which could help us to solve the FTQC mystery is a precise formulation of the procedures involving “fresh qubits” and the related minimal conditions necessary to prove
threshold theorems. One can then try to construct fully Hamiltonian models with realistic environments to test these conditions. “Fresh qubits” are also the part of FTQC which I
cannot accept as granted, but they are more difficult to scrutinize than the assumptions about noise correlations. It seems they are a necessary ingredient also of John’s recent papers.
My second objection to John’s recent models is the “small norm assumption” for the interaction Hamiltonian. Take a natural implementation – dipole–dipole coupling of spins. It can be
indeed very small, but this coupling decays with the distance as 1/R^3. This means that the total norm for the interaction with a spin bath diverges logarithmically with the bath’s radius.
Of course, one can say that the coupling could be screened, but for screening we need other degrees of freedom which produce their own noise, etc., etc.
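A back-of-envelope version of the divergence (my sketch of the claim above; $n$ is an assumed spin density and $a$ a short-distance cutoff): bath spins in a shell at radius $R$ couple with strength $\sim 1/R^3$, and a shell of thickness $dR$ contains $\sim 4\pi n R^2\, dR$ spins, so

$\sum_{\text{bath spins}} \|H_{\mathrm{int}}\| \;\sim\; \int_a^{R_{\max}} \frac{4\pi n R^2}{R^3}\, dR \;=\; 4\pi n \ln\frac{R_{\max}}{a},$

which indeed grows logarithmically with the bath radius $R_{\max}$.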
We do not really know which assumptions are really necessary to protect our models from anomalies violating e.g. the laws of thermodynamics. But we should try to find them!
☆ February 8, 2012 12:53 pm
John Preskill remarks: “The noise can be completely characterized by the two-point correlation functions of the bath.”
This is true, and moreover, these same correlation functions completely characterize the renormalization of the qubit dynamics by the bath reservoir.
As a practical example, to initialize a qubit as “spin-up” it is necessary to initialize (for example) image qubits residing in the walls of the vacuum chamber also as “spin-up”. In
practice, as devices push the boundaries of sensitivity and computational efficiency, it commonly is observed that renormalization effects are order unity, rather than small corrections.
Thus separating the computational state-space from the bath state-space is ubiquitously (in experimental practice) neither as mathematically natural nor as physically distinct as might be
desired, and it is therefore entirely legitimate (AFAICT) to wonder whether fundamental obstructions may be associated to these renormalization mechanisms.
Mathematically speaking, in the context of quantum computing, the familiar fluctuation-dissipation relations and Kubo-Green transport relations are thus realized as
fluctuation-dissipation-transport-entanglement relations, and (again AFAICT) the large-$N$ and low-$T$ scaling behaviors of these (well-posed?) quantum entanglement relations are
(potentially at least) among the topics of interest that Robert Alicki’s posts are highlighting.
Needless to say, whether or not these renormalization relations are associated to fundamental QC obstructions, they definitely are associated to practical QC obstructions.
☆ February 8, 2012 2:35 pm
You are absolutely right; the renormalization problems become even worse when external time-dependent control is included. The same holds for the problem of separating the “physical
dressed qubit” from the “baths”. But this is completely ignored by the majority of the QI community. I understand the local reasons at my University in Gdansk: the last obligatory course on
quantum field theory, including renormalization etc., was delivered by myself about 15 years ago. Later this topic was regarded as too difficult for graduate students. But what about the
leading world’s schools in the USA, UK, etc.?
☆ February 8, 2012 3:59 pm
Robert: you remark that fresh qubits are “part of FTQC which I cannot accept … but … are more difficult to scrutinize than the assumptions about noise correlations.”
It is true that all threshold theorems I know of assume that we can prepare fresh qubits with small, weakly correlated errors. Until this recent exchange, I had not appreciated that you
are objecting to this assumption.
Is your view that no matter how hard we try to prevent it, the fresh qubits will have strongly correlated errors? If so, this seems to be a particular type of correlated noise, so why
is it “more difficult to scrutinize”? And why doesn’t the same claim apply to classical bits? (Or does it?)
☆ February 8, 2012 5:23 pm
Assume we have only one fresh qubit, a spin-1/2, to avoid the problem of correlations. To preserve this spin in an almost pure state at finite temperature we need to keep it in a
strong enough magnetic field. Then, as I understand “quantum cooling”, it is essentially a unitary swap between the fresh qubit and a computer qubit. However, the swap Hamiltonian
commutes neither with the strong magnetic field nor with the computer Hamiltonian. You can try to switch them off for the time of making the swap. If you do it fast, then such an impulse
strongly perturbs the total system. If you do it slowly, then the magnetic field does not protect the fresh qubit for quite a long time. To estimate the final result you need to solve a difficult
problem: the time evolution of an open system in a time-dependent external potential. I can find suitable master equations in the regime of weak coupling to a bath only in two extreme
cases: very slow time dependence or fast enough periodic driving. The first is useless; perhaps the second can model periodic rounds of EC. So this is a new problem for the theory of open
systems, and it would be helpful if “QC people” could explain the whole procedure clearly to allow proper modeling. For classical computers we must cool the macroscopic objects representing
bits – a “cold bath” – from which a refrigerator transports heat to a “hot bath” – the external reservoir. This is not difficult to describe in both quantum and classical formalisms.
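A toy model of the swap step being discussed (my own illustration, not from the thread; the ideal instantaneous swap is exactly the idealization being questioned, since a real swap Hamiltonian fails to commute with the protecting field):

```python
import numpy as np

# "Cooling by swap": a hot computer qubit (nearly maximally mixed) is swapped
# with a fresh cold qubit (nearly pure), transferring the entropy out.
def thermal(p_up):
    return np.diag([p_up, 1.0 - p_up])

hot  = thermal(0.55)   # computer qubit, almost maximally mixed
cold = thermal(0.99)   # fresh qubit, held nearly pure by a strong field

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

rho = np.kron(hot, cold)        # joint state: computer (x) fresh
rho = SWAP @ rho @ SWAP.T       # ideal, instantaneous swap

# partial trace over the fresh qubit -> new computer-qubit state
comp = np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                 [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])
print(np.diag(comp))            # -> [0.99, 0.01]: the computer qubit is now cold
```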
☆ February 8, 2012 6:27 pm
Robert, I agree that the physics of this is complicated. But highly polarized spins *are* something we can create, in a variety of settings. Some of these settings don’t end up being
useful for quantum computing, e.g. electronic states of Rubidium atoms. But their presence suggests that providing fresh nearly-pure input qubits is a (hard) engineering problem rather
than a fundamental physics problem.
☆ February 9, 2012 3:23 am
Aram, you are right, experimentalists can do a lot. But I think it does not help much. Imagine a classical chaotic system. The error grows in time t like A \exp{L t}. Experimentalists
can improve the initial error A, but they cannot do anything about the Lyapunov exponent L (related to the K-S entropy). So they can only logarithmically improve the “lifetime” of analog
computation. Similar behavior is numerically confirmed in chaotic quantum systems – of course not in terms of exponentially diverging trajectories, but in terms of entropy
production (quantum K-S entropy; a possible keyword: “Alicki-Fannes entropy” :)).
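Restating the lifetime claim as a one-line formula (my addition; $\epsilon$ is an assumed error tolerance, not from the comment): if the error grows as $e(t) = A e^{L t}$, the computation fails at

$t_{\mathrm{life}} = \frac{1}{L}\ln\frac{\epsilon}{A}, \qquad t_{\mathrm{life}}(A/k) - t_{\mathrm{life}}(A) = \frac{\ln k}{L},$

so reducing the initial error by a factor $k$ buys only $(\ln k)/L$ extra time: improvements to $A$ enter logarithmically, while $L$ is fixed by the dynamics.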
☆ February 8, 2012 5:20 pm
The way I parse one of the points that Robert Alicki has been making (and with which I agree): renormalization dynamics enforces a well-defined complementarity relation between
mathematical rigor and physical rigor in FTQC. Specifically, fluctuation-dissipation-entanglement theorems ensure that the qubit-reservoir entanglement associated to ground-state
renormalization diverges unphysically in precisely the same Markovian limit in which the FTQC theorems exhibit their greatest power.
The upshot (it seems to me) is that QC skeptics and QC nonskeptics alike are reasonably justified in deploring a general prevalence of hand-waving arguments, and a general lack of
mathematical and physical rigor, specifically in regard to the tightly dovetailed dynamical flows that are associated to localization, renormalization, and decoherence.
32. February 7, 2012 9:02 pm
This post has made me very uncomfortable, because it revealed to me how much I didn’t understand about the subject of quantum computing. I have lots of questions now, starting with how should I
interpret quantum mechanics?
I found this very well written paper from 1996, which seems to give an answer to at least some of my questions: http://arxiv.org/pdf/quant-ph/9605002.pdf
I’ve never seen anything like this before and it’s not even listed in Wikipedia. I found it by listening to this Google talk: http://www.youtube.com/watch?v=dEaecUuEqfc
It’s good to watch the talk before you read the paper, but not required.
33. February 8, 2012 7:13 pm
I think this debate here is a lot like the debate between Creationism/Intelligent Design and Macro-evolutionism.
Macro-evolutionists believe it is correct to extrapolate backwards in time the fact that life is not static but dynamic in order to explain life’s origins, even though the odds of life
originating from nonliving matter and developing into what it is today are so astronomically small.
Quantum-computerologists believe it is correct to extrapolate that because quantum mechanics has been found to be true for simple experiments, it must be true for complicated experiments (quantum
computers), even though the amount of information that a large-scale quantum computer must hold is greater than the estimated number of atoms in the universe.
□ March 7, 2012 2:21 am
Look up Holevo’s Theorem (http://en.wikipedia.org/wiki/Holevo's_theorem). Holevo showed in 1973 that n qubits can represent only n classical bits of information. The “quantum computers hold
an exponential amount of information” meme is wrong.
Yes, an n-qubit quantum state can be in superposition over 2^n basis states. Quantum theory could break down in some magical way. Even classical mechanics could break down. Maybe if you flip
a coin n times, there are only 1000 different possible result strings, regardless of n. Anything is possible. Maybe God will step in and change the laws of physics if we ever get close to
building a quantum computer. He has managed to fool all of the world’s biologists into believing in evolution, so clearly He is very devious indeed.
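For reference, the standard statement behind this (my addition, not part of the comment): if a classical variable $X$ is encoded into $n$-qubit states $\rho_x$ with probabilities $p_x$, then the information extractable by any measurement obeys the Holevo bound

$I(X{:}Y) \;\le\; \chi \;=\; S\!\Big(\sum_x p_x \rho_x\Big) - \sum_x p_x\, S(\rho_x) \;\le\; \log_2 \dim \mathcal{H} \;=\; n \text{ bits},$

which is the precise sense in which $n$ qubits can carry only $n$ retrievable classical bits, despite the $2^n$-dimensional state space.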
☆ March 7, 2012 11:02 am
Holevo’s Theorem uses a different definition of information than I was talking about. My point of view is identical to the point of view given here:
There is an important difference between evolution theory and quantum computing that I didn’t mention: One cannot travel backwards in time to verify whether evolution explains how life
originated. But one can travel forwards in time to verify whether quantum computing will become a reality.
☆ March 7, 2012 11:36 pm
Craig, your arguments, like Goldreich’s, do a good job of identifying why many people don’t like quantum mechanics. Unfortunately for your side, quantum mechanics became the standard
theory of physics after giving the only correct formulation of chemistry, thermodynamics, radiation, electromagnetism, and basically everything else outside of black holes. Many people
have disliked quantum mechanics since it was invented, but no one has been able to change the basic model at all. Worse, any corrections from unifying with gravity are probably going to
leave the low-energy theory pretty much unchanged, just as we still use Newton’s laws despite relativity. So we’re pretty much stuck with exponentially long vectors.
As for what I wrote before, if you bothered to work through your theories, instead of asking people to do the work for you, you would find that any attempt to make Goldreich-style
criticism sensible will fail to give anything coherent. Scott wrote a paper on this, but it’s better to try to work it out for yourself.
Also, you *can* travel into the future to witness evolution.
☆ March 10, 2012 9:09 pm
I don’t have anything against quantum mechanics. I just don’t think quantum computing is feasible. I’ve already worked through my theories – maintaining exponential vectors requires an
exponential cost. It’s that simple. I’m not relying on anybody to do the work for me. I read Scott Aaronson’s paper on Sure states and Shor states. I’m not moved by it.
I think the recent quantum computing frenzy is another example of how people don’t want to be told that something is impossible. It’s similar to the P vs NP problem. This is the only
famous math problem that I am aware of in which there are lots of people claiming P!=NP and lots of papers claiming the exact opposite, P=NP. Why are there so many papers out there
claiming P=NP, when all of the evidence points to P!=NP? Because people don’t want to be told that something is impossible. P!=NP puts a limit on the ability of humans to solve problems,
and most people don’t want to hear this.
☆ March 10, 2012 10:10 pm
There are lots of papers proving P=NP and P!=NP because, with very few exceptions, they are all written by cranks. I would be wary of drawing any general conclusions from that.
Restricting to the set of correct papers, there are more papers with algorithms than papers with lower bounds. Even NP-hardness proofs are basically algorithms. This is because an
algorithm just requires proving that something you built works, but proving a lower bound requires ruling out all possible algorithms, which is something we are not very good at.
Razborov-Rudich give some formal evidence along this line.
If you accept quantum mechanics, then you’ll notice that the Hamiltonian for n 2-level systems has energy scale O(n), and requires 2^n dimensions to describe. End of story. Also, when an
electron orbits a nucleus, what is the “cost”? Who is paying it?
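A concrete illustration of that last point (my own sketch; the Hamiltonian $H = \sum_i Z_i$ is a standard example I chose, not from the comment):

```python
import numpy as np
from functools import reduce

# n spin-1/2 systems: the norm of H = sum_i Z_i grows like O(n), yet the
# matrix representing it lives in a 2^n-dimensional space.
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def z_on_site(i, n):
    """Z acting on site i of an n-spin chain, identity elsewhere."""
    ops = [Z if j == i else I for j in range(n)]
    return reduce(np.kron, ops)

for n in range(1, 7):
    H = sum(z_on_site(i, n) for i in range(n))
    print(f"n={n}: dim={H.shape[0]}, ||H|| = {np.abs(np.linalg.eigvalsh(H)).max()}")
# The dimension doubles at each step (2^n); the norm grows only linearly (= n).
```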
☆ March 10, 2012 10:00 pm
Craig, I agree with all you’ve just said except for one fact: maybe P=NP but in any case we’ll never find the algorithm. Some yet unknown law of physics prevents us from finding the
algorithm and from building a quantum computer as well. In fact we’ll never know whether P=NP. The set of algorithms is infinite but this kind of infinity is of no use in the real world.
We can’t prove theorems like “there exists an algorithm such that …” without actually showing the algorithm. Complexity theory calls for new math and new physics: what common sense tells
us is NOT that P!=NP. It tells us that efficient algorithms are harder to find than inefficient ones.
☆ March 10, 2012 11:17 pm
Aram said, “There are lots of papers proving P=NP and P!=NP because, with very few exceptions, they are all written by cranks. I would be wary of drawing any general conclusions from that.”
The papers are written by all kinds of people. But why don’t people write papers claiming that Fermat’s Last Theorem is false? Because the implications of FLT are meaningless. But the
implications of P=NP are not meaningless; hence, lots of people disagree.
“If you accept quantum mechanics, then you’ll notice that the Hamiltonian for n 2-level systems has energy scale O(n), and requires 2^n dimensions to describe. End of story. Also, when an
electron orbits a nucleus, what is the “cost”? Who is paying it?”
I’m not claiming that exponential quantum states cannot exist in nature, only that most exponential quantum states, including the Shor state and the ones that would be necessary for a
quantum computer to work, require an exponential amount of energy to generate and maintain. As evidence, I point to the fact that there are lots of papers about quantum algorithms, but
still no quantum computers that can run the algorithms.
☆ March 11, 2012 8:57 pm
Aram alludes to the quantum states of electrons in atomic orbitals. I have trouble reconciling the physics and the calculations of atomic states with the seemingly glib presumptions about
the various quantum gates among qubits. If you look at the Wikipedia article on “configuration interaction” you will see mentioned that such calculations are very difficult and
computationally expensive. Here are simple quantum systems which tax our ability to model them mathematically, yet the premise of quantum computing is that a simple quantum system can be
constructed which does certain difficult and expensive calculations for us in an automatic way. I have trouble reconciling this dissonance.
Another vexing question to me is the question of indistinguishability. This is an extremely important part of atomic physics and quantum physics generally, yet I never see it mentioned
whether qubit states are to be bosons or fermions, or how the particle exchanges are to be treated. I thought photon indistinguishability was central to their “entanglement”, since it is
just the symmetric wave function AB + BA which expresses this.
☆ March 11, 2012 9:12 pm
Lewis captures what is beautiful about quantum computing when he says:
I have trouble reconciling the physics and the calculations of atomic states with the seemingly glib presumptions about the various quantum gates among qubits. If you look at the
Wikipedia article on “configuration interaction” you will see mentioned that such calculations are very difficult and computationally expensive. Here are simple quantum systems
which tax our ability to model them mathematically, yet the premise of quantum computing is that a simple quantum system can be constructed which does certain difficult and expensive
calculations for us in an automatic way. I have trouble reconciling this dissonance.
Indeed, it is hard to simulate quantum systems, because in the worst case things are exponential in the number of degrees of freedom, and our methods for taming this exponential dimension
in special cases of interest are a complicated and incomplete patchwork.
And yet Nature does this somehow every second! Fire a proton at a nucleus and Nature will have no problem calculating its scattering cross section. Strongly interacting systems are so
difficult to solve that it’s at the limit of our abilities to even estimate the mass of the proton, and yet when you try to weigh something, Nature never gives you an hourglass symbol or
spinning beach ball.
Feynman’s genius idea in proposing quantum computers is that we can manipulate quantum systems into doing these simulations for us, instead of using our crude classical computing tools.
As a result, we no longer have to pay the exponential price that our classical simulations paid.
Another vexing question to me is the question of indistinguishability.
Different systems have different answers to this. But to see the general idea, consider encoding n qubits into n hydrogen spins. If the protons are well localized at positions $r_1, \dots, r_n$, then the wavefunction looks something like

$\frac{1}{\sqrt{n!}} \sum_{\pi \in S_n} |r_{\pi(1)}\rangle \otimes |s_{\pi(1)}\rangle \otimes \cdots \otimes |r_{\pi(n)}\rangle \otimes |s_{\pi(n)}\rangle,$

where $S_n$ is the group of permutations of $n$ objects, $\otimes$ is the tensor product, and $s_1, \dots, s_n$ are the spin degrees of freedom.
If we can address the spins according to their position, then we can treat this as equivalent to the simpler system

$|s_1\rangle \otimes \cdots \otimes |s_n\rangle.$
People in the QC community usually skip this first step, because in most implementations it’s not that interesting.
☆ March 11, 2012 9:30 pm
Also, Lewis writes:
I thought photon indistinguishability was central to their “entanglement”, since it is just the symmetric wave function AB + BA which expresses this.
This is a slippery issue.
Let me say briefly that it’s certainly not “central” and it is somewhat debatable whether this state is entangled at all. I am not an expert, but here are some papers that discuss this
issue if you want to read more:
In any case, this is generally NOT the sort of entanglement that people propose to use for quantum computing and communication.
☆ March 11, 2012 11:15 pm
Regarding this in Aram’s comment above:
And yet Nature does this somehow every second! Fire a proton at a nucleus and Nature will have no problem calculating its scattering cross section. Strongly interacting systems are so
difficult to solve that it’s at the limit of our abilities to even estimate the mass of the proton, and yet when you try to weigh something, Nature never gives you an hourglass symbol
or spinning beach ball.
Feynman’s genius idea in proposing quantum computers is that we can manipulate quantum systems into doing these simulations for us, instead of using our crude classical computing
tools. As a result, we no longer have to pay the exponential price that our classical simulations paid.
Dick and I approach this issue as one about the primacy of notation. We regard notation as inherently classical, under the part of the poly-time Church-Turing thesis that we regard as simply
true. Put one way, we are disturbed by the idea of an atomic notation for a non-P-time process such as “(QFT x)” or “(Shor x)”, and see some support in papers like this and this noted to
us by Aram, plus this and very recently this by Maarten Van den Nest (along with other work by him which we are having to examine more closely). We certainly believe that Nature does not
“compute with exponentially long vectors”, and we recognize that Dirac notation provides a great lexical savings by representing tensor products as concatenations, but we feel
something besides it is needed to tell the story in the cleanest way.
□ March 7, 2012 9:48 am
Rachel, perhaps a reasonable alternative to conceiving “quantum mechanics breaking down in a magical way” is conceiving “quantum mechanics emerging in a natural way” … and this second
point-of-view induces greater sympathy to Craig’s questions.
It definitely is true that (in Feynman’s phrase) “The fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with
a little mathematical fiddling you can show the relationship.”
That is why a Great Truth of quantum mechanical pedagogy is this: “Students should learn quantum mechanics as early as possible, preferably at the undergraduate level, and thus necessarily
via the simplest physical examples analyzed by most elementary mathematical techniques. Because learning time is precious!”
This being a Great Truth, we know its opposite is a Great Truth too: “Students should acquire broad foundations in mathematics before learning any quantum mechanics, and thus be equipped to
conceive any and all quantum mechanical postulates — with reasonable fluency — equivalently within algebraic, geometric, analytic, and computational frameworks. Because mathematical
naturality is precious!”
Either way, it is essential to learn to fluently translate among many kinds of physics and many kinds of mathematics. For warm-hearted inspiration in this regard, tempered by realistic
appreciation for the immense difficulties of mathematical learning, the sidebar links of Terry Tao’s What’s New weblog are a good resource.
34. February 9, 2012 12:53 am
Dear all, one additional rather important point about Chris’s first remark.
I mentioned my opinion that universal quantum computers that can create (up to negligable errors and using encoding) an arbitrary quantum evolution and arbitrary states described by them,
represent a new physical reality. Chris wrote ” I don’t think this is correct. Quantum computers will simulate quantum systems in exactly the same way that classical computers simulate classical
ones.” Chris is correct, this is excactly like classical computers simulate classical evolutions, but I am also correct that this is an entirely new physical reality.
Transferrng our intuition and experience from classical computers to quantum ones, cannot be taken for granted. In fact, to a large extent, this is the essence of the discussion between Aram and
35. February 9, 2012 4:09 am
Short comment on John’s model:
“Conceptually, this model has the nice feature that we don’t need to assume anything about the initial state of the bath.”
This is exactly what worries me. Classical mechanisms of stability depend on the bath’s temperature, i.e. on the “initial state of the bath”. For temperatures above a certain critical, model-dependent
one, the free-energy barriers disappear and the system cannot encode any stable information. So in some sense John’s model is “too good to be true”.
36. February 10, 2012 4:26 am
All I am going to say is that the noses of many animals are quantum computing devices. They receive information that reliably gives us an idea of what we are smelling and tasting. Quantum
computers use a similar method of defining information (the qubit). If noise is an issue, then rerunning the calculation to determine error values can be a solution. This can take the median of data
charted in software to determine said noise values. Though without doing the math on it, I suspect that error values in a slight vacuum or at lower physical temperatures wouldn’t be hard to overcome.
Most particles run in predictable formations, after all. And the lower the number of particles or the slower the particle movements, the easier they become to predict.
□ February 10, 2012 11:18 am
You could be right. One time, I visited someone’s house and pet their dog. Then I came home to my own house and my dog knew right away that I had pet another dog, as she was sniffing my pants
for a long time.
How did she know?
Probably the smell of the other dog became entangled with my pants, and my dog was able to disentangle it by measuring it with her quantum computer nose.
37. February 20, 2012 6:28 pm
It is the same physical phenomenon which prevents mathematicians from devising quick algorithms to well-known supposedly hard problems, and which also prevents computer engineers from building
quantum machines for solving the same hard problems more quickly. Both attempts are doomed to failure and this is why I believe P=NP. NP-complete problems can’t be solved quickly for reasons that
belong to physics, not to mathematics.
38. February 23, 2012 5:28 am
I am a grad student in Electrical and Computer Engineering and work on quantum circuit synthesis. Most of the time, the cost of synthesizing quantum oracles (for example, the decision oracle in
Grover’s search) is ignored by treating it as a black box – which is not the case for an experimentalist.
Even if QC is possible in theory, using the gate model of QC, the sheer quantum circuit complexity of the decision oracle (without even considering FTQC) becomes quite unmanageable as the size of the
problem increases. Now, theorists don’t care about this. They state things in terms of query complexity – like Grover’s search requires O(sqrt(N)) calls to the quantum oracle as opposed to O(N)
calls to a classical oracle.
The question is: is the complexity of the classical and quantum oracles comparable? In theory, yes; from practical quantum circuit design, NO. The quadratic speedup actually vanishes, and a QC
doing a Grover search would be slower than a classical computer unless it has really high clock frequencies.
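A minimal break-even calculation for the point above (my own illustration; the overhead factors are made-up numbers, not measurements):

```python
# When does Grover's O(sqrt(N)) query count beat a classical O(N) scan,
# once each quantum oracle call is slower than a classical one by a
# multiplicative overhead factor?
def breakeven_N(overhead):
    """Grover wins when overhead * sqrt(N) < N, i.e. N > overhead**2."""
    return overhead ** 2

for overhead in (10, 1e3, 1e6):   # assumed slowdown per oracle call
    print(f"overhead {overhead:g}: quantum wins only for N > {breakeven_N(overhead):g}")
```

The quadratic speedup survives asymptotically, but the crossover point grows with the square of the per-call overhead, which is the practical worry voiced here.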
This is intuition gained from experience and hard to construct as a theory.
( Personally, i am tired of this field. No one in the job market cares about my thesis anyways! Wasted 6 yrs in grad school. moved to parallel programming. )
39. February 23, 2012 5:19 pm
Yogi posts: I am a grad student … and work on quantum circuit synthesis … Personally, I am tired of this field. No one in the job market cares about my thesis anyways! Wasted 6 yrs in grad school.
Yogi, I hope you don’t mind an appreciation of your post that regards it as expressing a “Great Truth” of QM/QC/QIT (that is, a truth whose logical opposite also expresses a Great Truth). The
duality of the Great Truths of science — and their close kin, the “Burning Arrows” of novel math postulates — has been a perennial theme here on GLL.
Because your particular great truth, Yogi, is particularly important to us all, and yet it is a particularly sensitive great truth, perhaps it is worthwhile to make a case for both sides of it.
For raw material, we adopt version 1.0 of the “Quantum Information Science and Technology (QIST) Roadmap” (LANL document LA-UR-02-6900, 2002), which is a document that is familiar to many GLL readers.
Inspired by Yogi’s post, and benefiting too from a decade of experience, what Great Truths can we distill as lessons learned from that 2002 QIST Roadmap?
Great Truth #1: Read as a “Roadmap,” QIST 2002 failed dismally.
Prima facie evidence for Great Truth #1 comes in the form of this timeline (quoted verbatim from the QIST 2002 Roadmap): “By the year 2007: encode a single qubit into the state of a logical qubit
formed from several physical qubits; perform repetitive error correction of the logical qubit; and transfer the state of the logical qubit into the state of another set of physical qubits with
high fidelity. By the year 2012: implement a concatenated quantum error-correcting code.”
It is sobering yet evident – for example, from the discussions here on GLL – that in 2012 there exists no clear roadmap
for achieving the above QIST Roadmap goals by qubit technologies that are known to be scalable.
Still more sobering – especially for career-launching students like Yogi – is that there has been (to my knowledge) no structured review of why the QIST Roadmap’s milestones were so badly missed.
Moreover, no member of QIST’s Technology Experts Panel (TEP) ever comments in this regard on public forums like GLL, The Quantum Pontiff, Shtetl Optimized, the GASARCH/Fortnow weblog (etc.). This
silence is demoralizing for everyone. :(
Moreover, perhaps other researchers have had experiences like the following: a high-ranking ARDA manager challenged me, in an elevator, by asking “Briefly, what have we got for our $500M
investment in QM/QC/QIT?” It was not easy to answer this (literally) “elevator question” … and yet it is an entirely reasonable question.
So perhaps GLL is a venue where such an “elevator answer” might be conceived, valid for both graduate students and program managers (the former being considerably the harder audience!), within
the stimulating context that the Kalai/Harrow debate provides?
Great Truth #2: Read as “Guidance,” QIST 2002 succeeded brilliantly.
This post is getting to be a bit long, so in regard to “Great Truth #2” I will make just one point, with the hope that it will stimulate further comments from GLL readers.
Ten years in retrospect, QIST 2002’s focus on a “Quantum Computing Roadmap” was doubly problematic: (1) quantum computing was too narrow a focus, and (2) roadmaps offer insufficient scope for the
learning-and-adapting that quantum technology development requires.
To be sure, back in 2002 QIST’s roadmap focus was not obviously wrong. Indeed, the semiconductor industry’s International Roadmapping Committee (IRC) had been brilliantly successful.
Nonetheless, nowadays we appreciate that quantum computing, both then and now, is not suited to an IRC-style roadmapping/timelining effort.
Let us suppose, therefore, that now is a good time to articulate a Great Truth #2 that affords Yogi’s generation of students sufficient technological scope and narrative clarity to generate
family-supporting jobs and dignified career opportunities, in the super-abundance that a planet of seven billion people requires. And in service of this ambitious goal, we can embrace
the archetypal GLL “Burning Arrow” strategy of changing even the language in which the new Great Truth is stated.
In light of lessons learned from QIST 2002, it is reasonable to avoid language associated to a “quantum computing roadmap”, and the central spotlight may even have to shift away from “quantum
computing” per se.
Our own UW/QSE group is testing an environment that pulls back the key QIST ideas to a research venue that is associated with a broadened terminology:
Intent $\Leftarrow$ Deliverable
Guidance $\Leftarrow$ Roadmap
Learn-and-Adapt $\Leftarrow$ Timeline
This new, broader terminology is sufficiently adaptive that the wonderful merits of the fundamental quantum research catalyzed by QIST 2002 are appreciated within it, and even the now-traditional goal of achieving practical quantum computing can be pursued within it.
Please let me say that other adaptations are entirely reasonable, for the common-sense reason that quantum research has entered into a period of creative rethinking, and during such eras it is
neither feasible, nor necessary, nor desirable that everyone think alike.
Fortunately, we can all be completely confident that good answers to concerns like Yogi’s can be evolved, and will be evolved, and even now are being evolved. And this is an appropriate unifying
focus here on GLL! :)
□ February 23, 2012 8:35 pm
I think you make some great points about QIST being at once too optimistic and also too narrowly focused on quantum computing, rather than quantum information processing more broadly. I don’t
think the excessive optimism is a problem with the idea of quantum computing so much as the way researchers interact with roadmaps and with funding agencies.
40. March 5, 2012 4:35 pm
> “If blogs like GLL were around centuries ago there might have been a more penetrating discussion than even the Royal Society could foster.”
One thing I can say in favor of quantum computing is, in 1670 the idea of a blog like GLL would have sounded even more magical than quantum computers do today. :)
□ March 8, 2012 6:48 pm
I just read Feynman’s 1985 Optics News article on Quantum Mechanical Computers, and the concept he discusses, which he passes off as “entertaining nonsense”, is very quotidian compared to the
post-Shor’s-theorem stuff on the table today. There the “bits” are simply “written on atoms” and controlled by reversible quantum operations in strict analogy to existing binary computer
designs. He considers the entire machine to evolve according to some hypothetical Hamiltonian. What I’m saying is that you might as well say 1985 as 1670 when it comes to the magicality of
the current conceptions.
☆ March 8, 2012 7:29 pm
If I had the money, I’d be ready to offer $1,000,000 to the first builders of a true quantum computer!
41. April 29, 2012 12:15 pm
There are some new posts on Scott Aaronson’s blog. Do you care to participate?
□ April 29, 2012 12:48 pm
My reply is here.
□ May 3, 2012 7:39 am
Scott Aaronson: Let me try to summarize where things stand and ask you something very important.
In February 2012, Scott Aaronson, MIT specialist on quantum computing, publicly offered a $100,000 prize to anyone who could prove, to him, that a scalable quantum computer would be impossible
to build.
J. Especial accepted this challenge and presented a recently published article on Bell inequalities.
Scott, Especial, me and other scientists collaborated in this two-day discussion, asking questions and posting opinions.
Finally I asked Scott if he agreed that without entanglement it would not be possible to build a quantum computer. And he said yes!
And now my important question for Scott is:
Considering that you, or someone you trust, need time to review Especial’s article in detail – and supposing you come out of that process agreeing with the conclusions of the article, which are
that there is no experimental evidence to this date, for you or anyone else, to claim that entanglement is a real phenomenon – is this sufficient for you?
Do you want to suggest names of someone you trust – EPR specialists, mathematicians, physicists – and continue this discussion on more solid scientific ground?
Let’s do it together. It is not about the money. It is about the Prize.
I do think there is a win-win situation for you, me, Especial and the whole world.
You said : (I took all the money parts out of your phrase because that is not the point now)
“The reason I made my [...] bet was to draw attention to the case that quantum computing skeptics have yet to offer. If quantum computing really does turn out to be impossible for some
fundamental reason, then [....] , I’ll be absolutely thrilled. Indeed, I’ll want to participate myself in one of the greatest revolutions in physics of all time, a revolution that finally
overturns almost a century of understanding of quantum mechanics.”
And that was the reason I approached you.
With your help we can draw the attention of the physics community to this unfortunate mistake, which has huge implications for our future and our children’s future.
Waiting for your reply
Teresa Mendes (on behalf of Teresa Mendes)
☆ May 4, 2012 1:41 pm
There is evidence of entanglement. It’s arguably indirect evidence. But when you add it all up it’s overwhelming.
☆ May 4, 2012 2:21 pm
Yes, there is overwhelming evidence of entanglement, but not the sort of entanglement that would be needed for a qubit computation.
☆ December 17, 2012 5:31 pm
Well here’s what old Musatov says, “It’s better to be indirect than outdirect, or worse without directive and bijective subjunctive looking for the lost div of the house of Israel”
42. May 6, 2012 4:23 pm
Aram: A large number of inconclusive experimental results does not become
conclusive evidence, no matter how large that number may be. It’s a matter of
quality, not of quantity.
Bell’s theorem directly applies only to ideal EPRB experiments. Since real
experiments to this date have never been ideal, it is incorrect to directly
apply Bell inequalities valid only for ideal experiments to these non-ideal
experiments.
Bell’s theorem needs to be generalized to non-ideal conditions, using some
additional hypothesis (e.g. “fair-sampling detection” when the non-ideality
in question is inefficient detection), and only then can the generalized Bell
inequalities be used to test the experimental results against the conjunction
of local realism with the additional hypothesis.
The problem is that the generalizations made by Clauser and Horne in 1974
and by Garuccio and Rapisarda in 1981 are wrong, and in consequence the
conclusion that has been drawn from the experimental evidence using them –
namely that the conjunction of local realism with fair-sampling detection
has been rejected – is also wrong.
The correct generalized Bell inequalities have just been published by
J. Especial (http://aflb.ensmp.fr/AFLB-371/aflb371m746.pdf), and when these are
used to interpret the experimental results it is found that all published
experimental evidence is indeed compatible with the conjunction of
local realism and fair-sampling detection.
This is why the situation has very recently changed.
43. May 30, 2012 1:20 pm
I systematically referred to Cris as Chris. I apologize for the mistake. –Gil
| {"url":"http://rjlipton.wordpress.com/2012/01/30/perpetual-motion-of-the-21st-century/","timestamp":"2014-04-18T05:32:02Z","content_type":null,"content_length":"484870","record_id":"<urn:uuid:8c89f366-ab0a-4e79-b932-57daf5b5944a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
6.4: Hey Ortho! What’s your Altitude?
Created by: CK-12
This activity is intended to supplement Geometry, Chapter 5, Lesson 4.
Problem 1 – Exploring the Altitude of a Triangle
1. Define Altitude of a Triangle.
Draw the altitudes of the triangles in the Cabri Jr. files ACUTE, OBTUSE, and RIGHT and then sketch the altitudes on the triangles below. To do this, start the Cabri Jr. application by pressing APPS
and selecting CabriJr. Open the file ACUTE by pressing $Y=$, selecting Open..., and selecting the file. Construct the altitude of $\triangle{ABC}$ by pressing ZOOM, selecting Perp., clicking on the side of the triangle, and
then clicking on the opposite vertex. Repeat for the files OBTUSE and RIGHT.
2. Draw the altitudes for $\triangle{ABC}$, $\triangle{DEF}$, and $\triangle{GHJ}$.
3. Fill in the blanks of the following statements about whether the altitude of a triangle is inside, outside, or on a side of the triangle.
a. For the acute $\triangle{ABC}$, the altitude from vertex $B$ is __________ the triangle.
b. For the obtuse $\triangle{DEF}$, the altitude from vertex $E$ is __________ the triangle.
c. For the right $\triangle{GHJ}$, the altitude from vertex $H$ is __________ the triangle.
Problem 2 – Exploring the Orthocenter
Open the file TRIANGLE. You are given $\triangle{ABC}$ with the altitudes from all three vertices constructed.
4. What do you notice about the altitudes of all three vertices?
5. The point of concurrency of the altitudes is the orthocenter. Create and label this point $R$. Drag vertex $B$ and observe where the orthocenter lies for different shapes of $\triangle{ABC}$. Where is the orthocenter when $ABC$ is acute?
6. Can you move vertex $B$ so that the orthocenter is outside $\triangle{ABC}$? If so, what type of triangle is $ABC$?
7. Can you move vertex $B$ so that the orthocenter is on a side of $\triangle{ABC}$? If so, what type of triangle is $ABC$?
Problem 3 – Exploring the Altitude of an Equilateral Triangle
Open the file EQUILATE. You are given an equilateral triangle $ABC$ with altitude $\overline{BD}$ and a point $P$ in its interior. Find the length of $\overline{BD}$ using the Length tool, found by pressing GRAPH and selecting Measure > D. & Length. Also, find the lengths of $\overline{EP}$, $\overline{FP}$, and $\overline{GP}$, the distances from $P$ to the three sides.
8. Use the Calculate tool to calculate $EP + FP + GP$. Move point $P$ (and vertex $A$, if you like) to four different positions and record the measurements in the table below.
Position | $1^{st}$ | $2^{nd}$ | $3^{rd}$ | $4^{th}$
$BD$ | | | |
$EP + FP + GP$ | | | |
9. What is the relationship between the measurements of $BD$ and $EP + FP + GP$?
10. Complete the following statement: The sum of the distances from any point in the interior of an equilateral triangle to the sides of the triangle is ________________.
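An editorial note on Questions 9 and 10 (this derivation is an addition, not part of the original activity, so look away if you want to discover it yourself): split the triangle into three triangles with common apex $P$. If the equilateral triangle has side length $a$ and altitude $h = BD$, then

$\text{Area}(ABC) = \tfrac{1}{2}a\,EP + \tfrac{1}{2}a\,FP + \tfrac{1}{2}a\,GP = \tfrac{1}{2}a\,h,$

so $EP + FP + GP = h$ for every interior point $P$ (Viviani's theorem).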
Problem 4 – Exploring the Orthocenter of a Medial Triangle
The medial triangle is the triangle formed by connecting the midpoints of the sides of a triangle.
Open the file MEDIAL2. You are given a triangle, its medial triangle, and the orthocenter of the medial triangle.
11. Which triangle center (centroid, circumcenter, incenter, or orthocenter) of $\triangle{ABC}$ coincides with the orthocenter $O$ of the medial triangle $\triangle{DEF}$?
| {"url":"http://www.ck12.org/book/Texas-Instruments-Geometry-Student-Edition/r1/section/6.4/","timestamp":"2014-04-19T21:20:03Z","content_type":null,"content_length":"112216","record_id":"<urn:uuid:dba24ec9-3c3d-4688-a27b-4209dbadeb36>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Abstracts - grouped by symposium
Abstracts are displayed on the website for information only and are not to be considered a published document. Some inconsistencies in display of fonts may occur in some web browser configurations.
Abstracts will appear on the website within 10 working days of the date of submission.
16th Canadian Symposium on Fluid Dynamics
Applications of Invariant Theory to Differential Geometry
Classical Analysis in honour of David Borwein's 80th Birthday
General Topology and Topological Algebra
Graphs, Games and the Web
Hopf Algebras and Related Topics
Nonlinear Dynamics in Biology and Medicine
Numerical Algorithms for Differential Equations and Dynamical Systems
Qualitative Behaviour and Controllability of Partial Differential Equations
Contributed Papers Session | {"url":"http://cms.math.ca/Events/summer04/abs/talks.html?nomenu=1","timestamp":"2014-04-18T18:14:05Z","content_type":null,"content_length":"36289","record_id":"<urn:uuid:42797ddb-4177-4632-aaf7-44b09efc703f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to read a graph.
1. The problem statement, all variables and given/known data
The black curve represents the position of a spot on the x-axis projection of a spinning wheel.
a) Which curve represents the velocity of that spot? blue
b) What is the angular speed of the wheel?
c) What is the tangential speed of the wheel, assuming the full radius of the wheel is represented in this projection.
d) Does the x-projection have an angular acceleration at any time on the graph? Explain.
3. The attempt at a solution
I have no idea how to read this graph. The only thing I can guess is that a) would be the blue line? Please help!
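One way in (an editorial sketch; the graph itself is not reproduced here, so the sinusoidal form is an assumption): for uniform rotation the x-projection is

$x(t) = R\cos(\omega t), \qquad v(t) = \frac{dx}{dt} = -R\,\omega\sin(\omega t),$

so the velocity curve is a sinusoid shifted a quarter period from the position curve, with amplitude $R\omega$. Read the period $T$ off the position curve to get the angular speed $\omega = 2\pi/T$; the tangential speed is $v_t = R\omega$, with $R$ the amplitude of the position curve; and for uniform rotation $\omega$ is constant, so the angular acceleration is zero throughout.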
1. The problem statement, all variables and given/known data 2. Relevant equations 3. The attempt at a solution | {"url":"http://www.physicsforums.com/showthread.php?t=583888","timestamp":"2014-04-18T00:30:52Z","content_type":null,"content_length":"43311","record_id":"<urn:uuid:4b010be1-77b9-468e-b73e-4c25e3567992>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bank Robbery
October 20th 2008, 01:31 PM
Two bank robbers, Ron and Jon, have managed to steal 42 crates of gold and silver bars from a bank, 392 of the bars they stole being gold.
Of the 42 crates, there are only two distinctive types regarding the amount and total worth of the bars that each crate contains. One type of crate contains 11 bars and is worth £140k, the other
contains 20 bars and is worth £280k.
Jon, being the brains behind the operation, was elected to calculate how the takings would be divided between him and Ron, and decided that he would take all of the crates that contain 20 bars
and are worth £280k, leaving the rest to Ron.
Ron isn't very happy that he's been left with the crates that are individually worth half as much as Jon's. Given that each gold bar is worth £15k and each silver bar is worth £10k, can you help
Ron work out how much more, in material value, Jon has deviously given himself than Ron?
October 20th 2008, 07:06 PM
Hello, Obsidantion!
Two bank robbers, Ron and Jon, have managed to steal 42 crates
of gold and silver bars from a bank, 392 of the bars they stole being gold.
Among the 42 crates, there are two distinctive types.
One type of crate contains 11 bars and is worth £140k;
the other contains 20 bars and is worth £280k.
Jon was elected to calculate how the takings would be divided between them,
and decided that he would take all of the crates that contained 20 bars
and leave the rest to Ron.
Ron is unhappy that he's left with crates worth half as much as Jon's.
Given that each gold bar is worth £15k and each silver bar is worth £10k,
work out how much more Jon has given himself than Ron.
Small crates: 11 bars worth £140k
Let $x$ = number of gold bars, so $11-x$ = number of silver bars.
$x$ gold bars at £15k each are worth £$15x$k;
$11-x$ silver bars at £10k each are worth £$10(11-x)$k.
The total value is £140k: . $15x + 10(11-x) \:=\:140$
Solve for $x\!:\;\;x \:=\:6 \quad\Rightarrow\quad 11-x \:=\:5$
$\boxed{\text{Small crate: }\begin{array}{c}\text{6 gold bars} \\ \text{5 silver bars} \end{array}}$
Large crates: 20 bars worth £280k
Let $x$ = number of gold bars, so $20-x$ = number of silver bars.
$x$ gold bars at £15k each are worth £$15x$k;
$20-x$ silver bars at £10k each are worth £$10(20-x)$k.
The total value is £280k: . $15x + 10(20-x) \:=\:280$
Solve for $x\!:\;\;x \:=\:16 \quad\Rightarrow\quad 20-x \:=\:4$
$\boxed{\text{Large crate: }\begin{array}{c}\text{16 gold bars} \\ \text{4 silver bars} \end{array}}$
Let $S$ = number of small crates, so $42-S$ = number of large crates.
$S$ small crates with 6 gold bars each: . $6S$ gold bars.
$42-S$ large crates with 16 gold bars each: . $16(42-S)$ gold bars.
The total number of gold bars is 392: . $6S + 16(42-S) \:=\:392$
Solve for $S\!:\;\;S \:=\:28 \quad\Rightarrow\quad 42-S \:=\:14$
Hence: . $\boxed{\begin{array}{c}\text{28 small crates}\\ \text{14 large crates} \end{array}}$
Jon took the 14 large crates worth £280k each.
. . His share is: . $14 \times \pounds 280\text{k} \:=\: \pounds 3920\text{k}$
Ron took the 28 small crates worth £140k each.
. . His share is: . $28 \times \pounds 140\text{k} \:=\: \pounds 3920\text{k}$
They shared the loot equally – Jon has given himself exactly £0 more than Ron!
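A quick check of the arithmetic (my own verification script, not part of the original thread; values are in thousands of pounds):

```python
GOLD, SILVER = 15, 10

# bars per crate type, solved above: (gold, silver)
small = (6, 5)    # 11 bars worth 140k
large = (16, 4)   # 20 bars worth 280k

assert small[0]*GOLD + small[1]*SILVER == 140 and sum(small) == 11
assert large[0]*GOLD + large[1]*SILVER == 280 and sum(large) == 20

n_small, n_large = 28, 14
assert n_small + n_large == 42
assert n_small*small[0] + n_large*large[0] == 392   # total gold bars

ron, jon = n_small * 140, n_large * 280
print(ron, jon, jon - ron)   # -> 3920 3920 0: Jon took exactly £0 more
```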
| {"url":"http://mathhelpforum.com/math-challenge-problems/54772-bank-robbery-print.html","timestamp":"2014-04-20T14:29:32Z","content_type":null,"content_length":"13599","record_id":"<urn:uuid:65dfe207-ac9b-48ee-a767-6bc50a1ecd9b>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Fei-tsen Liang
Boundary Regularity for Capillary Surfaces
For solutions of capillarity problems with the boundary contact angle being bounded away from $0$ and $\pi$ and the mean curvature being bounded from above and below, we show the Lipschitz continuity
of a solution up to the boundary locally in any neighborhood in which the solution is bounded and $\partial\Omega$ is $C^2$; the Lipschitz norm is determined completely by the upper bound of $|\cos\
theta|$, together with the lower and upper bounds of $H$, the upper bound of the absolute value of the principal curvatures of $\partial\Omega$ and the dimension $n$.
Capillary surface, boundary regularity.
MSC 2000: 35J60, 53A10 | {"url":"http://www.emis.de/journals/GMJ/vol12/12-2-9.htm","timestamp":"2014-04-18T10:36:35Z","content_type":null,"content_length":"1173","record_id":"<urn:uuid:be58c4b2-6380-4814-8d3d-2277fb57daa5>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
we consider real-valued continuous functions defined on some open interval … of ℝ. We are only interested in the local behaviour of these functions … ∴ identify two functions if they agree on
some (possibly very small) open interval…. This identification defines an equivalence relation, and the equivalence classes are the “germs of real-valued continuous functions at 0”. These germs
can be added and multiplied and form a commutative ring.
(Of course 0 is just an arbitrary centre—but hey, why not 0?)
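Stated formally (my restatement, not part of the quoted text): two continuous functions $f, g$ defined near $0$ have the same germ at $0$ iff

$\exists\, \varepsilon > 0 : f(x) = g(x) \text{ for all } x \in (-\varepsilon, \varepsilon),$

and a germ is an equivalence class under this relation. Addition and multiplication are defined on representatives; this is well defined because sums and products of equivalent functions agree on the intersection of the two small intervals.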
I feel like this is something I imagined, certainly not in this level of specificity, but at least wished for at some point in the past. Why should I be multiplying numbers like 13 and 27,714 when
things in life are so less precise?
But we can still get the general idea of 13 and 27,714 without being so sticklerish about it. And no, the germ doesn’t need to be defined on the frazzwangled continuum, it could be done on other
topologies as well. (Wikipedia gets into it.)
Here’s the only picture I could find of a germ online (the crayon splotches are my addition).
http://www.cs.bham.ac.uk/~sjv/GeoFuzzy.pdf :
Yes, it’s no wonder this Grothendieck stuff is called an impressionistic mathematics. | {"url":"http://isomorphismes.tumblr.com/tagged/generalised+number","timestamp":"2014-04-21T09:35:55Z","content_type":null,"content_length":"227902","record_id":"<urn:uuid:15e4f5c6-13ee-4a12-a97b-f209aa0baa79>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lipschitz condition on the first derivative of a function?
If the derivative of a function is Lipschitz, does it mean that the function itself is also Lipschitz? Any proof of that?
No. $f(x)=x^2$ on the whole real line is not Lipschitz, but its derivative $f'(x)=2x$ is. However, if the function is defined on a bounded interval, then the statement is true: if $f'$ is Lipschitz on a bounded interval, then it is bounded there, and a function with bounded derivative is Lipschitz.
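A one-line justification of the last step (my addition): for $x, y$ in the bounded interval $I$, the mean value theorem gives

$|f(x) - f(y)| = |f'(\xi)|\,|x - y| \le M\,|x - y|, \qquad M := \sup_I |f'|,$

and $M$ is finite because the Lipschitz condition on $f'$ gives $|f'(x)| \le |f'(x_0)| + L\,|x - x_0| \le |f'(x_0)| + L\,\mathrm{diam}(I)$ for any fixed $x_0 \in I$.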
| {"url":"http://mathoverflow.net/questions/107715/lipschitz-condition-on-the-first-derivative-of-a-function","timestamp":"2014-04-19T12:31:13Z","content_type":null,"content_length":"45467","record_id":"<urn:uuid:0a18aec5-d195-4dde-a423-dd7d8820f378>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Sketching trigonometric graphs and manipulating expressions
June 5th 2006, 05:25 PM #1
Hi there, I really need some help with the following:
1. a) Sketch the graph of f(x) = sec x for -3π ≤ x ≤ 3π
b) Write down an expression which explains the relationship between sec x and cos x
b) sec x = 1/cos x
sec x and cos x are reciprocals of each other.
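For part a) (an editorial note, not from the original reply): $\sec x$ has period $2\pi$ and satisfies $|\sec x| \ge 1$, with vertical asymptotes wherever $\cos x = 0$, i.e. at $x = \frac{\pi}{2} + k\pi$. On $-3\pi \le x \le 3\pi$ this gives asymptotes at $x = \pm\frac{\pi}{2}, \pm\frac{3\pi}{2}, \pm\frac{5\pi}{2}$, with branches opening upward (minimum value $1$) where $\cos x > 0$ and downward (maximum value $-1$) where $\cos x < 0$.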
| {"url":"http://mathhelpforum.com/trigonometry/3277-sketching-trigonometric-graphs-manipulating-expressions.html","timestamp":"2014-04-17T11:57:00Z","content_type":null,"content_length":"36874","record_id":"<urn:uuid:25d5bc96-a474-4c7d-8103-6dd9c491c93d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 2000(2000), No. 21, pp. 1-17.
Colombeau's theory and shock wave solutions for systems of PDEs
F. Villarreal Abstract:
In this article we study the existence of shock wave solutions for systems of partial differential equations of hydrodynamics with viscosity in one space dimension in the context of Colombeau's
theory of generalized functions. This study uses the equality in the strict sense and the association of generalized functions (that is the weak equality). The shock wave solutions are given in terms
of generalized functions that have the classical Heaviside step function as macroscopic aspect. This means that solutions are sought in the form of sequences of regularizations to the Heaviside
function that have to satisfy part of the equations in the strict sense and part of the equations in the sense of association.
Submitted January 13, 2000. Published March 12, 2000.
Math Subject Classifications: 46F99, 35G20.
Key Words: Shock wave solution, Generalized function, Distribution.
Show me the PDF file (210K), TEX file, and other files for this article.
Francisco Villarreal
Departamento de Matematica
FEIS-UNESP
15385-000, Ilha Solteira, Sao Paulo, Brazil
e-mail: villa@fqm.feis.unesp.br
| {"url":"http://ejde.math.txstate.edu/Volumes/2000/21/abstr.html","timestamp":"2014-04-17T21:36:57Z","content_type":null,"content_length":"1964","record_id":"<urn:uuid:14d488fe-a378-4120-bf7b-733c69079b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Neil Calkin's Homepage
I am a professor at Clemson University, in the Department of Mathematical Sciences, in the Algebra and Combinatorics group.
I am currently teaching the following courses:
MthSc 855: MWF 12:20-1:10, E 005
MthSc 311: MWF 8:00-8:50, M 302
MthSc 382 (Junior Honors seminar): Thursday 9:30-10:20, M 103
MthSc 481 (Putnam preparation seminar): MW 3:35-4:45, E 104
I am running a seminar to prepare students for the Putnam Mathematics Competition
MthSc 481 (Putnam Seminar): TBA
My interests are in combinatorial and probabilistic methods, particularly in number theory. Some of my papers are available here.
Contact Information
Office: O-115 Martin Hall: office hours MWF 9:00-10:00, 11:30-12:30
864-656-3437 (Office)
864-646-9973 (Home)
864-650-4579 (Cell)
Department phone lines:
Voice: (864) 656-3434
FAX: (864) 656-5230
Electronic Mail: calkin@math.clemson.edu
Snail Mail (Office):
Neil Calkin,
Department of Mathematical Sciences
Martin Hall
Box 341907
Clemson, SC 29634-1907
Snail Mail (Home):
Neil Calkin,
186, E. Main St,
Pendleton, SC 29670 | {"url":"http://www.math.clemson.edu/~calkin/","timestamp":"2014-04-18T05:29:50Z","content_type":null,"content_length":"2727","record_id":"<urn:uuid:86a3ee9e-5794-4562-ba4e-f7f6b9339a12>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relays Used In The Construction Of Electric Circuits ... | Chegg.com
Relays used in the construction of electric circuits function properly with probability .9. Here is a diagram: http://mathrelay.blogspot.com/ Assuming that the relays operate independently, which of the following circuit designs yields the higher probability that current will flow when the relays are activated? Referring to circuit B, if we know that current is flowing, what is the probability that switches 1 and 4 are functioning properly?
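Since the linked diagram is not reproduced here, the sketch below assumes the standard textbook layout — circuit A: two parallel branches, each of two relays in series; circuit B: two series stages, each of two relays in parallel — and computes everything by brute-force enumeration of relay states (swap in your own flow predicates if the diagram differs):

from itertools import product

p = 0.9  # probability a single relay works

def flows_A(s):  # assumed: two parallel branches, each two relays in series
    return (s[0] and s[1]) or (s[2] and s[3])

def flows_B(s):  # assumed: two series stages, each two relays in parallel
    return (s[0] or s[1]) and (s[2] or s[3])

def prob(event):
    # sum the probability of every relay-state combination satisfying the event
    total = 0.0
    for s in product([True, False], repeat=4):
        pr = 1.0
        for works in s:
            pr *= p if works else (1 - p)
        if event(s):
            total += pr
    return total

pA, pB = prob(flows_A), prob(flows_B)
print(f"P(flow | A) = {pA:.4f}, P(flow | B) = {pB:.4f}")

# P(relays 1 and 4 work | current flows in B) = P(both work and flow) / P(flow)
joint = prob(lambda s: s[0] and s[3] and flows_B(s))
print(f"P(1 and 4 work | flow in B) = {joint / pB:.4f}")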
Statistics and Probability | {"url":"http://www.chegg.com/homework-help/questions-and-answers/relays-used-construction-electric-circuits-function-properly-probability-9-diagram-http-ma-q953461","timestamp":"2014-04-20T07:56:17Z","content_type":null,"content_length":"21204","record_id":"<urn:uuid:dc664768-6647-48cd-81ab-ecf32d8c4323>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quartiles, Deciles and Percentiles (Grouped Data)
Posted by mbalectures | Posted in Descriptive statistics | 47,363 views | Posted on 25-06-2010 |
When data is arranged in ascending or descending order, it can be divided into various parts by different values such as quartiles, deciles and percentiles. These values are collectively called quantiles and are extensions of the median formula, which divides data into two equal parts. Since the basic purpose of these partition values is to divide data into different parts, a relationship exists between them. This relationship is given below and is elaborated with the help of a simple problem.
Problem: In a work study investigation, the times taken by 20 men in a firm to do a particular job were tabulated as follows: [frequency table missing from this copy]
Prove that: Q2 = D5 = P50 = Median
Second Quartile Q2
In case of frequency distribution, quartiles can be calculated by using the formula:
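$Q_k = l + \frac{h}{f}\left(\frac{kN}{4} - C\right), \quad k = 1, 2, 3$ (the standard grouped-data form), where $l$ is the lower boundary of the class containing $Q_k$, $h$ the class width, $f$ the frequency of that class, $N$ the total frequency, and $C$ the cumulative frequency of the classes below it.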
Fifth Decile D5
In case of frequency distribution, deciles can be calculated by using the formula:
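$D_k = l + \frac{h}{f}\left(\frac{kN}{10} - C\right), \quad k = 1, 2, \ldots, 9$, with the symbols as above, the class now being the one containing $D_k$.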
50th Percentile P50
In case of frequency distribution, percentiles can be calculated by using the formula:
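$P_k = l + \frac{h}{f}\left(\frac{kN}{100} - C\right), \quad k = 1, 2, \ldots, 99$.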
Median of Frequency Distribution
In case of frequency distribution, the median can be calculated with the help of the following formula.
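$\text{Median} = l + \frac{h}{f}\left(\frac{N}{2} - C\right)$. Since $\frac{2N}{4} = \frac{5N}{10} = \frac{50N}{100} = \frac{N}{2}$, all four formulas select the same class and return the same value.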
Hence it is proved that Q2 = D5 = P50 = Median
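The agreement can also be checked mechanically. A small Python sketch (the class boundaries and frequencies below are illustrative placeholders, since the original table did not survive):

def grouped_quantile(classes, frac):
    """classes: list of (lower_boundary, width, frequency); frac: e.g. 0.5 for the median."""
    N = sum(f for _, _, f in classes)
    target, cum = frac * N, 0
    for l, h, f in classes:
        if cum + f >= target:          # this is the class containing the quantile
            return l + (h / f) * (target - cum)
        cum += f

classes = [(0, 5, 3), (5, 5, 7), (10, 5, 6), (15, 5, 4)]  # hypothetical data
for name, frac in [("Q2", 2/4), ("D5", 5/10), ("P50", 50/100)]:
    print(name, "=", grouped_quantile(classes, frac))      # all three agree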
In order to see the relationship between Quartiles, Deciles and Percentiles in case of Ungrouped data click here. | {"url":"http://mba-lectures.com/statistics/descriptive-statistics/603/relationship-between-quartiles-deciles-and-percentiles-grouped-data.html","timestamp":"2014-04-17T06:40:53Z","content_type":null,"content_length":"42465","record_id":"<urn:uuid:e1254181-77f3-4d1e-a86b-2277b290e122>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00172-ip-10-147-4-33.ec2.internal.warc.gz"} |
The topic cycle is discussed in the following articles:
combinatorial analysis
• TITLE: combinatorics (mathematics)SECTION:
...xn, the edges being evident by context. The chain is closed if x0 = xn and open otherwise. If the chain is closed, it is called a cycle, provided its vertices (other than x0 and xn) are
distinct and n ≥ 3. The length of a chain is the number of edges in it. | {"url":"http://www.britannica.com/print/topic/147892","timestamp":"2014-04-19T10:14:04Z","content_type":null,"content_length":"6454","record_id":"<urn:uuid:cad9668d-149c-4d4c-be8a-20d6e76f1ecf>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
'Ramping it Up' (from http://nrich.maths.org/)
At a time T, a car mounts a curb of height $H$, and its front suspension undergoes a sudden change. At a time 2T, the car drops down the other side of the curb to $\frac{1}{4}H$. It is suggested that the graph of this process is given as a function $f(t)$ [the graph itself is not reproduced here].
Try to work out the graph of $f $ '$(t)$ against $t$. What parts make sense and what parts don't? Could you alter the graph of $f(t)$ slightly to make the derivative function make more sense?
Next imagine that a car drives at a steady $20$ km h$^{-1}$ over a concrete block of height $10$cm and rectangular cross section of width $50$ cm. Make a mathematical model of the height of the front
suspension over time. Be clear as to your assumptions and make a clear note of any possible refinements to your model.
What does your model say about the derivative of the height over time?
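One way to start the model is a piecewise-constant height profile sampled in time; a Python sketch (every parameter is an assumption: v = 20 km/h, block height 0.10 m, block width 0.50 m):

import numpy as np

v = 20 / 3.6          # speed in m/s (assumed constant)
H, W = 0.10, 0.50     # block height and width in m (assumed)
t_on, t_off = 0.0, W / v    # wheel mounts the block, then drops off

t = np.linspace(-0.05, t_off + 0.05, 2001)
h = np.where((t >= t_on) & (t < t_off), H, 0.0)  # idealised height profile

dh = np.gradient(h, t)   # numerical derivative: spikes at the two steps
print("max |dh/dt| ≈", abs(dh).max(), "m/s")
# The spikes grow without bound as the time step shrinks — a first hint that
# an ideal step is unphysical and the model needs smoothed corners.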
What is the maximum G-force the suspension of a car is likely to be subjected to in normal use?
On a physical note, think about the implications that this sort of modelling process might have on the design of shock-absorbers for cars. | {"url":"http://nrich.maths.org/6648/index?nomenu=1","timestamp":"2014-04-20T15:57:20Z","content_type":null,"content_length":"4637","record_id":"<urn:uuid:9311f2cd-2035-4ec3-a467-b829e53416fd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
The International Astronomical Union drops the mic.
How far away from Earth is the sun? Not just, you know, very, very far, but in terms of an actual, measurable distance? When you're calculating, how do you decide which location on Earth to measure
from? How do you decide which spot on the path of Earth's orbit will serve as the focal point for the measurement? How do you account for the sheer size of the sun, for the lengthy reach of its fumes
and flames?
The measurable, mean distance -- also known as the astronomical unit -- has been a subject of debate among astronomers since the 17th century. The first precise measurement of the Earth/sun divide,
Nature notes, was made by the astronomer and engineer Giovanni Cassini in 1672. Cassini, from Paris, compared his measurements of Mars against observations recorded by his colleague Jean Richer,
working from French Guiana. Combining their calculations, the astronomers were able to determine a third measurement: the distance between the Earth and the sun. The pair estimated a stretch of 87
million miles -- which is actually pretty close to the value astronomers assume today.
But their measurement wasn't, actually, a number. It was a parallax measurement, a combination of constants used to transform angular measurements into distance. Until the second half of the
twentieth century -- until innovations like spacecraft, radar, and lasers gave us the tools to catch up with our ambition -- that approach to measuring the cosmos was the best we had. Until quite
recently, if you were to ask an astronomer, "What's the distance between Earth and the sun?" that astronomer would be compelled to reply: "Oh, it's the radius of an unperturbed circular orbit a
massless body would revolve about the sun in 2*(pi)/k days (i.e., 365.2568983.... days), where k is defined as the Gaussian constant exactly equal to 0.01720209895."
Oh, right. Of course.
But rocket science just got a little more straightforward. With little fanfare, Nature reports, the International Astronomical Union has redefined the astronomical unit, once and for all -- or, at
least, once and for now. According to the Union's unanimous vote, here is Earth's official, scientific, and fixed distance from the sun: 149,597,870,700 meters. Approximately 93,000,000 miles.
For astronomers, the change from complexity to fixity will mean a new convenience when they're calculating distances (not to mention explaining those distances to students and non-rocket scientists).
It will mean the ability to ditch ad hoc numbers in favor of more uniform calculations. It will mean a measurement that more properly accounts for the general theory of relativity. (A meter in this
case is defined as "the distance traveled by light in a vacuum in 1/299,792,458 of a second" -- and since the speed of light is constant, the astronomical unit will no longer depend on an observer's
location within the solar system.) The new unit will also more accurately account for the state of the sun, which is slowly losing mass as it radiates energy. (The Gaussian constant is based on solar mass.)
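One pleasant side effect of pinning the unit to the metre is that sunlight's travel time to Earth becomes a one-line computation:

AU = 149_597_870_700   # metres, the fixed IAU value
c = 299_792_458        # metres per second, exact by definition
print(AU / c / 60)     # ≈ 8.3 light-minutes from sun to Earth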
So why did it take so long for the astronomy community to agree on a standard measurement? For, among other things, the same reason this story mentions both meters and miles. Tradition can be its own
powerful force, and the widespread use of the old unit -- which has been in place since 1976 -- means that a new one will require changes both minor and sweeping. Calculations are based on the old
unit. Computer programs are based on the old unit. Straightforwardness is not without its inconveniences.
But it's also not without its benefits. The astronomical unit serves as a basis for many of the other measures astronomers make as they attempt to understand the universe. The moon, for example, is
0.0026 ± 0.0001 AU from Earth. Venus is 0.72 ± 0.01 AU from the sun. Mars is 1.52 ± 0.04 AU from our host star. Descriptions like that -- particularly for amateurs who want to understand our world as
astronomers do -- just got a little more comprehensible. And thus a little more meaningful.
| {"url":"http://www.theatlantic.com/technology/print/2012/09/after-hundreds-of-years-astronomers-finally-agree-this-is-the-distance-from-the-earth-to-the-sun/262493/","timestamp":"2014-04-18T16:53:29Z","content_type":null,"content_length":"18864","record_id":"<urn:uuid:f551a54d-98ed-4313-ae91-ff35aabf8773>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Woburn Tutor
...I am also well-qualified to tutor all areas of math (algebra, geometry, trigonometry, pre-calculus and calculus). Math was always my strongest subject in school, and it plays a daily role in
my career as a research physicist. My basic tutoring approach is to customize the sessions to the abilit...
7 Subjects: including calculus, physics, geometry, algebra 1
...Chemistry is something I understand and can help to explain so that you will understand it too. I am currently working at Merrimack College for the Chemistry Department as an adjunct lecturer.
I was a chemistry student in university for 14 years and then worked in industry for 10+ years on orga...
1 Subject: chemistry
Bonjour! I love seeing people of all ages improve, gain confidence in their language ability, and enjoy language learning! I have lived 5 years in France, taught French/ESL/History, hold a
Master's degree from the Sorbonne and from Middlebury College (in French). Contact me if you would like to help yourself or your child learn French!
3 Subjects: including French, ESL/ESOL, European history
...Math is also used to model and evaluate possible mechanisms of the reaction pathway. Truly, math is the queen of the sciences as well as a necessity in everyday life. I would like to tutor
students at any age and in subjects ranging from pure mathematics to practical applications in chemistry, physics and biology.
12 Subjects: including chemistry, geometry, algebra 1, algebra 2
My name is Azzedine. I have a strong command of French, Arabic, and Berber (in addition to written and spoken English). These assets make me an excellent tutor, with a personal approach and an effective manner of teaching. I was born in Algeria and throughout my life have traveled back and forth between France, Algeria, and America. My mother inspired me to teach.
1 Subject: French
Nearby Cities With Tutors
Arlington, MA Tutors
Belmont, MA Tutors
Billerica Tutors
Brighton, MA Tutors
Burlington, MA Tutors
Chelsea, MA Tutors
Everett, MA Tutors
Lexington, MA Tutors
Malden, MA Tutors
Medford, MA Tutors
Melrose, MA Tutors
Reading, MA Tutors
Stoneham, MA Tutors
Wakefield, MA Tutors
Winchester, MA Tutors | {"url":"http://www.purplemath.com/woburn_ma_tutors.php","timestamp":"2014-04-18T08:51:20Z","content_type":null,"content_length":"22992","record_id":"<urn:uuid:8b28a894-b1a0-4f03-9b53-5b3771743343>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
Libertyville, IL Algebra Tutor
Find a Libertyville, IL Algebra Tutor
...While there I helped many students achieve success in their courses. I can help with common areas of trouble from simply understanding and interpreting the information in class homework to
addressing specific areas of difficulty such as solving equations, factoring, graphing functions, working w...
34 Subjects: including algebra 1, algebra 2, English, reading
I have a Masters degree in Chemistry. I can help students with Chemistry and Math (pre-calculus, differential equations). I have worked as a volunteer tutor and have helped people working towards
their GEDs. I have worked as a graduate assistant when I was working towards my Masters degree in Chem...
13 Subjects: including algebra 1, algebra 2, English, chemistry
...I have experience working with students with disabilities, test taking strategies, study skills, reading and math. I have a Master's degree in Reading and Literacy and an Educational
Specialist degree in School Leadership. I have taken both the Praxis and ILT assessments so I can provide assistance in preparing for these tests.
28 Subjects: including algebra 1, algebra 2, reading, grammar
...I am all-but-dissertation in my PhD in European languages and civilization at Brown University. I have a BA in European languages and literature from the University of Chicago. I have taught Classical Greek for three years at Brown University and one year at Phillips Academy, Andover. I have continued to tutor Greek in later years.
37 Subjects: including algebra 1, English, algebra 2, reading
Math and science have been my great interest since childhood. Everything around us has some explanation involving these subjects! I would like to share my enthusiasm with others.
2 Subjects: including algebra 1, geometry
Related Libertyville, IL Tutors
Libertyville, IL Accounting Tutors
Libertyville, IL ACT Tutors
Libertyville, IL Algebra Tutors
Libertyville, IL Algebra 2 Tutors
Libertyville, IL Calculus Tutors
Libertyville, IL Geometry Tutors
Libertyville, IL Math Tutors
Libertyville, IL Prealgebra Tutors
Libertyville, IL Precalculus Tutors
Libertyville, IL SAT Tutors
Libertyville, IL SAT Math Tutors
Libertyville, IL Science Tutors
Libertyville, IL Statistics Tutors
Libertyville, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/libertyville_il_algebra_tutors.php","timestamp":"2014-04-16T16:10:31Z","content_type":null,"content_length":"24113","record_id":"<urn:uuid:a120d4d2-f7b1-493b-adb2-62d93563424e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numerical computations with the trace formula and the Selberg eigenvalue conjecture
Seminar Room 1, Newton Institute
I will report on some numerical computations with the trace formula on congruence subgroups of the modular group. In particular I will discuss how to check numerically the validity of the Selberg
eigenvalue conjecture for specific congruence subgroups.
This is joint work with Andrew Booker. | {"url":"http://www.newton.ac.uk/programmes/RMA/seminars/2004070211501.html","timestamp":"2014-04-19T06:59:25Z","content_type":null,"content_length":"3840","record_id":"<urn:uuid:13d37a70-5701-4be8-beb3-d82b4f6e9edd>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many well orderings of $\aleph_0$ are there?
What is known about the set of well orderings of $\aleph_0$ in set theory without choice? I do not mean the set of countable well-order types, but the set of all subsets of $\aleph_0$ which (relative
to a pairing function) code well orderings. And I would be interested in an answer in, say, ZF without choice. My actual concern is higher order arithmetic.
I would not be surprised if ZF proves there are continuum many. But I don't know.
At the opposite extreme, is it provable in ZF that there are not more well orderings of $\aleph_0$ than there are countable well-order types?
set-theory proof-theory
I actually answered this question on math.SE before. math.stackexchange.com/questions/165010/… – Asaf Karagila Nov 17 '12 at 7:32
The set of infinite binary sequences has size continuum. Each such sequence determines uniquely a well ordering of N: let A and B be the inverse images of {0} and {1} respectively; order the set A∪B with the usual order within A and within B, and m < n whenever m is in A and n is in B. So there are continuum many well-orderings of N, whose order types are ω (if A is finite), the various ω+n (if B is finite), and ω+ω (otherwise). How can one assign a different order type to each sequence with A and B both infinite? Freddy William Bustos, 20 January 2013 – user30793 Jan 20 '13 at 7:06
Since there are $2^{\aleph_0}$ pairs $(A,B)$ of the sort you describe, since there are $\aleph_1$ order-types of countable well-orderings, and since it's consistent with ZFC that $2^{\aleph_0}>\
aleph_1$, there is no way to assign a different order type of well-orderings of $\mathbb N$ to each pair $(A,B)$ without assuming some additional axioms (at least the continuum hypothesis). –
Andreas Blass Jan 20 '13 at 18:59
4 Answers
Colin, there are continuum many, as you suspect.
In fact, there are continuum many well-orderings of type $\omega$. The set of infinite binary sequences has size continuum. Given such a sequence $x=(x_0,x_1,\dots)$, let $i\in\{0,1\}$ be least such that $x_n=i$ infinitely often. Consider the enumeration of the naturals $a=(a_0,a_1,\dots)$ that begins with $a_0=i$. Having defined $a_n$, let $a_{n+1}$ be the first natural number not used so far, if $x_n=i$, and let $a_{n+1}$ be the second number not used so far, otherwise.
Since there are infinitely many $k$ such that $x_k=i$, the $a_n$ enumerate all naturals. Since from the sequence we can easily recover $x$, this assignment $x\mapsto a$ is injective. The ordering $a_0\lt a_1\lt a_2\lt\dots$ is a well-ordering of the naturals in type $\omega$.
It follows immediately that, for any countable infinite $\alpha$, there are continuum many well-orderings of the naturals in type $\alpha$. This is because one can simply fix a bijection between $\alpha$ and $\omega$, and use it to "transfer" the procedure just described.
There are two answers already, but I think this argument is simpler than both of the previous ones.
Any two distinct permutations of $\omega$ give two different wellorderings of order type $\omega$. We show that there are $2^{\aleph_0}$ permutations of $\omega$. Given a function $f:\omega\to 2$, let $\sigma_f$ be the permutation that for all $n\in\omega$ exchanges $2n$ and $2n+1$ iff $f(n)=1$. It is clear that $f\mapsto\sigma_f$ is 1-1. Hence there are at least $2^{\aleph_0}$ wellorderings of order type $\omega$ on $\omega$.
As Andres pointed out, this transfers to every countable order type $\alpha$.
This is an aside that I mentioned elsewhere long ago but deserves mention here since it homes in on the counterintuition that probably led Colin to doubt the answer.
As Colin pointed out, every $R \subset \omega$ can be interpreted as a binary relation on $\omega$ through a pairing function. This leads to a partition $\mathcal{B}$ of $\mathcal{P}(\omega)$ into isomorphism classes of binary relational structures $(\omega,R)$. Every countable infinite ordinal $\alpha$ has its own isomorphism class $B_\alpha \in \mathcal{B}$ and therefore $\aleph_1 \preceq \mathcal{B}$. We can also see that $2^{\aleph_0} \preceq \mathcal{B}$ in a multitude of ways. For example, we can map each $X \subseteq \omega$ to the isomorphism class of the directed graph consisting of one directed cycle of length $n+1$ for each $n \in X$ and infinitely many isolated points to fill space. In fact, we see that $\aleph_1 + 2^{\aleph_0} \preceq \mathcal{B}$ since the ranges of these two maps are disjoint. This is all provable without the axiom of choice.
There are models of ZF in which $2^{\aleph_0}$ and $\aleph_1$ are incomparable cardinals. Solovay's model where all sets of reals are Lebesgue measurable is such an example. In such models, $\mathcal{B}$ must have cardinality strictly greater than $2^{\aleph_0}$... Yes, that's right: $\mathcal{B}$ is a partition of $\mathcal{P}(\omega)$ that has more pieces than there are elements in $\mathcal{P}(\omega)$!
Another example (in the same model) of more equivalence classes than elements is "Vitali's revenge". Vitali produced a non-measurable set as a choice set for the partition of $\mathbb R$ into cosets modulo $\mathbb Q$. In Solovay's model (or any model where all sets of reals are Lebesgue measurable), Vitali's construction is blocked, there is no choice set, but the partition comes back with the even more counterintuitive property of having strictly more pieces in this partition of $\mathbb R$ than there are elements in $\mathbb R$. – Andreas Blass Nov 17 '12 at 14:54
@Andreas: That's a great one too! The argument that I know (due to Sierpinski) directly produces a non-measurable set from an injection $\mathbb{R}/\mathbb{Q}\to\mathbb{R}$. Do you know
whether $\aleph_1 \preceq \mathbb{R}/\mathbb{Q}$ is provable in ZF? – François G. Dorais♦ Nov 17 '12 at 17:42
I think saying "has more pieces" is set theoretical fasttalk rather than revealing the source of the counter to intuition. It would be (to me) pedagogically more satisfying to talk about
reasons for failure of things like the pigeonhole principle, or to say that there is no map in the model that witnesses an injection from the underlying set into "the pieces" of that set.
Gerhard "Ask Me About Talking Fast" Paseman, 2012.11.17 – Gerhard Paseman Nov 17 '12 at 17:46
@Francois: I don't think $\aleph_1\preceq\mathbb R/\mathbb Q$ in the Solovay model (or in models of AD). If you had $\aleph_1$ cosets of $\mathbb Q$, their union would be uncountable, so it would include a perfect set, so the continuum would be the union of $\aleph_1$ countable sets $C_\alpha$ ($\alpha<\omega_1$). But then you could get a nonmeasurable set by a Fubini argument; the binary relation "$x$ is in an earlier $C_\alpha$ than $y$" would be a nonmeasurable subset of the plane. – Andreas Blass Nov 17 '12 at 18:03
Francois, $\mathbb R/\mathbb Q$ is in Solovay's model a successor of $\mathbb R$, so it cannot embed $\aleph_1$, because then it (rather, its cardinality) would have to equal $\mathbb R+\aleph_1$, which is easy to show is different from it. (Andreas's argument, of course, is a more direct argument.) – Andres Caicedo Nov 17 '12 at 18:33
Consider the tree of finite partial attempts to build a well-ordering, and notice that it has size continuum.
More rigorously, let:
$$T = \{ f : n \to \omega\ |\ n \in \omega, f \mbox{ injective } \}$$
ordered by extension. This is clearly an $\omega$-branching tree of height $\omega$, and its branches are precisely the injections $\omega \to \omega$. But we're interested in the set of well-orderings of $\omega$. Now, those injections which are bijections give us distinct well-orderings, but perhaps there are too few of them. What about the branches that aren't surjections? We can create distinct well-orderings out of them too: if a branch $b$ is not surjective and $X$ is the set of naturals missed by its range, consider the well-ordering obtained by taking $b$, then concatenating on to its end the numbers in $X$, ordered naturally.
So the branches of our tree are in bijection with a set of well-orderings of $\omega$, and there are continuum many branches, so there are continuum many well-orderings. Note that the set of well-orderings we get is not even the set of all well-orderings. In particular every well-ordering we get has order type $\leq \omega + \omega$.
| {"url":"http://mathoverflow.net/questions/112651/how-many-well-orderings-of-aleph-0-are-there/119387","timestamp":"2014-04-17T07:48:42Z","content_type":null,"content_length":"76160","record_id":"<urn:uuid:e5f3499c-d926-485b-aa00-95644f2e7461>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundations of Complexity
To properly classify problems we will sometimes need to define models of computation that do not exist in nature. Such is the case with nondeterminism, the topic of this lesson.
A nondeterministic Turing machine can make guesses and then verify whether that guess is correct. Let us consider the map coloring problem. Suppose you are presented with a map of a fictitious world
and you want to give each country a color so no two bordering countries have the same color. How many colors do you need?
The famous Four Color Theorem states that four colors always suffice. Can one do it in three?
Let L be the set of maps that are 3-colorable. A nondeterministic Turing machine can "guess" the coloring and then verify quickly for every bordering pair of countries that they have different colors.
We let NP be the class of problems that use nondeterministic polynomial time. We can give an equivalent definition of NP using quantifiers: L is in NP if there is a polynomial p and a deterministic
Turing machine M such that for all x, x is in L if and only if there is a w, |w| ≤ p(|x|) and M(x,w) accepts in time at most p(|x|).
The string w is called a witness. In the case of map coloring, w is the coloring of the countries.
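One convenient encoding of the verifier M for map coloring, in Python (the border-list representation and the three color labels are just illustrative choices):

def verify_3_coloring(borders, coloring):
    """M(x, w): accept iff the witness coloring is a proper 3-coloring.

    borders: list of (country, country) pairs that share a border
    coloring: dict mapping each country to one of three colors
    """
    if any(c not in (0, 1, 2) for c in coloring.values()):
        return False
    # one quick check per border: this runs in polynomial time
    return all(coloring[a] != coloring[b] for a, b in borders)

# A four-country example: a triangle plus one country touching two of them.
borders = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")]
print(verify_3_coloring(borders, {"A": 0, "B": 1, "C": 2, "D": 0}))  # True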
Can we solve map coloring in deterministic polynomial-time? Is every NP problem computable in deterministic polynomial-time? Therein lies the most important question of all, which we will discuss in
the next lesson. | {"url":"http://blog.computationalcomplexity.org/2002/12/foundations-of-complexitylesson-9.html","timestamp":"2014-04-19T22:55:07Z","content_type":null,"content_length":"145056","record_id":"<urn:uuid:14a54794-7301-4b2e-a490-7267e7c7ca82>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there more than one Crank-Nicolson scheme?
Hi everybody...
I want to solve the diffusion equation in 1D using the Crank-Nicolson scheme. I have two books about numerical methods, and the problem is that in "Numerical Analysis" by Burden and Faires, the difference equation for the diffusion equation (writing [itex]\alpha[/itex] for the diffusion coefficient) is:
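[itex]\frac{w_{i,j+1}-w_{i,j}}{k} = \frac{\alpha}{2h^2}\Big[(w_{i+1,j+1}-2w_{i,j+1}+w_{i-1,j+1}) + (w_{i+1,j}-2w_{i,j}+w_{i-1,j})\Big][/itex]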
On the other hand, in "Numerical and analytical methods for scientists and engineers using Mathematica", the same equation is expressed as:
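[itex]\frac{w_{i,j}-w_{i,j-1}}{k} = \frac{\alpha}{2h^2}\Big[(w_{i+1,j}-2w_{i,j}+w_{i-1,j}) + (w_{i+1,j-1}-2w_{i,j-1}+w_{i-1,j-1})\Big][/itex]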
[itex]i[/itex] represents the space steps, [itex]j[/itex] the time steps, [itex]k[/itex] is [itex]\Delta t [/itex], [itex]h[/itex] is [itex]\Delta x[/itex]
Should these schemes yield the same results? Why the differences?
I mean, in the first term of the first scheme, the numerator is [itex]w_{i,j+1}-w_{i,j}[/itex], but in the second scheme is [itex]w_{i,j}-w_{i,j-1}[/itex].
In addition to this, the last 3 terms of the equations (inside the brackets) are [itex]w_{i+1,j+1}-2w_{i,j+1}+w_{i-1,j+1}[/itex] and [itex]w_{i+1,j-1}-2w_{i,j-1}+w_{i-1,j-1}[/itex].
Are both schemes named Crank-Nicolson?
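A quick numerical experiment helps: writing the second stencil at time level [itex]j+1[/itex] instead of [itex]j[/itex] reproduces the first, so both generate the same iterates. A Python sketch (assuming [itex]\alpha = 1[/itex], zero Dirichlet boundaries, and a sine initial profile):

import numpy as np

alpha, h, k, nx, nt = 1.0, 0.1, 0.01, 11, 50
r = alpha * k / (2 * h**2)

# Crank-Nicolson: solve (I - r*L) w_new = (I + r*L) w_old,
# with L the 1D Laplacian stencil (boundary rows left at zero).
L = np.zeros((nx, nx))
for i in range(1, nx - 1):
    L[i, i - 1], L[i, i], L[i, i + 1] = 1, -2, 1
A, B = np.eye(nx) - r * L, np.eye(nx) + r * L

w = np.sin(np.pi * np.linspace(0, 1, nx))  # initial condition
for _ in range(nt):
    w = np.linalg.solve(A, B @ w)
print(w.round(4))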
Can somebody help me with this?? Thanks!! | {"url":"http://www.physicsforums.com/showthread.php?p=3440988","timestamp":"2014-04-19T19:50:30Z","content_type":null,"content_length":"24091","record_id":"<urn:uuid:c3ad00ee-5631-40bb-ae03-d30e65d90bfa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Typed Tagless Interpretations
We demonstrate a tagless final interpreter for call-by-name, call-by-value and call-by-need simply-typed lambda-calculus with integers and constants. We write a lambda-term once, which GHC(i)
immediately type checks. Once the term is accepted, it can be evaluated several times using different interpreters with different evaluation orders. All the interpreters are written in the
tagless final framework and so are efficient and assuredly type-preserving. We obtain a higher-order embedded domain-specific language with the selectable evaluation order.
The interpreters implementing different evaluation orders are very much alike. In fact, they are written so to share most of the code save for the interpretation of lam . The semantics of
abstraction is indeed what sets the three evaluation orders apart.
We define the DSL as a type class EDSL . The type class declaration defines the syntax and its instances define the semantics of the language. Our DSL is typed; the DSL types are built using the
constant IntT and the binary infix type constructor :-> .
data IntT
data a :-> b
infixr 5 :->
We could have used Haskell's Int and arrow types as DSL types. For clarity, we chose to distinguish the object language types from the meta-language (Haskell) types.
class EDSL exp where
lam :: (exp a -> exp b) -> exp (a :-> b)
app :: exp (a :-> b) -> exp a -> exp b
int :: Int -> exp IntT -- Integer literal
add :: exp IntT -> exp IntT -> exp IntT
sub :: exp IntT -> exp IntT -> exp IntT
After introducing a convenient `macro' let_ (which could have been called `bind') we write a sample object language term as follows:
let_ :: EDSL exp => exp a -> (exp a -> exp b) -> exp b
let_ x y = (lam y) `app` x
t2 :: EDSL exp => exp IntT
t2 = (lam $ \z -> lam $ \x -> let_ (x `add` x)
$ \y -> y `add` y)
`app` (int 100 `sub` int 10)
`app` (int 5 `add` int 5)
We embed the DSL into Haskell. We define the interpretation of DSL types into Haskell types as the following type function.
type family Sem (m :: * -> *) a :: *
type instance Sem m IntT = Int
type instance Sem m (a :-> b) = m (Sem m a) -> m (Sem m b)
The interpretation is parameterized by the type m , which must be a Monad. The use of type families is not essential, merely convenient. In fact, we can easily re-write the whole code in
Haskell98, see below. We interpret EDSL expressions of the type a as Haskell values of the type S l m a:
newtype S l m a = S { unS :: m (Sem m a) }
where l is the label for the evaluation order, one of Name , Value , or Lazy .
Here is the call-by-name interpreter (with sub elided). One of the reasons to parameterize the interpreter over MonadIO is to print out the evaluation trace, so that we can see the difference
among the three evaluation strategies in the number of performed additions and subtractions.
data Name
instance MonadIO m => EDSL (S Name m) where
int = S . return
add x y = S $ do a <- unS x
b <- unS y
liftIO $ putStrLn "Adding"
return (a + b)
lam f = S . return $ (unS . f . S)
app x y = S $ unS x >>= ($ (unS y))
We evaluate the sample term under call-by-name
runName :: S Name m a -> m (Sem m a)
runName x = unS x
t2SN = runName t2 >>= print
obtaining the result 40 and observing from the trace that subtraction was not performed (because the value of int 100 `sub` int 10 was not needed to compute the result of t2). On the other hand,
the sub-expression int 5 `add` int 5 was evaluated four times.
The call-by-value evaluator differs from the call-by-name one only in the interpretation of the abstraction:
lam f = S . return $ (\x -> x >>= unS . f . S . return)
The evaluation of the lambda abstraction body always starts by evaluating the argument, whether the result will be needed or not. That is literally the definition of call-by-value. The very same
sample term can be interpreted differently:
runValue :: S Value m a -> m (Sem m a)
runValue x = unS x
t2SV = runValue t2 >>= print
giving in the end the same result 40. Although the result of the subtraction was not needed, the trace shows it performed. On the other hand, the argument sub-expression int 5 `add` int 5 was
evaluated only once. In call-by-value, arguments of evaluated applications are evaluated exactly once.
The call-by-need evaluator differs from the others again in one line, the interpretation of abstractions:
lam f = S . return $ (\x -> share x >>= unS . f . S)
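The helper share memoizes its argument so the computation runs at most once. One standard way to write it (a sketch using an IORef; it assumes Data.IORef and Control.Monad.IO.Class are imported, and the details may be arranged differently elsewhere):

share :: MonadIO m => m a -> m (m a)
share m = do
  r <- liftIO $ newIORef (False, m)
  return $ do
    (evaluated, m') <- liftIO $ readIORef r
    if evaluated
      then m'                         -- already forced: return the cached value
      else do
        v <- m'                       -- force the original computation once
        liftIO $ writeIORef r (True, return v)
        return v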
The evaluation of the body of the abstraction always starts by lazy sharing the argument expression. Again, this is the definition of call-by-need. We run the very same term t2 with the new
evaluator, obtaining the same result 40 and observing from the execution trace that subtraction was not evaluated (because it was not needed) but the needed argument expression int 5 `add` int 5
was evaluated once. In call by need, arguments of evaluated applications are evaluated at most once. | {"url":"http://okmij.org/ftp/tagless-final/","timestamp":"2014-04-21T12:22:02Z","content_type":null,"content_length":"37380","record_id":"<urn:uuid:890c0e92-db74-4d8f-b244-0256aa83d7eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Une procédure d'apprentissage pour réseau à seuil asymétrique
Results 1 - 10 of 43
- Machine Learning , 1995
"... The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a
very high-dimension feature space. In this feature space a linear decision surface is constructed. Special pr ..."
Cited by 2155 (32 self)
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very
high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine.
The idea behind the supportvector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable
training data.
- Machine Learning , 1992
"... Abstract. This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE
algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinfor ..."
Cited by 321 (0 self)
Abstract. This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms,
are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement
tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented,
some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can
be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting
behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.
, 1988
"... Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm. However, back-propagation learning is too slow for many applications, and it scales up
poorly as tasks become larger and more complex. The factors governing learning speed are poorly understood. ..."
Cited by 224 (0 self)
Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm. However, back-propagation learning is too slow for many applications, and it scales up poorly
as tasks become larger and more complex. The factors governing learning speed are poorly understood. I have begun a systematic, empirical study of learning speed in backprop-like algorithms, measured
against a variety of benchmark problems. The goal is twofold: to develop faster learning algorithms and to contribute to the development of a methodology that will be of value in future studies of
this kind. This paper is a progress report describing the results obtained during the first six months of this study. To date I have looked only at a limited set of benchmark problems, but the
results on these are encouraging: I have developed a new learning algorithm that is faster than standard backprop by an order of magnitude or more and that appears to scale up very well as the
problem size increases.
- COGNITIVE SCIENCE , 1990
"... A novel modular connectionist architecture is presented in which the networks composing the architecture compete to learn the training patterns. As a result of the competition, different
networks learn different training patterns and, thus, learn to compute different functions. The architecture pe ..."
Cited by 181 (5 self)
A novel modular connectionist architecture is presented in which the networks composing the architecture compete to learn the training patterns. As a result of the competition, different networks
learn different training patterns and, thus, learn to compute different functions. The architecture performs task decomposition in the sense that it learns to partition a task into two or more
functionally independent
that task, and tends to allocate the same network to similar tasks and distinct networks to dissimilar tasks. Furthermore, it can be easily modified so as to...
- IEEE Transactions on Neural Networks , 1995
"... Abstract | We survey learning algorithms for recurrent neural networks with hidden units, and put the various techniques into a common framework. We discuss xedpoint learning algorithms, namely
recurrent backpropagation and deterministic Boltzmann Machines, and non- xedpoint algorithms, namely backp ..."
Cited by 135 (3 self)
Add to MetaCart
Abstract | We survey learning algorithms for recurrent neural networks with hidden units, and put the various techniques into a common framework. We discuss xedpoint learning algorithms, namely
recurrent backpropagation and deterministic Boltzmann Machines, and non- xedpoint algorithms, namely backpropagation through time, Elman's history cuto, and Jordan's output feedback architecture.
Forward propagation, an online technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the uni ed presentation leads to generalizations of various sorts. We
discuss advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, continue with some \tricks of the trade" for training, using, and simulating continuous time
and recurrent neural networks. We present somesimulations, and at the end, address issues of computational complexity and learning speed.
, 1992
"... Learning control involves modifying a controller's behavior to improve its performance as measured by some predefined index of performance (IP). If control actions that improve performance as
measured by the IP are known, supervised learning methods, or methods for learning from examples, can be us ..."
Cited by 51 (2 self)
Add to MetaCart
Learning control involves modifying a controller's behavior to improve its performance as measured by some predefined index of performance (IP). If control actions that improve performance as
measured by the IP are known, supervised learning methods, or methods for learning from examples, can be used to train the controller. But when such control actions are not known a priori,
appropriate control behavior has to be inferred from observations of the IP. One can distinguish between two classes of methods for training controllers under such circumstances. Indirect methods
involve constructing a model of the problem's IP and using the model to obtain training information for the controller. On the other hand, direct, or model-free,...
- Neural Networks , 1997
"... Many neural net learning algorithms aim at finding "simple" nets to explain training data. The expectation is: the "simpler" the networks, the better the generalization on test data (! Occam's
razor). Previous implementations, however, use measures for "simplicity" that lack the power, universali ..."
Cited by 50 (31 self)
Add to MetaCart
Many neural net learning algorithms aim at finding "simple" nets to explain training data. The expectation is: the "simpler" the networks, the better the generalization on test data (! Occam's
razor). Previous implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic
probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews
some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior
leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of
Kolmogorov comple...
- Brain Development and Cognition: A Reader , 1993
"... Developmental psychology and developmental neuropsychology have traditionally focused on the study of children. But these two fields are also supposed to be about the study of change, i.e.
changes in behavior, changes in the neural structures that underlie behavior, and changes in the relationship b ..."
Cited by 32 (0 self)
Add to MetaCart
Developmental psychology and developmental neuropsychology have traditionally focused on the study of children. But these two fields are also supposed to be about the study of change, i.e. changes in
behavior, changes in the neural structures that underlie behavior, and changes in the relationship between mind and brain across the course of development. Ironically, there has been relatively
little interest in the mechanisms responsible for change in the last 15–20 years of developmental research. The reasons for this de-emphasis on change have a great deal to do with a metaphor for mind
and brain that has influenced most of experimental psychology, cognitive science and neuropsychology for the last few decades, i.e. the metaphor of the serial digital computer. We will refer to this
"... For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications).
Recently, deep architectures trained in an unsupervised manner have been proposed as an automatic method for ex ..."
Cited by 28 (7 self)
Add to MetaCart
For many pattern recognition tasks, the ideal input feature would be invariant to multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently,
deep architectures trained in an unsupervised manner have been proposed as an automatic method for extracting useful features. However, it is difficult to evaluate the learned features by any means
other than using them in a classifier. In this paper, we propose a number of empirical tests that directly measure the degree to which these learned features are invariant to different input
transformations. We find that stacked autoencoders learn modestly increasingly invariant features with depth when trained on natural images. We find that convolutional deep belief networks learn
substantially more invariant features in each layer. These results further justify the use of “deep ” vs. “shallower ” representations, but suggest that mechanisms beyond merely stacking one
autoencoder on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the development of future
algorithms. 1
- SIAM J. on Optimization , 1998
"... . We consider an incremental gradient method with momentum term for minimizing the sum of continuously di#erentiable functions. This method uses a new adaptive stepsize rule that decreases the
stepsize whenever su#cient progress is not made. We show that if the gradients of the functions are bounded ..."
Cited by 25 (1 self)
. We consider an incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions. This method uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made. We show that if the gradients of the functions are bounded and Lipschitz continuous over a certain level set, then every cluster point of the iterates generated by the method is a stationary point. In addition, if the gradients of the functions have a certain growth property, then the method is either linearly convergent in some sense or the stepsizes are bounded away from zero. The new stepsize rule is much in the spirit of heuristic learning rules used in practice for training neural networks via backpropagation. As such, the new stepsize rule may suggest improvements on existing learning rules. Finally, extension of the method and the convergence results to constrained minimization is discussed, as are some implementation issues and numerical exp... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1552938","timestamp":"2014-04-19T01:00:53Z","content_type":null,"content_length":"39653","record_id":"<urn:uuid:543c7ae1-19bd-45e3-98d1-c7403e7b87cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific Papers — Vol. VI
The significance of the first three terms is brought out if we suppose that r is constant (a), so that the last term vanishes. In this case the exact solution is
[equation (11) — illegible in the scan], in agreement with (10).
In the above investigation ψ is supposed to be zero exactly upon the circle of radius a. If the circle whose centre is taken as origin of coordinates be merely the circle of curvature of the curve ψ = 0 at the point (θ = 0) under consideration, ψ will not vanish exactly upon it, but only when r has the approximate value cθ³, c being a constant. In (6) an initial term R₀ must be introduced, whose approximate value is −cθ³R₁. But since R₀″ vanishes with θ, equation (7) and its consequences remain undisturbed and (10) is still available as a formula of interpolation. In all these cases, the success of the approximation depends of course upon the degree of slowness with which ψ, or r, varies.
Another form of the problem arises when what is given is not a pair of neighbouring curves along each of which (e.g.) the stream-function is constant, but one such curve together with the variation of potential along it. It is then required to construct a neighbouring stream-line and to determine the distribution of potential upon it, from which again a fresh departure may be made if desired. For this purpose we regard the rectangular coordinates x, y as functions of ξ (potential) and η (stream-function), so that [defining relations illegible in the scan], in which we are supposed to know f(ξ) corresponding to η = 0, i.e., x and y are there known functions of ξ. Take a point on η = 0, at which without loss of generality ξ may be supposed also to vanish, and form the expressions for x and y in the neighbourhood. From
x + iy = A₀ + iB₀ + (A₁ + iB₁)(ξ + iη) + (A₂ + iB₂)(ξ + iη)² + …
we derive
x = A₀ + A₁ξ − B₁η + A₂(ξ² − η²) + … [remainder of the expansion illegible in the scan]
[Footnote: the conditions of dynamical similarity for viscous fluids were formulated in this memoir. Reynolds's important application was 80 years later.] | {"url":"http://archive.org/stream/ScientificPapersVi/TXT/00000174.txt","timestamp":"2014-04-16T19:57:06Z","content_type":null,"content_length":"12620","record_id":"<urn:uuid:2de23dcb-90ec-44ec-a89a-8ab55c184d45>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
error: expected initializer before '&' token
Perhaps this will simplify things. Below is the header and then the implementation file.
#ifndef BIGINT_ABROWNIN
#define BIGINT_ABROWNIN
#include <iostream> //provides istream and ostream
#include <cstdlib> // provides size_t
namespace abrowning6 {
class BigInt {
public:
//TYPEDEFS and MEMBER CONSTANTS
typedef std::size_t size_type;
typedef int value_type;
static const size_type CAPACITY = 100;
BigInt ();
void insert(const value_type& entry);
void erase_all();
size_type size() const { return used;}
bool is_item () const;
value_type current() const;
int getB1() const {return big_int1;}
int getB2() const {return big_int2;}
friend std::ostream & operator <<
(std::ostream & outs, const BigInt & target);
friend std::istream & operator >>
(std::istream & ins, BigInt & target); // non-const: operator>> writes into target
private:
value_type data[CAPACITY];
size_type used;
size_type current_index;
int sign;
int big_int1;
int big_int2;
};
// NONMEMBER FUNCTIONS for the big_int class
BigInt operator + (const BigInt& big_int1, const BigInt& big_int2);
BigInt operator - (const BigInt& big_int1, const BigInt& big_int2);
BigInt operator * (const BigInt& big_int1, const BigInt& big_int2);
BigInt operator / (const BigInt& big_int1, const BigInt& big_int2);
BigInt operator % (const BigInt& big_int1, const BigInt& big_int2);
} // namespace abrowning6
#endif //BIGINT_ABROWNIN
#include <cassert> //provides assert
#include "big_int.h"
#include <cstdlib>
#include <iostream>
namespace abrowning6 {
BigInt::BigInt() : used(0), current_index(0), sign(+1), big_int1(0), big_int2(0)
{ // Constructor has no further work to do: all members are set in the initializer list
}
const BigInt::size_type BigInt::CAPACITY;
void BigInt::insert(const value_type& entry){
assert (size() < CAPACITY);
data[used] = entry;
++ used;
void BigInt::erase_all(){
used = 0;
BigInt operator + (const BigInt& big_int1, const BigInt& big_int2){
assert (big_int1.size() + big_int2.size() <= BigInt::CAPACITY);
BigInt sum;
// placeholder result: the digit arrays still need to be added element-by-element
// with carry (writing big_int1 + big_int2 here would recurse into this operator)
return sum;
}
BigInt operator - (const BigInt& big_int1, const BigInt& big_int2){
BigInt difference;
// placeholder result: subtract the digit arrays with borrow here
return difference;
}
BigInt operator * (const BigInt& big_int1, const BigInt& big_int2){
assert (big_int1.size() + big_int2.size() <= BigInt::CAPACITY); // a product has at most size1 + size2 digits
BigInt product;
// placeholder result: long multiplication of the digit arrays goes here
return product;
}
BigInt operator / (const BigInt& big_int1, const BigInt& big_int2){
BigInt quotient;
// placeholder result: long division of the digit arrays goes here
return quotient;
}
BigInt operator % (const BigInt& big_int1, const BigInt& big_int2){
BigInt modulus;
// placeholder result: the remainder from long division goes here
return modulus;
}
// Stream operators are defined at namespace scope: no 'friend' keyword here, and the
// std:: qualification is required (an unqualified 'ostream' is one common source of
// "error: expected initializer before '&' token").
std::ostream & operator <<(std::ostream & outs, const BigInt& target){
outs << target.getB1() << " " << target.getB2();
return outs;
}
std::istream & operator >>(std::istream & ins, BigInt& target){
ins >> target.big_int1 >> target.big_int2;
return ins;
}
} // namespace abrowning6
| {"url":"http://www.velocityreviews.com/forums/t452502-error-expected-initializer-before-and-token.html","timestamp":"2014-04-20T16:38:12Z","content_type":null,"content_length":"60198","record_id":"<urn:uuid:6e169ac5-6fd2-4298-90cc-4b00264f3453>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Yankees win the SALCS…
…Where the S stands for spreadsheet. Baseball Prospectus’s Clay Davenport reran his LCS projection numbers and has come up with new figures to express the odds of each team advancing to, and then
winning, the World Series. Before we go into how greatly the computer favors the Yankees I want to quote from Davenport’s post, because his methodology is a special kind of wonkiness.
Game 3, LA vs Philadelphia, expecting Kuroda (for the Dodgers) and Lee to pitch. The Phillies had a team EQA of .276; in a 4.5 rpg environment that works out to 5.22 runs (.276 divided by .260, raised to the 2.5, times 4.50 = 5.22). Home game, so raise by 4% to get 5.43. They're going against a RHP, and they had a .779 OPS against RHP, and .781 overall. Run scoring changes with the ratio
of OPS, squared, but we can only count on the starter to be in the game for about six innings (and frequently less). So we’ll have six innings with a run rate of 5.43 * (779/781)^2, and three
innings where we’ll use the 5.43 rate, so now we have them at 5.41. Their opponent, Kuroda, carries a 4.82 NRA but, once again, he’s only in the game for six innings. The other three go to the
Dodger bullpen, which we’ve rated – by taking the average NRA of the five relievers most likely to be used – at 2.88. The total Dodger team rating with Kuroda becomes 4.17. So we take the Philly
run total of 5.41, multiply by 4.17/4.50, to get an estimate of 5.01 runs.
If we do the same math for the Dodgers, we end up with an estimate of 3.89 runs. The win probability for the Phillies is just the Pythagorean percentage from 5.01 runs scored and 3.89 allowed –
or .624.
Because we don’t typically use it here, NRA is defined as, “Normalized Runs Allowed. ‘Normalized runs’ have the same win value, against a league average of 4.5 and a pythagorean exponent of 2, as the
player’s actual runs allowed did when measured against his league average.” Now that we have the spreadsheet nerd business out of the way, we can see how much the computer favors the Yankees.
Using the above-described simulation, the Yankees would win 73.34 percent of the time in the ALCS against the Angels. That’s a pretty heavy advantage against the Angels, and I suspect the teams are a
bit more evenly matched than that. Even more remarkably, the Yankees win the World Series in these simulations 40.55 percent of the time, against 8.3 percent for the Angels, 28.4 percent for the
Dodgers, and 22.7 percent for the Phillies.
Unfortunately for the computers, they’ll play the real games on the field. But we can still have fun with the numbers these players produced during the season. If nothing else, this shows just how
dominant the 2009 Yankees were, and should continue to be.
27 Comments»
1. i just want #27
pretty please?
2. How many SWS titles does the franchise have? I’m sure Jeter has his SWS ring from 1998.
I love math.
□ The Yanks totally won the 2001 SWS and the 2004 SWS too. That’s two right there.
☆ But didn’t they lose in 2000?
3. I’m glad he noted that CC Sabathia’s probable three starts, and Joe Saunders starting Game Two were factored in.
5. Raging clue.. RAGING CLUEEEEEEEE
6. Are these results official? Does this count?
7. I actually buy the RLYW results in the SALCS over the BP results. They have the Yankees winning the series “only” 60.6% of the time or so.
9. I do love numbers, but these statistical evaluations are now ridiculous!! This is a great site for fans and I appreciate all your work to keep us informed. With that being said, I think RAB
should stick to news updates and opinions. Leave the “advanced” statistical analysis to Theo and James.
□ This isn’t really advanced.
□ i, for one, love the statistical analysis and i’m glad the guys have dived in. it’s a part of the game – why completely disregard it?
□ They didn’t do this themselves, they’re reporting other people’s findings.
10. Statistical predictions are nice, but are you familiar with the following phrase? … "That's why they play the games."
□ “Unfortunately for the computers, they’ll play the real games on the field. ”
Uh, considering that’s exactly what I wrote, I’d say I am.
11. Yes!! See you in the Spreadsheet Canyon of Heroes!!
□ Are the Spreadsheet World Series Champion locker room shirts and hats available at Modell’s yet?
12. It’s always nice to see how many people responding who don’t know the purpose of a forecast (I work in financial forecasting).
No one is saying the Yankees WILL win. They’re saying that, sight unseen, the Yankees have a higher percentage chance of winning. So, if someone offered you the following wager: you put up $1,
and if the Yankees win, I’ll give you $2, you should take it.
13. Yeah that’s all well and good, but you know even with spreadsheet games, you don’t play them on a spreadsheet.
[That was a feeble attempt at humor intended to catch the attention of TSJC]
14. I’m very dubious about this, for two reasons.
First, just about every number that goes into this sort of thing seems to be based on empirical data that necessarily contains measurement error. That is, if you throw an OBP into the simulation
you’re saying that the number you use is the true long-run OBP, which it probably isn’t. It might be close, but mash together a lot of numbers and relationships that all contain error and the
final result is very imprecise. Or consider run environment. Isn’t that likely to be affected by the weather, and wouldn’t that change the results?
I’d be more impressed if Davenport told us what the confidence interval, or something, of that number, is.
Second, the number doesn’t check against some simple calculations. Suppose the Yankees are 55% to win any individual game against the Angels. Then their chance of winning the series is about 60%.
Are the Yankees much more than 55% to beat the Angels in a given game? Make it an unrealistic 60% and you get a series win probability of .710, still lower than the simulation's result.
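Those series figures are just the negative binomial sum over best-of-seven outcomes; a short check (function name ours, not from the comment):

from math import comb

def series_win_prob(p, needed=4):
    # P(winning 4 games before the opponent does) in a best-of-seven,
    # given per-game win probability p.
    return sum(comb(needed - 1 + j, j) * p ** needed * (1 - p) ** j
               for j in range(needed))

print(round(series_win_prob(0.55), 3))  # 0.608 -- the "about 60%"
print(round(series_win_prob(0.60), 3))  # 0.710 -- still short of BP's 0.733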
It would be interesting to know what the simulation would predict about the regular season series.
15. [...] Yankees. Back in October, on the eve of the ALCS against the Angels, we found out that the Yankees won the sALCS. Clay Davenport of Baseball Prospectus ran simulations of the ALCS and then
World Series and the [...] | {"url":"http://riveraveblues.com/2009/10/yankees-win-the-salcs-18499/","timestamp":"2014-04-20T21:07:15Z","content_type":null,"content_length":"90529","record_id":"<urn:uuid:c9f92aa2-dd50-4b0d-a654-2a8995081ea9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a Rockleigh, NJ Math Tutor
...For many GRE students, the quantitative reasoning (QR) section is the most intimidating. Sometimes that's because students feel they weren't very good at math in school, or because it has been a while since they have used formal equations in their everyday life. I help students prepare for the Q...
18 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...Pretty soon, I'm gonna be a doctor! (No, not THAT type of doctor. Please don't send me pictures of your aunt's moles.) Alright, if you're still reading, then you must be interested! I'd love to
work with you!
17 Subjects: including calculus, GRE, SAT writing, Regents
...I am also self-taught on the 6-string guitar. I graduated cum laude from the College of Mt. St.
23 Subjects: including trigonometry, algebra 1, algebra 2, geometry
I have taught Mathematics in courses that include basic skills (arithmetic and algebra), probability and statistics, and the full calculus sequence. My passion for mathematics and teaching has
allowed me to develop a highly intuitive and flexible approach to instruction, which has typically garnere...
7 Subjects: including algebra 1, algebra 2, calculus, geometry
...I graduated from University of Pennsylvania with a Bachelors of Arts in Chemistry, and am currently researching in a pharmacology group using organic chemistry. I truly love chemistry, and
spent much of my undergraduate career helping my peers understand general, organic, and physical chemistry. Often, passion and motivation are key to success in difficult courses.
8 Subjects: including algebra 1, algebra 2, chemistry, geometry | {"url":"http://www.purplemath.com/rockleigh_nj_math_tutors.php","timestamp":"2014-04-21T15:16:22Z","content_type":null,"content_length":"23740","record_id":"<urn:uuid:6f71c596-944d-4357-b466-9363e6ecdd7e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simple Math Problem
April 26th 2012, 02:16 PM
Simple Math Problem
I have what seems to be a simple math problem, but for some reason the answer key is telling me my answer is incorrect. So here goes...
One motor scooter travels at an average speed of 48 kilometres per hour. A slower scooter travels at an average speed of 36 kilometres per hour. In 45 minutes, how many more kilometres does the
faster scooter travel than the slower scooter?
My answer is 9km. I multiplied 48 by .75 which gave me 36. I then multiplied 36 by .75 which gave me 27. I subtracted the 27 from 36 to get the answer of 9. I figure 45 minutes is 3/4 or 75% aka
.75 of an hour. Apparently, according to the answer key, the answer is 7.5km. Where did I go wrong?
April 26th 2012, 06:00 PM
Re: Simple Math Problem
Hello, phenomin!
Your work is excellent . . . They are wrong!
I noted that a small change in the problem would justify their answer.
If the faster scooter had a speed of 46 kph or the slower scooter had a speed of 38 kph,
. . the answer would be 7.5 kilometers.
Could you have typed the problem incorrectly?
April 26th 2012, 06:13 PM
Re: Simple Math Problem
Hi Soroban,
Thanks for your response. This was actually a question on a math paper my girlfriend was going over and when I gave her the answer she told me I was wrong. I checked the answer key and it did say
7.5km. I assure you I did not type it out incorrectly as it is typed identically word for word. At first I told her that her answer key is wrong and that sometimes these things happen but she
seems to think they are never wrong, lol. Anyway, glad to have a second opinion from someone on a forum like this just to clarify. Thanks
April 27th 2012, 09:45 PM
Re: Simple Math Problem
Hi phenomin-
You can also reason it this way: in one hour, the faster scooter covers (48-36) km, or 12 km, more than the slower one.
Thus, in 45 minutes, the difference in distance covered will be (45/60) x 12, or 0.75 x 12, which comes out to 9 km.
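For anyone who wants to see it run, a quick check in Python (just an illustration; values straight from the problem):

fast, slow = 48, 36            # km/h
hours = 45 / 60                # 45 minutes
print((fast - slow) * hours)   # 9.0 -- the original answer holds

# The key's 7.5 km requires changed conditions, e.g. a 10 km/h speed gap:
print(10 * hours)              # 7.5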
So, in case you wanted a third opinion... (Itwasntme)
An answer of 7.5 km is valid only if either
1) The difference in scooter speeds is 10 km/h, or
2) The time given is 37.5 minutes. | {"url":"http://mathhelpforum.com/math-challenge-problems/197966-simple-math-problem-print.html","timestamp":"2014-04-19T05:07:16Z","content_type":null,"content_length":"6037","record_id":"<urn:uuid:25f3db9d-1b4f-432c-96a9-72ebb49a7c46>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some Generalized Hardy Spaces
Loren David Meeker
Department of Mathematics, Stanford University, 1964 - Analytic functions - 182 pages
| {"url":"http://books.google.com/books?id=jHlCAAAAIAAJ&q=fr(x&dq=related:OCLC6310222&source=gbs_word_cloud_r&cad=5","timestamp":"2014-04-18T06:52:17Z","content_type":null,"content_length":"100231","record_id":"<urn:uuid:98c16a7a-7724-47ed-82ac-a4e7b522723b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |