Integration by Parts
May 30th 2007, 07:16 PM #1
Jan 2007
Integration by Parts
I need help with 4 of these. I tried them, but I keep getting the wrong answer.
1. Evaluate the integral of (sin(8x))^3*(cos(8x))^2*dx. Use C to denote an arbitrary constant.
2. Evaluate the integral of (cos(5x))^3*dx. Use C to denote an arbitrary constant.
3. Evaluate the integral of csc(3x)dx. Use C to denote an arbitrary constant.
4. Evaluate the integral of (2-2sinx)/cosx dx. Use C to denote an arbitrary constant.
$\int \cos^3 x dx = \int \cos x \cos^2 x dx = \int \cos x (1-\sin^2 x) dx$
Let $t=\sin x$ then $dt=\cos x \, dx$:
$\int (1-t^2)dt = t - \frac{1}{3}t^3+C$
$\sin x - \frac{1}{3}\sin^3 x +C$
Now to find,
$\int \cos^3 5x dx$
Just use the substitution $u=5x$ and do what I do above.
$\int \sin^3 8x \cos^2 8x dx$
Let $u=8x$ so $du=8\,dx$; then
$=\frac{1}{8} \int \sin^3 u \cos^2 u du$
$=\frac{1}{8} \int \sin^2 u \cos^2 u \sin u du$
$=\frac{1}{8} \int (1-\cos^2 u)\cos^2 u \sin u du$
Let $t=\cos u$ so $dt=-\sin u \, du$; then,
$-\frac{1}{8} \int (1-t^2)t^2 dt$
You finish!
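As a quick sanity check on the first antiderivative above (an illustrative verification, not part of the original thread), differentiating sin x − (1/3) sin³x numerically should recover cos³x:

```python
import math

def F(x):
    """Candidate antiderivative of cos^3 x: sin x - (1/3) sin^3 x."""
    return math.sin(x) - math.sin(x) ** 3 / 3

# central-difference derivative of F should match the integrand cos^3 x
h = 1e-6
for x in (0.3, 1.1, 2.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - math.cos(x) ** 3) < 1e-6
```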
Lyubov Chumakova, University of Wisconsin
Joel Louwsma, University of Michigan
Sharon Lutz, University of Colorado, Boulder
Alan Tarr, Pomona College
Chris Willmore, Purdue University
Racheal Allen, California State University Northridge
Maria G. Uribe, California State University Northridge
Adilson Eduardo Presoto, Universidade Federal de Sao Carlos (UFSCar)
Anderson Tiago da Silva, Universidade Federal de Vicosa
Anne Caroline Bronzi, Universidade Estadual de Campinas (UNICAMP)
Jose Regis Azevedo Varao Filho, UNICAMP
Leonardo Barichello, UNICAMP
Patricia Romano Cirilo, Universidade Federal de Minas Gerais
Roberta Camelucci Carrocine, UFSCar
Welington Vieira Assuncao, Universidade Estadual do Estado de Sao Paulo, Rio Claro
Brazilian participants were supported by the University Foundation, FAEP.
Organizers: M. Helena Noronha, California State University Northridge, and Marcelo Firer, UNICAMP.
Faculty Advisors: Renato H. L. Pedrosa, Plamen Emilov Koshlukov, Ana Friedlander (UNICAMP), Yuriko Baldin (UFSCar), and M. Helena Noronha, California State University Northridge.
Long division and dyslexia
I have a dyslexic 9-year-old and I wish to find out if your program can benefit him. I have tried Math-U-See and it became too boring or frustrating for him. I have tried various other programs
in hope of not overwhelming him. He has completed the Delta Math-U-See level but I feel he needs more work on long division. He simply gets very frustrated with it because of the length of time it takes and knowing
where to place the numbers, due to his dyslexia. Is your curriculum a spiral curriculum? Any suggestions would help.
One of my books from the Blue Series goes through long division in several small steps:
Math Mammoth Division 2
I would suggest that for a dyslexic child, have him do ALL the problems on a squared paper (grid paper). That will help him place the numbers right. Not all the problems in my division book are done
with the grid... but for your son, it may be necessary to always use such paper.
Secondly, when you teach on the board or on paper, at each step COLOR the whole column of ones, or tens, or hundreds (whichever you are working on). I have not done exactly what I explain now
in my book; I'm just suggesting you try it: color the column you're looking at, at each step.
This will help him focus on the specific place value, such as hundreds, and help him place the digit in the quotient in the hundreds column, write the product, and calculate the difference in that column.
Apart from those tips, it might also help if you check whether he understands the REMAINDER concept outside long division. For example,
16 / 5 = ??
42 / 10 = ??
He must be able to do those well in order to some day understand why long division works. HOWEVER, it's possible for children to learn the motions of long division without understanding why it works.
So, definitely do not discontinue long division just because he doesn't grasp why it works. That understanding may come later on.
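If you want to double-check his remainder answers, the quotient–remainder relationship behind long division is easy to verify; here is a tiny Python sketch (just an illustration of the arithmetic, not part of the curriculum mentioned above):

```python
# divmod returns (quotient, remainder): 16 / 5 = 3 r 1, and 42 / 10 = 4 r 2
assert divmod(16, 5) == (3, 1)
assert divmod(42, 10) == (4, 2)

# the identity long division relies on: dividend = quotient * divisor + remainder
for dividend, divisor in [(16, 5), (42, 10), (987, 32)]:
    q, r = divmod(dividend, divisor)
    assert dividend == q * divisor + r and 0 <= r < divisor
```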
My curriculum does not use a "short" spiral like Saxon/Abeka/Horizons. It is more mastery based. However, different concepts are reviewed and studied usually on 2 or 3 neighboring grade levels, and I
also use problems about new concepts that also use previous concepts so they cannot be forgotten. For example, once they learn about writing addition and subtraction from the same picture, that
is used to learn fact families, which is also used later on with multiplication.
Hope this helps.
[R] binom and negbinom
Troels Ring tring at mail1.stofanet.dk
Mon Sep 25 17:45:50 CEST 2000
Dear friends. In Carlin and Louis, "Bayes and Empirical Bayes Methods..."
(1996), the classical example of 12 independent tosses of a fair coin
producing 9 heads and 3 tails is given. If the situation is seen as a fixed
sample of 12, a binomial likelihood is used, and Carlin et al. report a
probability of 0.075.
Using sum(dbinom(9:12,12,.5)) I obtain 0.073
Likewise, if the experiment is seen as continuing until 3 tails are noted,
a negative binomial is used, and the authors find P = 0.0325, whereas
sum(dnbinom(9:1000,3,.5)) gives 0.0327.
These differences may be small - but who is right, either R or Carlin - or
did I do it wrong ?
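Both tail probabilities can be recomputed exactly with a short standard-library script (an independent check, not part of the original post); the exact fractions agree with the dbinom/dnbinom sums quoted above:

```python
from math import comb

# Binomial: P(X >= 9) for X ~ Bin(12, 0.5); exact value is 299/4096
binom_tail = sum(comb(12, k) for k in range(9, 13)) / 2 ** 12

# Negative binomial: P(9 or more heads before the 3rd tail),
# pmf(k) = C(k + 2, 2) * 0.5^(k + 3); the tail is 67/2048
nbinom_tail = 1 - sum(comb(k + 2, 2) * 0.5 ** (k + 3) for k in range(9))

print(round(binom_tail, 4), round(nbinom_tail, 4))  # 0.073 0.0327
```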
Best wishes
r-help mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !) To: r-help-request at stat.math.ethz.ch
More information about the R-help mailing list
Posts from April 13, 2009 on The e-Astronomer
Imaginary Worlds
April 13, 2009
Talking of things that don’t exist … I have been reading a fascinating book by Paul Nahin called “An Imaginary Tale: The Story of the Square Root of Minus One”. Somebody somewhere didn’t want me to
read this book. I bought it in December for a trip back to the UK. I lost it, but it turned up in the kitchen of the hotel I was staying in. The waitress who returned it stared at all the maths and
said “Is it in Chinese ?”. Really. Then, a week later, just as I was reaching chapter four, I turned a page … and it was blank. Every alternate page was blank for the next sixty four. I felt like I
was inside a Borges story. It all accentuated the feeling of unearthing arcane knowledge. Back in California I returned the book to a puzzled bookstore, and eventually got a new copy back. It then
sat on my shelf for a few months until I re-discovered it. I am happy to report it was worth the wait. Lots of fun.
It took hundreds of years for i to be accepted. The first key step was seeing imaginary numbers appear as an intermediate step in the solutions of cubics, where the final solution is perfectly real –
so the appearance of imaginary numbers is not simply the sign of a non-physical situation. The second key step was the invention of a way to visualise complex numbers a+ib as points in a 2D plane –
the Argand diagram, actually first invented by Wessel. This made complex numbers feel real, but also, especially together with the complex exponential form, made a whole bunch of calculations much
quicker. Since that time, complex numbers have been an indispensable part of the weaponry of mathematicians, physicists, and engineers, and people love using them to make things somehow seem simpler.
A beautiful example from the early twentieth century is the use of “imaginary time” in relativity.
In ordinary space, the interval between two points, ds^2=dx^2+dy^2+dz^2 is conserved if you transform to a different co-ordinate system. In spacetime, the quantity that is conserved is the distinctly
less obvious expression ds^2=-c^2dt^2+dx^2+dy^2+dz^2. But you can recover the nice spacelike expression if you replace time with imaginary time, t’=ict.
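The substitution is easy to watch numerically; here is a small illustrative check (displacement values chosen arbitrarily) that the Minkowski interval equals the Euclidean-looking sum once dt is replaced by the imaginary quantity ic·dt:

```python
c = 3.0e8
dt, dx, dy, dz = 2.0e-9, 0.1, 0.2, 0.3   # arbitrary small displacements

# spacetime interval with the explicit minus sign
minkowski = -(c * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

# "imaginary time" coordinate: dw = i * c * dt, so dw^2 = -c^2 dt^2
w = 1j * c * dt
euclidean = (w ** 2 + dx ** 2 + dy ** 2 + dz ** 2).real

assert abs(minkowski - euclidean) < 1e-12
```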
But even in the twentieth century some people were uncomfortable with this sort of thing. Nahin has a beautiful quote from a physicist criticising Einstein and Minkowski’s use of imaginary numbers
this way :
The square root of minus one has a legitimate application in pure mathematics, where it forms part of various ingenious devices for handling otherwise intractable situations. It has also a
limited value in mathematical physics … as an essential cog in a mathematical device. In these legitimate cases, having done its work it retires gracefully from the scene… The criterion for
distinguishing sense from nonsense has been lost; our minds are ready to tolerate anything if it comes from a man of repute and is accompanied by an array of symbols in Clarendon type.
This distinction between reality and mathemical convenience is a worrying one. The neat thing about i is that even though it doesn’t exist, you can manipulate it using the ordinary rules of
arithmetic and get the right answer. Hamilton was uncomfortable with this, and painfully reproduced the advantages of complex numbers in a more acceptable way by defining algebraic couples (a,b)
and defining the product of two couples as (a,b)(c,d)=(ac-bd,ad+bc). Many years before, mathematicians were even uncomfortable with the idea of a negative number; in a similar manner you can of
course carefully define operational rules so such things never appear.. but, hey, relax, it works !
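Hamilton's couple product is just complex multiplication written without any mention of i; a tiny sketch (illustrative only) checks it against Python's built-in complex type:

```python
def couple_product(p, q):
    """Hamilton's rule for couples: (a, b)(c, d) = (ac - bd, ad + bc)."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# agrees with ordinary complex arithmetic: (2 + 3i)(4 - i) = 11 + 10i
z = complex(2, 3) * complex(4, -1)
assert couple_product((2, 3), (4, -1)) == (z.real, z.imag)

# the couple (0, 1) squares to (-1, 0), i.e. i^2 = -1, with no i in sight
assert couple_product((0, 1), (0, 1)) == (-1, 0)
```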
So is mathematics invented or discovered ? Of course the same issue arises for physical theory. Do our concepts and theories describe a true reality that we have unearthed – or they simply
calculating devices, that enable us to predict measured quantities ? Our nervousness about this problem depends critically on distance from sensory experience. You have to be pretty much of a pedant
to deny the existence of magnetic fields. Wave those two magnets near each other and you can feel it. Electrons and protons are weird but pretty safe. You can’t feel the effect on your muscles, but
you can see the needle deflect each time an electron at a time hits the cathode in your lab. Quarks hover around the border. Their presence seems clear in that plot you read in the consortium paper,
from data collected from a huge machine over many years and carefully filtered. You know that was all rigorously done, but you can’t help feeling maybe if you were smart enough there could be a
different set of concepts and calculations that would produce pretty much the same curve. Then finally we reach string theory, where some folk are messianic, and others are openly sceptical.
At the end of the day, most scientists are pragmatic. We only worry about the metaphysics when the facts aren’t in. One good hard prediction is all we need …
"Please share examples of how you have used these tools to help train teachers. Sometimes how we use these tools to teach adults can spark ideas for how to use them in the classroom.
I am not interested in finding the absolute BEST ways to implement web tools into the classroom. I am looking for ALL examples of how teachers have used these tools. Teachers have this amazing
way of taking a project and putting their own spin on it to make it work for them and their students, but many times we just need an example to get us going."
Angles between planes
September 14th 2011, 10:35 AM
Angles between planes
I need to find the angle between the planes x-z=1 and 2y+z=1
So the normal vectors for the planes would be
<1,0,-1> and <0,2,1> Correct?
And the magnitudes of those vectors would be
(2)^(1/2) and (5)^(1/2) Correct?
Thus Cos(Theta)= -1/((2^.5)*(5^.5))
Leaving me with an angle of 1.8925 Radians?
Is this all correct?
September 14th 2011, 10:51 AM
Re: Angles between planes
Looks good to me, though I've been having trouble since my hara-kiri.
September 14th 2011, 10:58 AM
Re: Angles between planes
Well, this is the wrong answer...unless the teacher is wrong. So I'm trying to figure out what's wrong here.
September 14th 2011, 11:07 AM
Re: Angles between planes
September 14th 2011, 11:23 AM
Re: Angles between planes
Yes. How do I find this angle?
September 14th 2011, 11:32 AM
Re: Angles between planes
September 14th 2011, 11:44 AM
Re: Angles between planes
September 14th 2011, 11:51 AM
Re: Angles between planes
When two planes (or two lines) intersect they make four angles but only two distinct angles- the "opposite" (vertical) angles are congruent. The two distinct angles between two planes add to $\pi$ radians. $\pi - 1.8925 = 1.2491$ radians.
As to why that formula did not immediately give you the correct answer, note that <-1, 0, 1> is also normal to the first plane and if you had used that, you would have got $\cos(\theta)= \frac{1}{\sqrt{2}\sqrt{5}}$. I.e., to get the smaller angle, use the absolute value.
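The whole computation fits in a few lines; for instance, this small script (illustrative only) reproduces the acute angle directly by taking the absolute value of the dot product:

```python
import math

n1, n2 = (1, 0, -1), (0, 2, 1)            # normals to x - z = 1 and 2y + z = 1

dot = sum(a * b for a, b in zip(n1, n2))  # equals -1
norm1 = math.sqrt(sum(a * a for a in n1))
norm2 = math.sqrt(sum(a * a for a in n2))

theta = math.acos(abs(dot) / (norm1 * norm2))

assert abs(theta - 1.2491) < 1e-3             # the acute angle, in radians
assert abs((math.pi - theta) - 1.8925) < 1e-3 # its supplement, from the signed dot product
```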
September 14th 2011, 01:23 PM
Re: Angles between planes
Thanks for all the help guys! | {"url":"http://mathhelpforum.com/calculus/187992-angles-between-planes-print.html","timestamp":"2014-04-19T00:14:37Z","content_type":null,"content_length":"9338","record_id":"<urn:uuid:bb6339c5-77b6-4002-8859-4537dcbee165>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fluid mechanics and Waves
That's tricky- most decent fluids texts use tensors.
You may like Tritton's "Physical Fluid Dynamics"
Segel's "Mathematics applied to Continuum Mechanics" is available as a Dover text, and is quite good- the math is also very approachable.
For an impressively good read with *no* math, I recommend Vogel's "Life in Moving Fluids".
CalcuDoku rules
Following the footsteps of Sudoku, Kakuro and other Number Logic puzzles, CalcuDoku is one more family of easy to learn addictive logic puzzles which were invented in Japan. Using logic together with
the four math operations, these fascinating puzzles offer endless fun and intellectual entertainment to puzzle fans of all skills and ages.
CalcuDoku are math based puzzles coupled with logic. Unlike other logic puzzles, CalcuDoku uses addition, subtraction, multiplication and division in ways which are deeper and more gratifying than
anyone can imagine.
CalcuDoku puzzles come in many sizes and range from very easy to extremely difficult taking anything from five minutes to several hours to solve. However, make one mistake and you’ll find yourself
stuck later on as you get closer to the solution...
If you like Sum Sudoku, Kakuro and other logic puzzles, you will love Conceptis CalcuDoku as well!
SingleOp CalcuDoku
Each puzzle consists of a grid containing blocks surrounded by bold lines. The object is to fill all empty squares so that the numbers 1 to N (where N is the number of rows or columns in the grid)
appear exactly once in each row and column and the numbers in each block produce the result shown in the top-left corner of the block according to the math operation appearing on the top of the grid.
In CalcuDoku a number may be used more than once in the same block.
DualOp CalcuDoku
Each puzzle consists of a grid containing blocks surrounded by bold lines. The object is to fill all empty squares so that the numbers 1 to N (where N is the number of rows or columns in the grid)
appear exactly once in each row and column and the numbers in each block produce the result of the math operation shown in the top-left corner of the block. In CalcuDoku a number may be used more
than once in the same block.
QuadOp CalcuDoku
Each puzzle consists of a grid containing blocks surrounded by bold lines. The object is to fill all empty squares so that the numbers 1 to N (where N is the number of rows or columns in the grid)
appear exactly once in each row and column and the numbers in each block produce the result of the math operation shown in the top-left corner of the block. In CalcuDoku a number may be used more
than once in the same block.
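The rules translate directly into a checking routine; here is a minimal sketch (the subtraction/division convention of taking the largest value first is an assumption, since the rules above don't spell it out):

```python
from functools import reduce
from math import prod

def block_result(values, op):
    """Result a block of cell values produces under the given operation."""
    if op == '+':
        return sum(values)
    if op == 'x':
        return prod(values)
    vs = sorted(values, reverse=True)   # assumed convention: largest value first
    if op == '-':
        return vs[0] - sum(vs[1:])
    if op == '/':
        return reduce(lambda a, b: a / b, vs)

def is_valid(grid):
    """Check that each row and column contains 1..N exactly once."""
    n = len(grid)
    target = set(range(1, n + 1))
    return (all(set(row) == target for row in grid) and
            all(set(col) == target for col in zip(*grid)))

assert is_valid([[1, 2, 3], [2, 3, 1], [3, 1, 2]])
assert block_result([2, 3, 4], 'x') == 24
assert block_result([2, 6], '/') == 3
```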
Illustration 17.6
Illustration 17.6: Plucking a String
Please wait for the animation to completely load.
A green string of length L = 28 cm (position is given in centimeters) is shown plucked to x = 6 cm and y = 3 cm. The unstretched position of the string is shown in gray. Changing the slider changes
this plucking point along the length of the string in the x direction (the y point of the pluck remains the same). You may also look at the Fourier components that make up the green stretched string
by clicking on an n value. The relative size of these sine waves is depicted by the graph on the right. Restart.
We have thus far looked at using a Fourier series to describe an arbitrary periodic wave (see Illustration 16.5 and Illustration 16.6). For the plucked string, we must consider a different way to add
up waves to get the Fourier series. Here we must consider any wave that is zero at the ends of the string (since the plucked string, like a standing wave, has ends that are tied down). Therefore, we
find that our plucked string can be described in terms of a Fourier series as
f(x) = Σ A[n] sin (n*π*x/L),
where in the animation L = 28 cm (see Illustration 16.5 and Illustration 16.6 for more details on the periodic case).
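The coefficients A[n] for a triangular pluck can be computed directly; the sketch below (values matching the animation: L = 28 cm, pluck at x = 6, y = 3) integrates numerically and checks against the standard closed form for a plucked string, which is assumed here rather than taken from the page:

```python
import math

L, d, h = 28.0, 6.0, 3.0   # string length, pluck position, pluck height (cm)

def pluck(x):
    """Triangular initial shape of the plucked string."""
    return h * x / d if x <= d else h * (L - x) / (L - d)

def A(n, steps=20000):
    """A_n = (2/L) * integral of pluck(x) sin(n pi x / L) dx, midpoint rule."""
    dx = L / steps
    return (2 / L) * sum(pluck((k + 0.5) * dx) *
                         math.sin(n * math.pi * (k + 0.5) * dx / L) * dx
                         for k in range(steps))

def A_exact(n):
    """Closed form for a triangular pluck at x = d with height h."""
    return (2 * h * L ** 2 * math.sin(n * math.pi * d / L)
            / (math.pi ** 2 * n ** 2 * d * (L - d)))

for n in range(1, 6):
    assert abs(A(n) - A_exact(n)) < 1e-4
```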
When you get a good-looking graph, right-click on it to clone the graph and resize it for a better view.
Illustration authored by Morten Brydensholt, Wolfgang Christian, and Mario Belloni.
Script authored by Morten Brydensholt, Wolfgang Christian, and Mario Belloni.
why does an led have to be given a resistor before it
If a device draws only its required amount of current from a supply, then why does an LED have to be given a resistor before it? Won't it just take 20 mA from a 3 volt, 5 amp power supply?
I checked many instructables for it but couldn't find a proper explanation to my question, so I thought of asking it.
And people say that an LED should not be directly connected to any supply as it may blow, get damaged, or burn.
But I have done that many times, and also made a torch for my uncle using two white LEDs of 3.5 V (not sure, but they are big) with a 9 V battery and no resistor... it lasted till the
battery drained but never blew... and yes, they're in series.
and thanks in advance for answering me.. and sorry for bad image quality..used my phone
The LED is a semiconductor device - an active component. The current rating is not what it will draw but the maximum it can stand before over heating and melting.
The LED is almost a short circuit when connected so that it conducts - forward biased - so if it can, it will draw as much current as the battery or power supply can provide - a bad thing.
Often LEDS - especially high powered ones are operated via a constant current device to remove all these issues.
To operate the LED you need to limit the current to the recommended level to avoid destroying the device. This way you can vary the supplied voltage a little and still keep the current within bounds.
Take the LED voltage away from the supply voltage thus leaving the voltage you need to drop across the resistor.
You now need to use Ohms law Volts=Current x Resistance to calculate the value of the resistor for a give current - the LED operating current. If an LED needs 3 volts and operates at 10mAmps and the
supply is 9 volts then: 9-3=6 volts across the resistor. V=I x R can be changed round to be R=V / I So R=6 / .010 amp (10 milliamps.) = 600 ohms.
So a 600 ohm resistor at 6 volts will draw 10 milliamps. The current that passes through the resistor also passes through the LED so all is OK.
thank you very much for the detailed reply...i didnt know that leds can draw current infinitely until burnt...and you also gave the explanation for why it does that... and good thing you gave the
calculation example also.. and i had also uploaded an image of a 5 minute battery i made using 2 leds but it didnt show up with the question
The LED wants 20mA to be used most effectively. More current and it will start to over heat and potentially be damaged, and less current it will not light as brightly or at all.
20mA is the recommended optimal value, 5 amps is 250 times that much and will incinerate it quickly.
You must also take into account the forward voltage, or the recommended voltage of the LED. It probably wants between 2 and 2.5 volts to run most effectively. Any more and it will burn; any less and
it won't turn on.
Also, 5 amps is the maximum amount of current the power supply will give. Meaning if you shorted it you would get 5 amps, but you could get any amount of current below 5 amps out of it with the right resistor.
V=IR (Voltage equal the Current (in amps) times the resistance in Ohms)
we can rewrite that as R=V/I
In this case R= (3-2.5)/.02 (The voltage across the resistor is the total voltage minus the voltage drop of the LED, NOT the total voltage)
so R=.5/.02 = 25 ohms. Unfortunately you probably can't get a 25 ohm resistor so you must use a close value like 33 or 45 ohms.
The post below mostly covered this. If you want further information abotu Ohms law or LED resistor calculation check this site out
We have to add a resistor because a cell never gives out exactly the voltage mentioned on it; you can check this by using a multimeter. The resistor drops the excess voltage and also prevents the LED from dying.
This is an excellently posed question, and Rick has given a great answer! Even though it's already answered, I have Featured it so that, perhaps, other I'bles readers can learn from it. Thank you!
thank you....well heres the 5 minute torch i made...trying to upload the image in the comment instead of the question as it didnt work then..
Those higher-powered LED's probably are designed to be used like that, and they have built-in power control circuitry inside the clear plastic.
An important point with LEDs is that the relationship between current drawn and voltage drop isn't linear as with a resistor - it behaves almost like a fixed voltage drop which will sink whatever
current you try to put through it until it reaches its limit and burns out.
This means that it's much more sensitive to the applied voltage than a resistor, so you either need to drive it from a true current source (which you can make using transistors), or by using a
resistor to regulate the current to the level you want.
The way to work out the resistance is as follows:
- before you start you need to know the voltage you're driving it from, the max current the LED can take, and the LED's characteristic voltage drop (between 1.5-4V depending on the colour). You can
look this up in the data sheet if there is one or test it with a multimeter.
- What you're aiming at is for the circuit to stabilise at a point where the LED is dropping its characteristic voltage, and the remaining voltage drop from the supply is taken up by the resistor.
- So, you subtract the LED voltage from the supply voltage to get the voltage drop across the resistor.
- Now you need to choose a resistance which will drop that voltage at the current you want.
- The right value is given by Ohm's law: V = I * R, which you can rearrange to make R=V / I
- In other words, divide the voltage drop across the resistor by the current you want and you'll get the right answer!
(Incidentally this only applies for normal LEDs which are sold without a built in resistor - LEDs that are sold as 5v, 9v, 12v etc already have a resistor built in and don't need another one.)
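The arithmetic in the answers above boils down to one formula; as a quick sketch:

```python
def led_resistor(v_supply, v_led, i_led):
    """Series resistor in ohms: R = (V_supply - V_forward) / I_desired."""
    return (v_supply - v_led) / i_led

# the two worked examples from this thread
assert abs(led_resistor(9, 3, 0.010) - 600) < 1e-6   # 9 V supply, 3 V LED, 10 mA
assert abs(led_resistor(3, 2.5, 0.020) - 25) < 1e-6  # 3 V supply, 2.5 V LED, 20 mA
```

In practice you would round the result up to the nearest standard resistor value so the LED runs at or below its rated current.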
Rick's answer is, indeed, very good. Here are a few other thoughts on how to think of what is happening. As Rick says, an LED is a semiconductor device. However, when he said it was an "active" device, I think he was thinking of the term "non-linear" device. A non-linear device is an electronic component where the voltage is not proportional to the current. (A resistor IS a linear device; if you double the voltage across it, for example, the current through it doubles.)

So, again, the LED is non-linear. As you increase the voltage across it, nothing happens at first. No current flows. You increase the voltage - more - and more, and still nothing happens. Then suddenly, when you increase the voltage just a little bit more, a big current flows. The voltage at which this happens varies from LED to LED. It is often somewhere in the 3 volt to 4 volt range.

The purpose of the resistor in series with the LED is to limit the current flowing through both of them. There are lots of posts that tell you how to do the mathematics to decide on what resistor to use.

As many people have found, they can sometimes hook up an LED (or a chain of LEDs) to a battery without a resistor in series, and it works fine. This is because the battery voltage is not much more than the "threshold" voltage and because the internal resistance of the battery is enough to limit the current. In other words, there IS a resistor in the circuit, even though you can't see it.

A typical current for small LEDs is about 10 to 20 mA. However, I played with some very bright ones that are rated for 350 mA. I pushed the current to 400 mA before I chickened out. They were too bright to look at.
If a device draws only its required amount of current from a supply.
If you struck an LED with lightning, would it only draw 20 mA or would it vapourise?
Unless you know of current-regulated LEDs (?) what goes through them is related to how hard you "push" in Volts. You need to control current with LEDs as they've got little resistance, unlike
filament bulbs for example.
There are, indeed, current-regulated LEDs. They have a minimum voltage and a maximum voltage, and will draw their rated current from any supply between those limits. They are really handy when
driving a series string of LEDs; just put one constant-current unit in the string, and make sure that the supply voltage is high enough (CCLED minimum + (number of ordinary LEDs * forward drop of an
ordinary LED). This is especially good for automotive applications, where the supply voltage can vary. Sometimes you can get away with operating an ordinary LED without a series resistor. The current
will be limited by the internal resistance of the source, and the LED will get hot, increasing its forward drop. Not something to depend on unless you have detailed information about the LED and the
It's because very few times can you run them at exactly the right voltage. If you look at an LED calculator and plug in a 3 volt power supply, 3 volt LED, 25 mA, it will show a resistor of 1 ohm. That
is nothing. It doesn't even show up. A long piece of wire might have more resistance than that. It shows a resistance because the calculator has been programmed to include one even if it's not needed.
- ACM COMPUTING SURVEYS , 1998
"... We survey parallel programming models and languages using 6 criteria [:] should be easy to program, have a software development methodology, be architecture-independent, be easy to understand,
guranatee performance, and provide info about the cost of programs. ... We consider programming models in ..."
Cited by 134 (4 self)
Add to MetaCart
We survey parallel programming models and languages using 6 criteria [:] should be easy to program, have a software development methodology, be architecture-independent, be easy to understand,
guranatee performance, and provide info about the cost of programs. ... We consider programming models in 6 categories, depending on the level of abstraction they provide.
- Proc. 16th Annual ACM Symposium on Theory Of Computing , 1984
Cited by 78 (1 self)
Abstract. A parallel algorithm is presented that accepts as input a graph G and produces a maximal independent set of vertices in G. On a P-RAM without the concurrent write or concurrent read features, the algorithm executes in O((log n)^4) time and uses O((n/(log n))^3) processors, where n is the number of vertices in G. The algorithm has several novel features that may find other applications. These include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a "dynamic pigeonhole principle" that generalizes the conventional pigeonhole principle.
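The paper's contribution is the parallel machinery, but the object it computes is simple to state. For reference, here is the trivial sequential greedy procedure for a maximal (not maximum) independent set; this is only an illustration, not the paper's algorithm:

```python
def greedy_mis(adj):
    """Sequential greedy maximal independent set: scan vertices in a fixed
    order and keep a vertex whenever none of its neighbours was kept."""
    chosen = set()
    for v in sorted(adj):
        if all(u not in chosen for u in adj[v]):
            chosen.add(v)
    return chosen

# Path graph 0-1-2-3: the greedy scan keeps 0, skips 1, keeps 2, skips 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(greedy_mis(path))   # {0, 2}
```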
- Communications of the ACM , 1983
Cited by 17 (0 self)
foremost recognition of technical contributions to the computing community. The citation of Cook's achievements noted that "Dr. Cook has advanced our understanding of the complexity of
computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the
foundations for the theory of NP-completeness. The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active and important research
activities in computer science for the last decade. Cook is well known for his influential results in fundamental areas of computer science. He has made significant contributions to complexity
theory, to time-space tradeoffs in computation, and to logics for programming languages. His work is characterized by elegance and insights and has illuminated the very nature of computation."
During 1970-1979, Cook did extensive work under grants from the
- Journal of Supercomputing , 2000
Cited by 15 (11 self)
In this paper, we present deterministic and probabilistic methods for simulating PRAM computations on linear arrays with reconfigurable pipelined bus systems (LARPBS). The following results are established in this paper. (1) Each step of a p-processor PRAM with m = O(p) shared memory cells can be simulated by a p-processor LARPBS in O(log p) time, where the constant in the big-O notation is small. (2) Each step of a p-processor PRAM with m = ω(p) shared memory cells can be simulated by a p-processor LARPBS in O(log m) time. (3) Each step of a p-processor PRAM can be simulated by a p-processor LARPBS in O(log p) time with probability larger than 1 − 1/p^c for all c > 0. (4) As an interesting byproduct, we show that a p-processor LARPBS can sort p items in O(log p) time, with a small constant hidden in the big-O notation. Our results indicate that an LARPBS can simulate a PRAM very efficiently. Keywords: Concurrent read, concurrent write, deterministic simulation, linear
, 1994
Cited by 11 (0 self)
Machine. This uses combining networks on a butterfly topology with a hashed address space to try and hide the network latency. [Abolhassan et al., 1991] analyses Ranade's approach in a quantitative way by giving cost models for implementing various parts of the PRAM machine. This is then used to demonstrate an improvement on Ranade's Fluent machine using multiple butterflies and parallel slackness. It is then shown that the proposed improved Fluent machine would have a similar price/performance ratio to conventional distributed memory architectures. Other attempts at realising the PRAM model involve its simulation on conventional distributed memory architectures. This method usually involves hashing the address space of the PRAM across the distributed memory of the machine and replication of variables [Mehlhorn and Vishkin, 1984], or using multiple hash functions [Abolhassan et al., 1991]. 2.2 BSP A Bulk Synchronous Parallel machine consists of a number of processor memo...
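The address-hashing idea mentioned in this entry, spreading the PRAM's shared address space over distributed memory modules, can be sketched in a few lines; the module count and address range below are arbitrary illustrative values, not from any of the cited papers:

```python
import hashlib
from collections import Counter

def home_module(address, p):
    """Map a shared-memory address to one of p memory modules by hashing,
    so that contiguous address ranges do not all land in one module."""
    digest = hashlib.sha256(str(address).encode()).digest()
    return int.from_bytes(digest[:8], "big") % p

# 10_000 consecutive addresses spread over 16 modules come out roughly
# balanced instead of concentrating in a single hot spot.
loads = Counter(home_module(a, 16) for a in range(10_000))
print(min(loads.values()), max(loads.values()))
```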
, 1996
Cited by 11 (11 self)
In this paper we present a new randomized selection algorithm on the Bulk-Synchronous Parallel (BSP) model of computation along with an application of this algorithm to dynamic data structures, namely Parallel Priority Queues (PPQs). We show that our algorithms improve upon previous results in both the communication requirements and the amount of parallel slack required to achieve optimal performance. We also establish that optimality to within small multiplicative constant factors can be achieved for a wide range of parallel machines. While these algorithms are fairly simple themselves, descriptions of their performance in terms of the BSP parameters are somewhat involved. The main reward of quantifying these complications is that it allows transportable software to be written for parallel machines that fit the model. We also present experimental results for the selection algorithm that reinforce our claims.
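The abstract's sequential building block, randomized selection, fits in a few lines; this is a textbook quickselect sketch, not the authors' BSP algorithm:

```python
import random

def quickselect(values, k):
    """Return the k-th smallest element (0-based) by randomized selection."""
    xs = list(values)
    while True:
        pivot = random.choice(xs)
        lower = [x for x in xs if x < pivot]
        equal = [x for x in xs if x == pivot]
        if k < len(lower):
            xs = lower                       # answer lies below the pivot
        elif k < len(lower) + len(equal):
            return pivot                     # pivot itself is the answer
        else:
            k -= len(lower) + len(equal)     # recurse into the upper part
            xs = [x for x in xs if x > pivot]

print(quickselect([7, 1, 5, 3, 9], 2))   # 5 (the median)
```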
, 1994
Cited by 10 (6 self)
A new model of parallel computation is proposed, CLUMPS (Campbell's Lenient, Unified Model of Parallel Systems). This is composed of an abstract machine with an associated cost model, and aims to be
more portable, reflective of costs, expressible and encouraging of more efficient implementations of algorithms than other existing models. It is shown that each basic parallel architecture class can
congruently perform each other's computations, but the congruent simulation of each other's communication is not generally possible (where for a simulation to be congruent the simulation costs on the
target architecture are asymptotically equivalent to the implementation costs on the native architectures). This is reflected in the CLUMPS abstract machine through its flexibility in terms of
program control and memory access. The congruence requirement is relaxed so that though strict congruence may not be achieved according to the above definition, communication costs are reflectively
accounted ...
, 1996
Cited by 8 (2 self)
Parallel computers have been successfully deployed in many scientific and numerical application areas, although their use in non-numerical and database applications has been scarce. In this report,
we first survey the architectural advancements beginning to make general-purpose parallel computing cost-effective, the requirements for non-numerical (or symbolic) applications, and the previous
attempts to develop parallel databases. The central theme of the Bulk Synchronous Parallel model is to provide a high level abstraction of parallel computing hardware whilst providing a realisation
of a parallel programming model that enables architecture independent programs to deliver scalable performance on diverse hardware platforms. Therefore, the primary objective of this report is to
investigate the feasibility of developing a portable, scalable, parallel object database, based on the Bulk Synchronous Parallel model of computation. In particular, we devise a way of providing
high-level abstra...
, 2001
Cited by 8 (1 self)
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by f(a), where f: N → N is the memory cost function, and the h distinct values of f model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h = 2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter for the HMM inc...
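On the uniform-cost RAM, the underlying problem has a classic dynamic program. A minimal sketch with successful-search frequencies only (no unsuccessful searches and no HMM cost function, so this is the simplified textbook variant, not the paper's setting):

```python
from functools import lru_cache

def optimal_bst_cost(freq):
    """Minimum total search cost of a BST over keys 0..n-1 on the uniform
    RAM model, where freq[i] weights key i and each extra level of depth
    adds one full pass over the subtree's total weight."""
    prefix = [0]
    for f in freq:
        prefix.append(prefix[-1] + f)

    @lru_cache(maxsize=None)
    def best(i, j):              # keys i..j inclusive, empty if i > j
        if i > j:
            return 0
        weight = prefix[j + 1] - prefix[i]
        return weight + min(best(i, r - 1) + best(r + 1, j)
                            for r in range(i, j + 1))

    return best(0, len(freq) - 1)

print(optimal_bst_cost((34, 8, 50)))   # 142: heaviest key at the root wins here
```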
, 1996
Cited by 8 (2 self)
In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. Our approach is similar in spirit to the "complementary slackness" technique for latency hiding but has the advantage that the slackness does not need to be provided by the programmer and that large slowdowns are not needed in order to hide the latency. For example, given any algorithm that runs in T steps on an n-node ring with unit link delays, we show how to run the algorithm in O(T) steps on any n-node bounded-degree connected network with average link delay O(1). This is a significant improvement over prior approaches to latency hiding, which require slowdowns proportional to the maximum link delay (which can be quite large in comparison to the average delay). In the case when the network has average link delay d_ave, our simulation runs in O(√(d_ave · T)) ...
What classes to take?
But I wonder if, in order to learn anything in that field, one would also need to take courses like magnetism and/or quantum physics?
I really don't know which courses are required to take optics. I think that you should inquire with someone at your university. But an understanding of optics can be achieved without studying Electromagnetism and Quantum Physics. For an in-depth understanding of optics one would probably require knowledge of Quantum Physics and most definitely Electromagnetism.
I wonder if those courses will be too tough.
Only you can be the judge of that. See this page:
Search a little about the subjects in question, to get things into perspective.
If you start answering (right now it would be interesting to know what those 5 courses I listed are useful for?)
group theory
- Physics (don't know about biophysics, though)
Statistics
- virtually any mathematical science
- Physics (mostly statistical and quantum physics) and, of course, statistics
- I have no idea what that is.
La Crescenta SAT Math Tutor
Find a La Crescenta SAT Math Tutor
...Further, I have always had a great interest in grammar, so my Spanish grammar continues to improve in advanced grammar topics. I am a math major at Caltech currently doing research in graph
theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving hard problems in discrete math with a friend.
28 Subjects: including SAT math, chemistry, Spanish, calculus
...I have the certificate of Chinese speaking. I have used SAS for more than 5 years. I am very familiar with using SAS for multivariate linear regression models, generalized linear models and
other advanced data management techniques.
10 Subjects: including SAT math, calculus, statistics, HTML
...I have a good grasp of commonly tested topics, and I have an organized systems based approach of teaching that integrates concepts in both fields. I have previously tutored students in
strategizing for college applications. Based on their GPA and SAT score, desired location, and financial budge...
43 Subjects: including SAT math, English, writing, reading
...Through tutoring, I have learned how to cater to the individual. There are many ways to come to the same conclusion, and I know that every single person is different. Therefore I adapt to my
students to discover what the best teaching method is for him and/or her.
18 Subjects: including SAT math, geometry, algebra 1, GRE
...For example, I have home-schooled a 12-year old, tutored part-time at Huntington Learning Center and for the past two summers, I have taught SAT Math and Geometry in a classroom environment
(Top Learning Center and Perfect Score Academy). I actually just took an SAT where I got a perfect 800 in m...
73 Subjects: including SAT math, English, reading, writing | {"url":"http://www.purplemath.com/la_crescenta_ca_sat_math_tutors.php","timestamp":"2014-04-21T07:18:09Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:c82a84c4-0867-4078-a118-64689c1a1578>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic and Geometric Topology 1 (2001), paper no. 34, pages 699-708.
The mapping class group of a genus two surface is linear
Stephen J. Bigelow, Ryan D. Budney
Abstract. In this paper we construct a faithful representation of the mapping class group of the genus two surface into a group of matrices over the complex numbers. Our starting point is the
Lawrence-Krammer representation of the braid group B_n, which was shown to be faithful by Bigelow and Krammer. We obtain a faithful representation of the mapping class group of the n-punctured sphere
by using the close relationship between this group and B_{n-1}. We then extend this to a faithful representation of the mapping class group of the genus two surface, using Birman and Hilden's result
that this group is a Z_2 central extension of the mapping class group of the 6-punctured sphere. The resulting representation has dimension sixty-four and will be described explicitly. In closing we
will remark on subgroups of mapping class groups which can be shown to be linear using similar techniques.
Keywords. Mapping class group, braid group, linear, representation
AMS subject classification. Primary: 20F36. Secondary: 57M07, 20C15.
DOI: 10.2140/agt.2001.1.699
E-print: arXiv:math.GT/0010310
Submitted: 2 August 2001. (Revised: 15 November 2001.) Accepted: 16 November 2001. Published: 22 November 2001.
Stephen J. Bigelow, Ryan D. Budney
Department of Mathematics and Statistics, University of Melbourne
Parkville, Victoria, 3010, Australia
Department of Mathematics, Cornell University
Ithaca, New York 14853-4201, USA
Email: bigelow@unimelb.edu.au, rybu@math.cornell.edu
Los Altos Hills, CA Trigonometry Tutor
Find a Los Altos Hills, CA Trigonometry Tutor
...As an undergrad at Harvey Mudd, I helped design and teach a class on the software and hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring
for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important.
27 Subjects: including trigonometry, chemistry, calculus, physics
...In fact, I enjoyed teaching so much that I kept on doing it to this day. Through the years, I have worked mostly with junior high and high schoolers, but I have also worked with kids as young
as 4th graders and adults at university or community colleges. My number one goal is the academic success of my students.
11 Subjects: including trigonometry, chemistry, calculus, physics
...I graduated with an applied mathematics B.A. and a minor in mathematical education, so I am more than qualified to answer any questions your students may have. My teaching philosophy is to help
the students understand where the formulas come from. By doing this, the students won't just have to memorize formulas; they will be able to derive them if they need to.
9 Subjects: including trigonometry, calculus, geometry, algebra 1
...I have done private tutoring for students in 3rd, 5th, and 6th grade, as well as tutored geometry and physics for a 10th grade student and Calculus AB for a 12th grade student. My favorite
subjects to tutor are high school math, physics, and Spanish. I worked as the Children's Programming Coord...
27 Subjects: including trigonometry, chemistry, English, reading
...I can offer tutoring in many subjects, but my specialties are science, math, and computer programming (I favor Python). I hope to teach you more than just what you need for tomorrow's test. I
hope you will also learn how to keep learning on your own!I am a professional research biologist, with s...
17 Subjects: including trigonometry, chemistry, writing, geometry | {"url":"http://www.purplemath.com/Los_Altos_Hills_CA_trigonometry_tutors.php","timestamp":"2014-04-20T11:09:53Z","content_type":null,"content_length":"24808","record_id":"<urn:uuid:e0f4d5c2-9dd6-4ce9-a090-8a807d33afa9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponential Form
Students can learn about the exponential form here. They can solve problems based on the methodology explained in the solved examples.
In Algebra, the study of working with algebraic equations and variables is an important topic, and so is the study of exponents. The exponential form is a way of expressing numbers and variables as powers. It has 2 components:
• The Base- which could be a number or a variable
• The Exponent- the power to which the base is raised.
The exponential form often represents the degree of an equation. An exponent gives the number of times the base (a variable or a number) is multiplied by itself.
Example: `x^(2)` is x * x
`2^(4)`= 2*2*2*2= 16
Definition of Exponential Form
Students can learn the definition of exponential form here.
An exponential function is one whose value grows at a rate proportional to its current value. A numeric function is called exponential if it is of the form
[ f(a)=`e^(a)` ]
where a is the independent variable.
* The constant e has the value e ≈ 2.71828183
* Exponential functions are commonly written using e (or exp)
The rules for expressing in exponential form are:
1) `x^(a+b)` = `x^(a)` `x^(b)`
2) `x^(ab)` = `(x^a)^b`
3) `x^(0)` = 1
4) `x^(-a)` = [1/`x^(a)`]
Calculating Exponentials Online
Students can learn Calculating exponentials online. They can also get help from the online tutors for understanding the steps involved.
Example 1: 270000
here we can see it has four zeroes after the 27. Therefore we can write it as 27 x 10^4
Example 2: Write the number in exponential form with one digit - 1,320,000,000.
Here it is asked to write the number with a single digit before the decimal point.
We can write it as 1.32 x `10^(9)`:
after the first digit (1) there are 9 more digits,
so we write it with the power 9.
Example 3: 63/10000
let us change the denominator to a power of 10
here 10,000 = `10^(4)`
so 1/10,000 = `10^(-4)`
then 63/ `10^(4)` can be written as 63 * `10^(-4)`
or for single digit we write it as [(6.3)*10] *`10^(-4)`
so 6.3 * `10^(-3)`
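Both worked examples apply one mechanical rule: shift the decimal point until a single digit remains before it, and count the shifts. A small sketch of that rule:

```python
def to_exponential(x):
    """Split x into (mantissa, exponent) with one digit before the point,
    so that x == mantissa * 10**exponent and 1 <= |mantissa| < 10."""
    mantissa, exponent = f"{x:e}".split("e")
    return float(mantissa), int(exponent)

print(to_exponential(1_320_000_000))   # (1.32, 9)
print(to_exponential(63 / 10_000))     # (6.3, -3)
print(to_exponential(270_000))         # (2.7, 5), the standard one-digit form
```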
Students can get more help with the topic on the algebra homework help page. | {"url":"http://www.tutornext.com/math/exponential-form.html","timestamp":"2014-04-17T12:37:56Z","content_type":null,"content_length":"12116","record_id":"<urn:uuid:853d2ad4-f3da-4e58-86be-d5264773f6b4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Balance & Harmony
Tom Hiddleston holding Chris Hemsworth’s baby. Tom with a baby. TOM wiTh a bA by
HERE TUMBLR HAVE THIS
I had to stare for like 10 minutes to make sure it was real
EVERY SINGLE EASTER MY MOTHER HIDES A THREE POUND EASTER EGG IN THE HOUSE AND SETS MY BROTHERS AND I OFF TO GO FIND IT AND GUESS WHO GOT IT FOR THE FOURTH CONSECUTIVE YEAR IN A ROW
NOT THOSE LIL
As a woman I have no country. As a woman I want no country. As a woman, my country is the whole world. — Virginia Woolf
wow I wish I could take beautiful pictures from plane windows
You can. Just get a camera or phone and take one.
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts in than circle donuts if the circumference of the circle touched each of the corners of the square donut. So you might end up with more donuts.
But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole?
A round donut with radius R1 occupies the same space as a square donut with side 2R1. If the center circle of a round donut has a radius R2 and the hole of a square donut has a side 2R2, then the area of a round donut is πR1² − πR2². The area of a square donut would then be 4R1² − 4R2². This doesn’t say much, but in general and throwing numbers, a full box of square donuts has more donut per donut than a full box of round donuts.
The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R2 = R1/4) and replacing in the proper expressions, we have 27.6% more donut in the square one (round: 15πR1²/16 ≈ 2.94R1², square: 15R1²/4 = 3.75R1²). Now, assuming a large center hole (R2 = 3R1/4), we have 27.7% more donut in the square one (round: 7πR1²/16 ≈ 1.37R1², square: 7R1²/4 = 1.75R1²). This tells us that, approximately, we’ll have a 27% bigger donut if it’s square than if it’s round.
tl;dr: Square donuts have 27% more donut per donut in the same space as a round one.
so my boyfriend and I tried roleplaying the other day and we did the whole “professor and bad student who needs to pass” thing, only he wanted to be the professor, so I had to be the horny and
failing student. I’m the valedictorian of my senior class of 400 and I have a horrible phobia of flunking, so when he whispered “you’re failing my class, you naughty girl” in my ear, I started crying
and we had to stop
My friend and her bf just broke up and she called me crying and I was all like “You’re going to fall in love so many times before you find the one you’ll be with forever. So think of it this way;
you’re one heartbreak closer to happily ever after.” and I think she thought I was being deep and insightful, but really I was quoting wizards of waverly place | {"url":"http://wellfightaslongaswelive.tumblr.com/","timestamp":"2014-04-19T09:36:44Z","content_type":null,"content_length":"70731","record_id":"<urn:uuid:89d3e70c-599a-4b9c-83a9-dfea5608ed2f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Transient Two-Dimensional Heat Conduction Using Chebyshev Collocation
Consider the two-dimensional heat equation given by
∂θ/∂t = α (∂²θ/∂x² + ∂²θ/∂y²), with 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1,
which represents heat conduction in a two-dimensional domain. The boundary conditions are such that the temperature, θ, is equal to 0 on all the edges of the domain:
θ(0, y, t) = θ(1, y, t) = 0 for 0 ≤ y ≤ 1,
and
θ(x, 0, t) = θ(x, 1, t) = 0 for 0 ≤ x ≤ 1.
Without loss of generality, one can take the thermal diffusivity, α, equal to 1. The initial condition is a prescribed temperature distribution θ(x, y, 0) for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1.
The dimensionless temperature, θ, can be found using either NDSolve or the Chebyshev collocation technique. As shown in the table of data, the two methods give the same results. Also, three contours of the dimensionless temperature, θ (i.e., 0.25, 0.5, and 0.75), at a fixed value of the dimensionless time are shown using the red curves and dashed black curves for the solution obtained with NDSolve and Chebyshev collocation, respectively. Again, perfect agreement is observed. Finally, you can set the value of the dimensionless time, and the contour plot tab lets you display the contour plot of the solution obtained using the Chebyshev collocation method.
In the discrete Chebyshev–Gauss–Lobatto case, the interior points are given by x_j = cos(jπ/N), j = 1, …, N − 1. These points are the extremums of the Chebyshev polynomial of the first kind, T_N(x).
The Chebyshev derivative matrix at the quadrature points, D, is an (N + 1) × (N + 1) matrix given by
D_00 = (2N² + 1)/6, D_NN = −(2N² + 1)/6, D_jj = −x_j/(2(1 − x_j²)) for j = 1, …, N − 1, and D_ij = (c_i/c_j)(−1)^(i+j)/(x_i − x_j) for i ≠ j, i, j = 0, …, N,
where c_i = 2 for i = 0 or N and c_i = 1 otherwise.
The discrete Laplacian is given by L = I ⊗ D̃² + D̃² ⊗ I, where I is the identity matrix, ⊗ is the Kronecker product operator, D² = D·D, and D̃² is D² without the first row and first column.
An affine transformation allows shifting from the interval [−1, 1] to [0, 1].
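The entry formulas above translate directly into code. Here is a pure-Python sketch of the standard construction (the diagonal uses the equivalent negative-row-sum identity; the Demonstration itself is written in Mathematica):

```python
import math

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points
    x_j = cos(j*pi/N), j = 0..N, built from the entry formulas above."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [(2.0 if j in (0, N) else 1.0) * (-1.0) ** j for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) / (x[i] - x[j])
        D[i][i] = -sum(D[i])    # row sums vanish: D of a constant is zero
    return D, x

# Sanity check: D applied to x^3 reproduces 3x^2 (exact for degree < N).
D, x = cheb(8)
deriv = [sum(D[i][j] * x[j] ** 3 for j in range(9)) for i in range(9)]
assert all(abs(deriv[i] - 3 * x[i] ** 2) < 1e-10 for i in range(9))
```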
[1] L. N. Trefethen, Spectral Methods in MATLAB, Philadelphia: SIAM, 2000.
A Bayesian method for the induction of probabilistic networks from data
Results 1 - 10 of 847
, 1995
Cited by 981 (70 self)
In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the
Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the
context of criticism of P -values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the
context of five scientific applications in genetics, sports, ecology, sociology and psychology.
- Machine Learning, 1995
Cited by 913 (38 self)
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence
and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user’s prior knowledge. In particular, a user can express his knowledge—for
the most part—as a single prior Bayesian network for the domain. 1
- LEARNING IN GRAPHICAL MODELS, 1995
- Journal of Computational Biology, 2000
Cited by 731 (16 self)
DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in
computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems. In this paper, we propose a new framework for discovering
interactions between genes based on multiple expression measurements. This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a
graph-based model of joint multivariate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe
complex stochastic processes and because they provide a clear methodology for learning from (noisy) observations. We start by showing how Bayesian networks can describe interactions between genes. We
then describe a method for recovering gene interactions from microarray data using tools for learning Bayesian networks. Finally, we demonstrate this method on the S. cerevisiae cell-cycle
measurements of Spellman et al. (1998). Key words: gene expression, microarrays, Bayesian methods. 1.
, 1997
Cited by 587 (22 self)
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with
state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for
inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian
classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the
same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of
California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
, 2002
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible.
For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However,
HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete
random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of
models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as
follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T 3), where T is the length of the sequence; an exact smoothing algorithm that
takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic
approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering
to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of
DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
- MACHINE LEARNING, 1997
Cited by 465 (7 self)
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does
this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new
evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we
demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm
and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning
works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
- Papers from the 1998 Workshop, AAAI, 1998
Cited by 386 (6 self)
In addressing the growing problem of junk E-mail on the Internet, we examine methods for the automated construction of filters to eliminate such unwanted messages from a user’s mail stream. By
casting this problem in a decision theoretic framework, we are able to make use of probabilistic learning methods in conjunction with a notion of differential misclassification cost to produce
filters which are especially appropriate for the nuances of this task. While this may appear, at first, to be a straightforward text classification problem, we show that by considering
domain-specific features of this problem in addition to the raw text of E-mail messages, we can produce much more accurate filters. Finally, we show the efficacy of such filters in a real world usage
scenario, arguing that this technology is mature enough for deployment.
- In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995
Cited by 311 (2 self)
When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by
discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation.
For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each
conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests
that kernel estimation is a useful tool for learning Bayesian models. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers, San Mateo, 1995
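A toy sketch of the contrast this abstract describes; the data and bandwidth below are invented for illustration and are not taken from the paper. On bimodal data a single fitted Gaussian puts its peak between the modes, while an averaged-kernel estimate tracks both:

```python
import math

def gaussian_density(x, data):
    # Parametric estimate: fit one Gaussian by sample mean and standard deviation.
    mu = sum(data) / len(data)
    var = sum((v - mu) ** 2 for v in data) / (len(data) - 1)
    sd = math.sqrt(var)
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def kernel_density(x, data, h=0.5):
    # Nonparametric estimate: average of Gaussian kernels centred on the data points
    # (h is an illustrative fixed bandwidth, not the paper's choice).
    total = sum(math.exp(-0.5 * ((x - v) / h) ** 2) for v in data)
    return total / (len(data) * h * math.sqrt(2 * math.pi))

bimodal = [-2.0] * 50 + [2.0] * 50
# gaussian_density is highest at 0 (between the modes);
# kernel_density is highest near the modes at -2 and 2.
```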
- Communications of the ACM, 1995
Cited by 299 (13 self)
We examine a graphical representation of uncertain knowledge called a Bayesian network. The representation is easy to construct and interpret, yet has formal probabilistic semantics making it
suitable for statistical manipulation. We show how we can use the representation to learn new knowledge by combining domain knowledge with statistical data. 1 Introduction Many techniques for
learning rely heavily on data. In contrast, the knowledge encoded in expert systems usually comes solely from an expert. In this paper, we examine a knowledge representation, called a Bayesian
network, that lets us have the best of both worlds. Namely, the representation allows us to learn new knowledge by combining expert domain knowledge and statistical data. A Bayesian network is a
graphical representation of uncertain knowledge that most people find easy to construct and interpret. In addition, the representation has formal probabilistic semantics, making it suitable for
statistical manipulation (Howard,... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=6408","timestamp":"2014-04-18T07:23:48Z","content_type":null,"content_length":"39295","record_id":"<urn:uuid:f858e374-7e84-48a4-bf82-d14a2ba865b2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fullerton, CA Algebra Tutor
Find a Fullerton, CA Algebra Tutor
...I passed CSET math sections. I have five years' combined experience with all different levels of students. I recently obtained a teaching assistant position (fieldwork experience) at a high
school in Orange County.
15 Subjects: including algebra 2, algebra 1, calculus, chemistry
...I am currently employed through the college and have been tutoring for about 1.5 years. I tutor anything from precalculus down. If you have had a bad experience with math in the past, do not ...
5 Subjects: including algebra 1, algebra 2, precalculus, trigonometry
Hi! I've been teaching high school math for over 20 years, and I teach every type of class, from Algebra 1 to Honors Algebra 2. With two sons in college (USC and Cal), I need to make some extra
money in the late afternoons or early evenings.
5 Subjects: including algebra 1, algebra 2, SAT math, linear algebra
...I just graduated from UCLA with a degree in Physiological Science and am currently taking prereqs before applying to grad school in speech-language pathology. My teaching experience includes:
1 year high school biology, 3 months reading & writing with elementary school kids, 3 months reading & w...
25 Subjects: including algebra 2, algebra 1, English, reading
My name is Chris, and I may be the ideal tutor for you! I am a graduate of the University of California, Irvine. I'm also finishing my credential program to become a math teacher.
11 Subjects: including algebra 2, algebra 1, geometry, SAT math
Related Fullerton, CA Tutors
Fullerton, CA Accounting Tutors
Fullerton, CA ACT Tutors
Fullerton, CA Algebra Tutors
Fullerton, CA Algebra 2 Tutors
Fullerton, CA Calculus Tutors
Fullerton, CA Geometry Tutors
Fullerton, CA Math Tutors
Fullerton, CA Prealgebra Tutors
Fullerton, CA Precalculus Tutors
Fullerton, CA SAT Tutors
Fullerton, CA SAT Math Tutors
Fullerton, CA Science Tutors
Fullerton, CA Statistics Tutors
Fullerton, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/fullerton_ca_algebra_tutors.php","timestamp":"2014-04-17T21:43:08Z","content_type":null,"content_length":"23732","record_id":"<urn:uuid:8e60c112-1f4a-4226-adcb-386a3086fa9a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
java code for fuzzy c means clustering algorithm
Posted by:
Created at: Friday 12th of April 2013 05:11:05 AM
Last Edited Or Replied at: Monday 08th of July 2013 03:44:31 AM
Please send me java source code for fuzzy c-means clustering ...
Posted by:
Created at: Thursday 07th of March 2013 10:45:21 PM
Last Edited Or Replied at: Friday 08th of March 2013 02:53:47 AM
i am doing project in microarray image segmentation.
can you help me for spatial fuzzy c means clustering ...
Posted by:
Created at: Friday 18th of January 2013 01:53:03 AM
Last Edited Or Replied at: Friday 18th of January 2013 01:53:03 AM
can you provide numerical calculations for fuzzy c mean and ...
Posted by:
Created at: Wednesday 16th of January 2013 11:31:05 PM
Last Edited Or Replied at: Wednesday 16th of January 2013 11:31:05 PM
can you provide clear explanation fo...
FUZZYLOGIC BASED INFORMATION FUSION FOR IMAGE SEGMENTATION
Posted by: smart paper boy
Created at: Monday 20th of June 2011 05:14:11 AM
Last Edited Or Replied at: Friday 03rd of February 2012 01:05:42
there was a third region (beyond True and False) where these opposites tumbled about. Other, more modern philosophers echoed his sentiments, notably Hegel, Marx, and Engels. But it was Lukasiewicz who first proposed a systematic alternative to the bi-valued logic of Aristotle ...
Archimedes' Method of Estimating Pi
Date: 5/29/96 at 21:24:35
From: Larry Sherman
Subject: Archimedes' method of estimating pi - ?
Tell me about Archimedes' method for estimating pi using inscribed and
circumscribed polygons about a circle.
Date: 5/30/96 at 14:38:30
From: Doctor Darrin
Subject: Re: Archimedes' method of estimating pi - ?
Archimedes knew that the area of a circle was pi * r^2. He estimated
the value of pi by estimating the area of a circle with radius 1 (and
area pi).
To do this, he would calculate the area of a regular polygon inscribed
in the circle. Since the polygon would be entirely contained in the
circle, it would have an area less than the area of the circle. For
instance, if we inscribed a regular hexagon in a circle of radius 1,
we could divide the hexagon into 6 equilateral triangles, each having
sides of length one. The area of the triangles would then be about
.433, so the area of the hexagon is 6*.433=2.60. Thus, we see that pi
is greater than 2.6.
If we circumscribe a hexagon around a circle, then we can divide it
into six equilateral triangles each having area .577, so the hexagon
has area 3.46. Since the circle is inside the hexagon, it has area
less than 3.46, so we see that pi is less than 3.46.
Archimedes did much better than this - he used regular polygons with
96 sides, and found that pi is between 3+(10/71) and 3+(1/7).
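Those bounds are easy to reproduce numerically. Here is a quick sketch using closed-form areas for regular n-gons around a unit circle; note this uses modern trigonometry, whereas Archimedes worked by repeatedly doubling the number of sides, and his famous 96-gon bounds were based on perimeters rather than areas:

```python
import math

def polygon_bounds(n):
    """Areas of regular n-gons inscribed in / circumscribed about a unit circle."""
    # n isosceles triangles with apex angle 2*pi/n at the centre
    inscribed = 0.5 * n * math.sin(2 * math.pi / n)
    circumscribed = n * math.tan(math.pi / n)
    return inscribed, circumscribed

for n in (6, 12, 96):
    lo, hi = polygon_bounds(n)
    print(f"n = {n:3d}: {lo:.4f} < pi < {hi:.4f}")
```

At n = 6 this reproduces the 2.60 and 3.46 above, and by n = 96 the squeeze is already very tight, in the spirit of Archimedes' 3+(10/71) < pi < 3+(1/7).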
I hope this helps.
-Doctor Darrin, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/54824.html","timestamp":"2014-04-21T05:58:06Z","content_type":null,"content_length":"6945","record_id":"<urn:uuid:e2469399-fef8-4f7f-b5e1-a963520c9b41>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
1 April 2002 Vol. 7, No. 13
THE MATH FORUM INTERNET NEWS
NCTM Annual Meeting | Numericana.com
Math Books Recommended by Mr. Brandenburg
NCTM ANNUAL MEETING - MATH FORUM PRESENTATIONS
Attend one of our sessions at the NCTM Annual Meeting:
Session 116
The Math Forum: Realizing a Vision for
Supporting Teaching and Learning Mathematics on
the Internet
Session 272
Standards-aligned Mathematics Lessons
Using Technology as a Tool
Session 425
Girl Power and PoW
Session 834
Ask-Your-Colleagues: The Math Forum Online
Teacher2Teacher Service
Session 927
Enhancing Mathematical Communication
Using Problems of the Week from the Math Forum
Session 984
Using the Math Forum as a Medium for
Professional Learning
NUMERICANA.COM
The online companion of Numericana, this site contains
excerpts from Gerard Michon's book, including the entire
glossary of scientific terms. Browse an index with summaries,
browse by popularity, or search nearly two hundred of his
"final answers" of readers' questions, organized into the following categories:
- Measurements and Units
- Counting, Probability, Utility
- Geometry and Topology
- Algebra
- Trigonometry, Elementary Functions, Special Functions
- Calculus
- Analysis, Convergence, Series, Complex Analysis
- Set Theory and Logic
- Number Theory
- Recreational Mathematics
- Miscellaneous
- History, Nomenclature, Vocabulary, etc.
- Physics
- Practical Formulas
- Trivia
- Polyhedra
- Mathematical Games (Strategies)
- Unabridged Answers
including a 1961 lecture by Richard P. Feynman on
physical units:
Use the "Ask Gerard" link to e-mail your own question to the author.
MATH BOOKS RECOMMENDED BY MR. BRANDENBURG
From others' recommendations and from his own reading as well,
Guy Brandenburg compiled a list of about 80 math-related books,
mostly recent, for his geometry students to choose from, read,
and do a report on. The list is also available for download
as a MS Word document, text file or PDF file.
LITERATURE AND MATHEMATICS
From the Math Forum's Teacher2Teacher service, answers to
a frequently asked question: How can I teach mathematics
using literature?
Use the submission form to share your favorite book with
the T2T community.
CHECK OUT OUR WEB SITE:
The Math Forum http://mathforum.org/
Ask Dr. Math http://mathforum.org/dr.math/
Problems of the Week http://mathforum.org/pow/
Mathematics Library http://mathforum.org/library/
Teacher2Teacher http://mathforum.org/t2t/
Discussion Groups http://mathforum.org/discussions/
Join the Math Forum http://mathforum.org/join.forum.html
Send comments to the Math Forum Internet Newsletter editors
_o \o_ __| \ / |__ o _ o/ \o/
__|- __/ \__/o \o | o/ o/__/ /\ /| |
\ \ / \ / \ /o\ / \ / \ / | / \ / \ | {"url":"http://mathforum.org/electronic.newsletter/mf.intnews7.13.html","timestamp":"2014-04-20T09:24:02Z","content_type":null,"content_length":"7321","record_id":"<urn:uuid:df8cc0cc-dd88-49a8-a5d7-39e3d5623537>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recent Entries
Taking advantage of DW's new community sticky feature:
This is a community for people following along with
MIT 6.00: Introduction to Computer Science and Programming
, which is part of their OpenCourseWare project; the lectures, reading assignments and problem sets are all free and available online. The goal is to do a first run-through from November 2009 -
February 2010, although obviously you can go at your own pace, and it would be great to muster up enough people to support each other on different timeframes (one lecture a week or starting after the
new year).
Feel free to join in at any point! No programming experience necessary.
ETA: If you don't have a dreamwidth account but would like one, check out
or drop me a line. | {"url":"http://intro-to-cs.dreamwidth.org/","timestamp":"2014-04-20T15:51:16Z","content_type":null,"content_length":"112228","record_id":"<urn:uuid:d0435ec1-91c0-465e-bd0a-dbde65af347c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can a corollary follow a conjecture?
It is typical to find a corollary following a theorem, but is it right to use the word corollary for a statement following a conjecture, where the statement is true only if the unproven conjecture is true?
convention mathematical-writing
2 Such statements are said to have "conditional proofs". en.wikipedia.org/wiki/Conditional_proof . I'm voting to close as "not a real question" – Gjergji Zaimi Mar 11 '10 at 4:15
4 I think it's a reasonable question. – Noah Snyder Mar 11 '10 at 4:31
I don't like this business of voting down stuff, especially when I do like the question. – Anonymous Mar 11 '10 at 4:31
I didn't give it a vote down, the poster asked a clear question. The only debatable thing is whether this the right place to find the answer. Maybe this should be discussed at meta (or maybe there
is a discussion already), but at least change the tag? :-) – Gjergji Zaimi Mar 11 '10 at 4:35
I retagged the question. – Kevin H. Lin Mar 11 '10 at 5:44
5 Answers
I think it's generally bad form to have a corollary dependent on an earlier conjecture. I recommend one of the following:
Theorem: Assuming Conjecture A, properties X, Y and Z are true.
Theorem: Conjecture A implies X, Y and Z.
Most importantly, it should be crystal clear that the result is dependent on the conjecture.
I would write "Proposition Z: If X holds, then Y is true." Even if the deduction of Y from X were trivial, I think labelling this a corollary would be confusing. (After all, what is the
statement "X implies Y" a corollary of?) However, I wouldn't have a problem writing something like "as we saw above, Y would be a corollary of X" later on. (The subjunctive voice is important here!)
2 +1 for mentioning the subjunctive (and to be serious, I agree with this and with Douglas' take). – Yemon Choi Mar 11 '10 at 5:13
I'm reminded of the following story that I posted on my personal web journal a couple years ago:
At the Topology seminar yesterday, the speaker presented a theorem, which he immediately followed with a refinement: a statement that directly and obviously implies the theorem. He
labeled his refinement a "corollary". I turned to Noah Snyder, and said that it was more an "uncorollary, or an anticorollary", but as soon as I said as much, the two of us
up vote 9 simultaneously correctly labeled the refinement as a "rollary".
down vote
There should be more rollaries in mathematical writing.
3 I am not sure if I understand your joke correctly, but shouldn't it be contrarollary? – Willie Wong Mar 11 '10 at 12:51
Making a new theorem environment that lets you have the bolded part say "Corollary to Conjecture X" seems to me a good compromise: concise and unlikely to confuse anyone.
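For instance, with amsthm one can define a one-off unnumbered environment whose bolded heading carries the conjecture's name (a sketch; the environment name, "Conjecture X", and the statement are placeholders):

```latex
\documentclass{article}
\usepackage{amsthm}

% Unnumbered environment whose heading names the conjecture explicitly.
\newtheorem*{conjcor}{Corollary to Conjecture X}

\begin{document}
\begin{conjcor}
If Conjecture X holds, then Y is true.
\end{conjcor}
\end{document}
```

Alternatively, a numbered \newtheorem{cor}{Corollary} used as \begin{cor}[of Conjecture X] produces the heading "Corollary 1 (of Conjecture X)".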
2 I'm sure I've written stuff like "Corollary (of Conjecture X)" in class notes, but I think that in careful writing one should be even more clear about the conditional nature of a result
than this. Strictly speaking it makes no sense for a conjecture to have a corollary, because the logical statuses are completely different. The more careful statement: "Proposition [or
Theorem, or whatever]: Conjecture X implies Y" seems preferable. – Pete L. Clark Mar 11 '10 at 9:44
I also think that there is room for improvement in the way mathematicians state conditional results. In some subfields it seems to be almost forgotten that certain standard conjectures
1 are not known to be true, so you see things like "Theorem: Something amazing (conditional on GRH)" (which, by the way, is not even one conjecture but a bundled together family of
conjectures) Or, to hit closer to home: "We show something fantastic relating analytic, Mordell-Weil and Selmer ranks (assuming the finiteness of Sha)". I say boo: put your assumptions
first! – Pete L. Clark Mar 11 '10 at 9:48
I actually think this is much clearer than "Prop: X implies Y." If you're skimming through you can't miss that "Corollary to conjecture" means it hasn't been proven, whereas if you have
a proposition and you phrase it a little bit poorly a skimmer might think the conclusion had been proven. The clearest thing is to have the fact that it's unproven in bold. – Noah Snyder
Mar 11 '10 at 17:00
Here's an example of when this construction is used: jlms.oxfordjournals.org/cgi/pdf_extract/s1-43/1/146 – Douglas S. Stones Mar 13 '10 at 22:32
The correct term for such an item is CONJOLLARY. ;)
Isn't that a relative of the Bandersnatch? – Yemon Choi Mar 11 '10 at 18:48
Calculator for confidence intervals of relative risk

This calculator works off-line (IE4). Programme written by DJR Hutchon. Enter data into the boxes "A", "B", "C", and "D".
Group characteristic:

│                         │ Treatment (A+C) │ Control (B+D) │
│ Total "better" (A+B)    │ A =             │ B =           │
│ Total "no better" (C+D) │ C =             │ D =           │
Relative risk R =
95% confidence interval =
A permanent record of the analysis can be obtained by printing the page.
Ref: Gardner M J and Altman D G. Statistics with confidence. BMJ Publications, reprint 1994, pp. 51-52.
Relative risk R = (A/(A+C)) / (B/(B+D))
Standard error of log relative risk: SElogR = sqrt((1/A) - (1/(A+C)) + (1/B) - (1/(B+D)))
lower limit = exp(log(R) - 1.96*SElogR)
upper limit = exp(log(R) + 1.96*SElogR)
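The formulas above translate directly into a few lines of Python (an illustrative sketch, not part of the original page; the function name `relative_risk_ci` is ours):

```python
from math import exp, log, sqrt

def relative_risk_ci(A, B, C, D, z=1.96):
    """Relative risk and 95% CI from a 2x2 table, per the formulas above."""
    rr = (A / (A + C)) / (B / (B + D))                   # (A/(A+C)) / (B/(B+D))
    se_log_rr = sqrt(1/A - 1/(A + C) + 1/B - 1/(B + D))  # SE of log(RR)
    lower = exp(log(rr) - z * se_log_rr)
    upper = exp(log(rr) + z * se_log_rr)
    return rr, lower, upper

# Example: 10/30 "better" on treatment vs 5/30 on control
rr, lo, hi = relative_risk_ci(A=10, B=5, C=20, D=25)
# rr == 2.0, 95% CI approximately (0.78, 5.15)
```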
This calculator is for educational use. It is believed accurate but no responsibility for accuracy of the results is accepted by the author. David J R Hutchon BSc, MB, ChB, FRCOG Consultant
Gynaecologist, Memorial Hospital, Darlington, England.
You are welcome to keep this page and use the calculator off-line. I would appreciate an E-mail if you find it useful enough to do so. E-mail to me at DJRHutchon@hotmail.co.uk | {"url":"http://www.hutchon.net/ConfidRR.htm","timestamp":"2014-04-19T07:02:57Z","content_type":null,"content_length":"4503","record_id":"<urn:uuid:98150540-5559-4230-bf1b-c2b912e949a3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hyperspherical approach to three- and four-electron atomic systems
Toru Morishita
Department of Applied Physics and Chemistry, The University of Electro-Communications,
1-5-1 Chofu-ga-oka, Chofu-shi, Tokyo 182-8585, Japan
C. D. Lin
Department of Physics, Kansas State University, Manhattan, KS 66506, USA
Hyperspherical coordinates were first introduced by Fano and his coworkers [1,2] to understand the basic properties of doubly excited states of helium atoms. In the past several decades, this
approach has been improved and extended to a broad range of few-body atomic and molecular systems. In this work, we present our recent progress on the understanding of electron correlations mainly in
triply excited states of atoms. We also present a new development of the hyperspherical approach to a four-electron atomic system.
In the hyperspherical method, the 3N-dimensional configuration space of an N-electron atom with the nucleus at the center is parametrized by a hyperradius R = (r[1]^2 + r[2]^2 + ... + r[N]^2)^1/2 and a set of hyperangles Ω; the Hamiltonian then separates into a hyperradial kinetic term plus an adiabatic Hamiltonian H[ad](Ω;R), which depends parametrically on R. Within the adiabatic approximation introduced by Macek [3], the total wavefunction for the n-th state in channel μ can
be written as

Ψ[μ]^n = F[μ]^n(R) Φ[μ](Ω;R),
where F[μ]^n(R) is the hyperradial function which measures the size of the state; Φ[μ](Ω;R) is the hyperspherical adiabatic channel function, which contains all the information about electron
correlations for states within channel μ. The channel function Φ[μ](Ω;R) and its associated adiabatic potential U[μ](R) are obtained by solving the adiabatic eigenvalue problem at each R,

H[ad](Ω;R) Φ[μ](Ω;R) = U[μ](R) Φ[μ](Ω;R).
We solved this eigenvalue problem for three-electron atomic systems for each ^2S+1L^π symmetry [4], and for a four-electron atomic system.
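As a toy illustration of this adiabatic scheme (not the actual three- or four-electron calculation, which requires large channel expansions), one can diagonalize a small parametric Hamiltonian at each hyperradius; the sorted eigenvalues play the role of the potential curves U[μ](R), and a weak coupling turns a diabatic crossing into a sharp avoided crossing like those in Fig. 1. The 2x2 matrix elements below are invented for illustration:

```python
import numpy as np

def adiabatic_curves(R_grid, c=0.05):
    """Eigenvalues of a toy 2x2 'adiabatic Hamiltonian' H_ad(R) at each R."""
    curves = []
    for R in R_grid:
        e1 = -1.0 / R              # toy diabatic energies that cross at R = 2
        e2 = -2.0 / R + 0.5
        H = np.array([[e1, c], [c, e2]])
        curves.append(np.linalg.eigvalsh(H))   # sorted: U_1(R) <= U_2(R)
    return np.array(curves)

R = np.linspace(0.5, 10.0, 400)
U = adiabatic_curves(R)
gap = (U[:, 1] - U[:, 0]).min()   # minimal splitting ~ 2c at the avoided crossing
```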
As an example of three-electron atoms, we show the adiabatic potential curves for Li(^2S^e) states in Fig. 1. At large R, each curve approaches the two electron Li^+ states: The potentials can be
classified into three groups by their asymptotic limits. The first group consists of a single curve - the lowest curve, labeled ‘I’ in Fig. 1. This curve approaches the Li^+(1s^2 ^1S^e) state as R →
∞. This curve supports the ground state and the 1s^2ns ^2S^e (n ≥ 2) singly excited states. The second group, labeled ‘II’ in Fig. 1, consists of potential curves that approach the 1snl (n ≥ 2)
singly excited states of Li^+ at large R. These curves support doubly excited states. The third group, labeled ‘III’, consists of potential curves that approach the nln'l' (n, n' ≥ 2) doubly excited
states of Li^+ at large R. These curves support triply excited states. Clearly, the avoided crossings among the different groups are very sharp. Thus, the hyperspherical adiabatic channels can be used
to separate singly, doubly, and triply excited states.
To identify the features that characterize the adiabatic channels among the triply excited states, we examined the channel functions, Φ’s. We classify triply excited states of Li in the 2l2l'2l" and
the 2l2l'3l" manifolds by visualizing the channel functions in the body-fixed frame. We found that angular correlations play an important role in characterizing the 2l2l'2l" intrashell states. By
examining the wavefunctions of the three electrons at the same distance from the nucleus, we found that these intrashell states are distinguished by their distributions in the three relative angles
(See Fig. 2 for the definition of the angles.) By examining the wave functions in the body-fixed frame, we identified three basic modes of the three relative angles. In Fig. 3, we plotted the equidensity surfaces of internal wavefunctions for the 2l2l'2l" intrashell states. These surfaces can be grouped clearly into three types. The major distinction is that in group A the three electrons form a coplanar equilateral triangle; in group B they form an equilateral triangle that is not coplanar; and in group C the three electrons are coplanar but not allowed to form an
equilateral triangle. The ‘forbidden region’ for the latter two groups originates from the quantum symmetry in that a state with well-defined quantum numbers L, S, and π would incur nodal surfaces in
a multidimensional wavefunction.
We also examined the internal wavefunctions for the 2l2l'3l" intershell triply excited states of Li. For these states, radial correlations as well as angular correlations play an important role in
the classification. A new quantum number was introduced to describe the symmetric and antisymmetric stretches between the two inner electrons with the third one.
The next major step is to understand the four-electron atomic systems, where one can expect much richer electron correlation effects. We implemented a pilot calculation for Be within the s^4
configurations, and the resulting hyperspherical adiabatic potential curves for the ^1S^e symmetry are shown in Fig. 4. The general appearance of the adiabatic potentials does not differ markedly from
those of three-electron atoms such as Li. At large R, each curve converges to the three-electron Be^+ states. The groups labeled ‘I’, ‘II’, and ‘III’ consist of potential curves that support singly,
doubly, and triply excited states of Be, respectively, and they are similar to the three-electron systems as shown in Fig. 1. In addition to these three groups, we can clearly observe the fourth
group, labeled ‘IV’. The curves in the fourth group converge to the triply excited states of Be^+ and they support quadruply excited states of Be. We note that the avoided crossings among the
different groups are very sharp. This suggests that quadruply excited states of Be are rather stable against autoionization to singly, doubly, and triply ionized states of Be. Our next goal is to
examine the hyperspherical potentials including the effects from higher angular momentum states and to classify the quadruply excited states.
Figure 1: Hyperspherical adiabatic potentials for the ^2S^e symmetry of Li.
Figure 2: Definition of the three angles used to describe the three
electrons on a sphere.The three electrons form a σ plane. On the
plane (the right figure) the three electrons are confined to a circle.
Figure 3: The equidensity surface plots of the three-electron wavefunctions for the
eight intrashell states at r[1] = r[2] = r[3]. The surface represents 60% of the maximum density.
Each ‘slice’ represents the whole range of the three angles (0≤θ≤π, 0≤η≤π, -η≤φ≤η).
Figure 4: Hyperspherical adiabatic potential curves for the ^1S^e symmetry of Be within the s^4 configuration.
[1] W. Cooper, U. Fano, and F. Prats, Phys. Rev. Lett. 10, 518 (1963).
[2] U. Fano, Phys. Today 29, 32 (1976).
[3] J. Macek, J. Phys. B1, 831 (1968).
[4] T. Morishita and C. D. Lin,Phys. Rev. A, 57, 1835 (1999).
This work was supported by the Chemical Sciences, Geosciences and Biosciences Division,
Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy.
Submitted to the Fano Memorial Symposium, July 2002 in Cambridge, MA.
American Mathematical Society
AMS Sectional Meeting Full Program
Current as of Sunday, April 13, 2014 00:20:06
Inquiries: meet@ams.org
Western Spring Sectional Meeting
University of New Mexico, Albuquerque, NM
April 4-6, 2014 (Friday - Sunday)
Meeting #1099
Associate secretaries:
Michel L Lapidus, AMS lapidus@math.ucr.edu, lapidus@mathserv.ucr.edu
Friday April 4, 2014
Saturday April 5, 2014
• Saturday April 5, 2014, 7:30 a.m.-4:00 p.m.
Exhibit and Book Sale
Room 120, Science and Math Learning Center
• Saturday April 5, 2014, 7:30 a.m.-4:00 p.m.
Meeting Registration
Lobby, Science and Math Learning Center
• Saturday April 5, 2014, 7:30 a.m.-10:50 a.m.
Special Session on Weighted Norm Inequalities and Related Topics, I
Room 101, Mitchell Hall
Oleksandra Beznosova, Baylor University Oleksandra_Beznosova@baylor.edu
David Cruz-Uribe, Trinity College
Cristina Pereyra, University of New Mexico
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Analysis and Topology in Special Geometries, I
Room 221, Mitchell Hall
Charles Boyer, University of New Mexico
Daniele Grandini, University of New Mexico
Dimiter Vassilev, University of New Mexico vassilev@unm.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Commutative Algebra, I
Room 115, Mitchell Hall
Daniel J. Hernandez, University of Utah
Karen E. Smith, University of Michigan
Emily E. Witt, University of Minnesota ewitt@umn.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Descriptive Set Theory and its Applications, I
Room 356, Science and Math Learning Center
Alexander Kechris, California Institute of Technology
Christian Rosendal, University of Illinois, Chicago rosendal.math@gmail.com
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Its Applications, I
Room 117, Mitchell Hall
Jens Gerlach Christensen, Colgate University
Joseph Lakey, New Mexico State University jlakey@nmsu.edu
Nicholas Michalowski, New Mexico State University
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Interactions in Commutative Algebra, I
Room 220, Mitchell Hall
Louiza Fouli, New Mexico State University
Bruce Olberding, New Mexico State University
Janet Vassilev, University of New Mexico jvassil@math.unm.edu
• Saturday April 5, 2014, 8:00 a.m.-10:55 a.m.
Special Session on Modeling Complex Social Processes Within and Across Levels of Analysis, I
Room 213, Mitchell Hall
Simon DeDeo, Indiana University simon.dedeo@gmail.com
Richard Niemeyer, University of Colorado, Denver
□ 8:00 a.m.
Evolutionary games for crime, recidivism and rehabilitation of criminal offenders.
Maria R. D'Orsogna*, California State University at Northridge
□ 8:30 a.m.
From Wikipedia to the Arab Spring: Game Theory and Collective Cognition.
Simon DeDeo*, Indiana University & the Santa Fe Institute
□ 9:00 a.m.
From aggression to dominance: the logic of missing links closes the gap between behavior and knowledge.
Elizabeth A. Hobson*, Department of Biology, New Mexico State University
Simon Dedeo, School of Informatics and Computing, Indiana University
□ 9:30 a.m.
Local and nonlocal information in a traffic network: how important is the horizon?
Giovanni Petri, ISI Foundation
Samuel V Scarpino*, Santa Fe Institute
□ 10:00 a.m.
Mathematical modeling of crime hotspots.
Martin B Short*, Georgia Institute of Technology School of Mathematics
□ 10:30 a.m.
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Physical Knots, honoring the retirement of Jonathan K. Simon, I
Room 211, Mitchell Hall
Greg Buck, St. Anselm College
Eric Rawdon, University of St. Thomas ejrawdon@stthomas.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Spectral Theory, I
Room 118, Mitchell Hall
Milivoje Lukic, Rice University
Maxim Zinchenko, University of New Mexico maxim@math.unm.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on The Common Core and University Mathematics Instruction, I
Room 207, Mitchell Hall
Justin Boyle, University of New Mexico
Michael Nakamaye, University of New Mexico
Kristin Umland, University of New Mexico umland@math.unm.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on The Inverse Problem and Other Mathematical Methods Applied in Physics and Related Sciences, I
Room 102, Mitchell Hall
Hanna Makaruk, Los Alamos National Laboratory
Robert Owczarek, University of New Mexico and Enfitek, Inc rowczare@unm.edu
• Saturday April 5, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Topics in Spectral Geometry and Global Analysis, I
Room 104, Mitchell Hall
Ivan Avramidi, New Mexico Institute of Mining and Technology iavramid@nmt.edu
Klaus Kirsten, Baylor University
• Saturday April 5, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Dispersive Equations, I
Room 119, Mitchell Hall
Matthew Blair, University of New Mexico blair@math.unm.edu
Jason Metcalfe, University of North Carolina
• Saturday April 5, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Operator Theory (in memory of Cora Sadosky), I
Room 202, Mitchell Hall
Laura De Carli, Florida International University
Alex Stokolos, Georgia Southern University
Wilfredo Urbina, Roosevelt University wurbinaromero@roosevelt.edu
• Saturday April 5, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Mathematical Finance, I
Room 120, Mitchell Hall
Indranil SenGupta, North Dakota State University indranil.sengupta@ndsu.edu
• Saturday April 5, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Progress in Noncommutative Analysis, I
Room 107, Mitchell Hall
Anna Skripka, University of New Mexico askripka@unm.edu
Tao Mei, Wayne State University
• Saturday April 5, 2014, 9:00 a.m.-10:40 a.m.
Special Session on Flat Dynamics, I
Room 212, Mitchell Hall
Jayadev Athreya, University of Illinois, Urbana-Champaign
Robert Niemeyer, University of New Mexico, Albuquerque niemeyer@math.unm.edu
Richard E. Schwartz, Brown University
Sergei Tabachnikov, The Pennsylvania State University
• Saturday April 5, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Nonlinear Waves and Singularities in Water Waves, Optics and Plasmas, I
Room 121, Mitchell Hall
Alexander O. Korotkevich, University of New Mexico, Albuquerque alexkor@math.unm.edu
Pavel Lushnikov, University of New Mexico, Albuquerque
• Saturday April 5, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Partial Differential Equations in Materials Science, I
Room 206, Mitchell Hall
Lia Bronsard, McMaster University
Tiziana Giorgi, New Mexico State University tgiorgi@math.nmsu.edu
• Saturday April 5, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Stochastic Processes in Noncommutative Probability, I
Room 204, Mitchell Hall
Michael Anshelevich, Texas A&M University manshel@math.tamu.edu
Todd Kemp, University of California San Diego
• Saturday April 5, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Stochastics and PDEs, I
Room 106, Mitchell Hall
Juraj Földes, Institute for Mathematics and Its Applications
Nathan Glatt-Holtz, Institute for Mathematics and Its Applications and Virginia Tech negh@vt.edu
Geordie Richards, Institute for Mathematics and Its Applications and University of Rochester
• Saturday April 5, 2014, 11:10 a.m.-12:00 p.m.
Invited Address
Some problems and results in spectral graph theory.
Room 102, Science and Math Learning Center
Fan Chung Graham*, University of California, San Diego
• Saturday April 5, 2014, 2:00 p.m.-2:50 p.m.
Invited Address
Hyperbolic dynamics and spectral properties of one-dimensional quasicrystals.
Room 102, Science and Math Learning Center
Anton Gorodetski*, University of California Irvine
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Analysis and Topology in Special Geometries, II
Room 221, Mitchell Hall
Charles Boyer, University of New Mexico
Daniele Grandini, University of New Mexico
Dimiter Vassilev, University of New Mexico vassilev@unm.edu
• Saturday April 5, 2014, 3:00 p.m.-6:20 p.m.
Special Session on Commutative Algebra, II
Room 115, Mitchell Hall
Daniel J. Hernandez, University of Utah
Karen E. Smith, University of Michigan
Emily E. Witt, University of Minnesota ewitt@umn.edu
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Descriptive Set Theory and its Applications, II
Room 356, Science and Math Learning Center
Alexander Kechris, California Institute of Technology
Christian Rosendal, University of Illinois, Chicago rosendal.math@gmail.com
• Saturday April 5, 2014, 3:00 p.m.-5:40 p.m.
Special Session on Flat Dynamics, II
Room 212, Mitchell Hall
Jayadev Athreya, University of Illinois, Urbana-Champaign
Robert Niemeyer, University of New Mexico, Albuquerque niemeyer@math.unm.edu
Richard E. Schwartz, Brown University
Sergei Tabachnikov, The Pennsylvania State University
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Harmonic Analysis and Dispersive Equations, II
Room 119, Mitchell Hall
Matthew Blair, University of New Mexico blair@math.unm.edu
Jason Metcalfe, University of North Carolina
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Harmonic Analysis and Its Applications, II
Room 117, Mitchell Hall
Jens Gerlach Christensen, Colgate University
Joseph Lakey, New Mexico State University jlakey@nmsu.edu
Nicholas Michalowski, New Mexico State University
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Harmonic Analysis and Operator Theory (in memory of Cora Sadosky), II
Room 202, Mitchell Hall
Laura De Carli, Florida International University
Alex Stokolos, Georgia Southern University
Wilfredo Urbina, Roosevelt University wurbinaromero@roosevelt.edu
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Hyperbolic Dynamics, Dynamically Defined Fractals, and Applications, I
Room 122, Mitchell Hall
Anton Gorodetski, University of California Irvine asgor@math.uci.edu
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Interactions in Commutative Algebra, II
Room 220, Mitchell Hall
Louiza Fouli, New Mexico State University
Bruce Olberding, New Mexico State University
Janet Vassilev, University of New Mexico jvassil@math.unm.edu
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Mathematical Finance, II
Room 120, Mitchell Hall
Indranil SenGupta, North Dakota State University, indranil.sengupta@ndsu.edu
• Saturday April 5, 2014, 3:00 p.m.-6:00 p.m.
Special Session on Modeling Complex Social Processes Within and Across Levels of Analysis, II
Room 213, Mitchell Hall
Simon DeDeo, Indiana University simon.dedeo@gmail.com
Richard Niemeyer, University of Colorado, Denver
□ 3:00 p.m.
□ 3:30 p.m.
Quantifying the dynamics of cities with human activity patterns.
Markus Schläpfer*, Massachusetts Institute of Technology
□ 4:00 p.m.
Systemic Risk: Robustness and Fragility in Trade Networks.
Nick Foti, University of Washington
Scott D. Pauls*, Dartmouth College
Daniel Rockmore, Dartmouth College
□ 4:30 p.m.
Human Macroecology: Food, Storage, Transportation, and Human Civilization.
Sean T. Hammond*, Department of Biology, University of New Mexico
James H. Brown, Department of Biology, University of New Mexico
Astrid Kodric-Brown, Department of Biology, University of New Mexico
Joseph R. Burger, Department of Biology, University of New Mexico
Trevor S. Fristoe, Department of Biology, University of New Mexico
Norman Mercado-Silva, Department of Biology, University of New Mexico
Jeffrey C. Nekola, Department of Biology, University of New Mexico
Jordan G. Okie, School of Earth and Space Exploration, Arizona State University
Tatiana P. Flanagan, Department of Biology, University of New Mexico
□ 5:00 p.m.
Universal diversity patterns across complex natural and social systems.
Jeffrey C. Nekola*, Biology Department, University of New Mexico
□ 5:30 p.m.
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Nonlinear Waves and Singularities in Water Waves, Optics and Plasmas, II
Room 121, Mitchell Hall
Alexander O. Korotkevich, University of New Mexico, Albuquerque alexkor@math.unm.edu
Pavel Lushnikov, University of New Mexico, Albuquerque
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Partial Differential Equations in Materials Science, II
Room 206, Mitchell Hall
Lia Bronsard, McMaster University
Tiziana Giorgi, New Mexico State University tgiorgi@math.nmsu.edu
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Physical Knots, honoring the retirement of Jonathan K. Simon, II
Room 211, Mitchell Hall
Greg Buck, St. Anselm College
Eric Rawdon, University of St. Thomas ejrawdon@stthomas.edu
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Progress in Noncommutative Analysis, II
Room 107, Mitchell Hall
Anna Skripka, University of New Mexico askripka@unm.edu
Tao Mei, Wayne State University
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Spectral Theory, II
Room 118, Mitchell Hall
Milivoje Lukic, Rice University
Maxim Zinchenko, University of New Mexico maxim@math.unm.edu
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Stochastic Processes in Noncommutative Probability, II
Room 204, Mitchell Hall
Michael Anshelevich, Texas A&M University manshel@math.tamu.edu
Todd Kemp, University of California San Diego
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Stochastics and PDEs, II
Room 106, Mitchell Hall
Juraj Földes, Institute for Mathematics and Its Applications
Nathan Glatt-Holtz, Institute for Mathematics and Its Applications and Virginia Tech negh@vt.edu
Geordie Richards, Institute for Mathematics and Its Applications and University of Rochester
• Saturday April 5, 2014, 3:00 p.m.-5:20 p.m.
Special Session on The Common Core and University Mathematics Instruction, II
Room 207, Mitchell Hall
Justin Boyle, University of New Mexico
Michael Nakamaye, University of New Mexico
Kristin Umland, University of New Mexico umland@math.unm.edu
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on The Inverse Problem and Other Mathematical Methods Applied in Physics and Related Sciences, II
Room 102, Mitchell Hall
Hanna Makaruk, Los Alamos National Laboratory
Robert Owczarek, University of New Mexico and Enfitek, Inc rowczare@unm.edu
• Saturday April 5, 2014, 3:00 p.m.-6:20 p.m.
Special Session on Topics in Spectral Geometry and Global Analysis, II
Room 104, Mitchell Hall
Ivan Avramidi, New Mexico Institute of Mining and Technology iavramid@nmt.edu
Klaus Kirsten, Baylor University
• Saturday April 5, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Weighted Norm Inequalities and Related Topics, II
Room 101, Mitchell Hall
Oleksandra Beznosova, Baylor University Oleksandra_Beznosova@baylor.edu
David Cruz-Uribe, Trinity College
Cristina Pereyra, University of New Mexico
• Saturday April 5, 2014, 3:00 p.m.-6:40 p.m.
Session for Contributed Papers
Room 208, Mitchell Hall
Sunday April 6, 2014
• Sunday April 6, 2014, 7:30 a.m.-10:50 a.m.
Special Session on Weighted Norm Inequalities and Related Topics, III
Room 101, Mitchell Hall
Oleksandra Beznosova, Baylor University Oleksandra_Beznosova@baylor.edu
David Cruz-Uribe, Trinity College
Cristina Pereyra, University of New Mexico
□ 7:30 a.m.
Extremizers and Bellman function for martingale weak type inequality.
Alexander Reznikov*, Michigan State University
Vasiliy Vasyunin, V. A. Steklov Mathematical Institute, Russian Academy of Sciences
Alexander Volberg, Michigan State University
□ 8:00 a.m.
Extremizers and sharp weak-type estimates for positive dyadic shifts.
Guillermo Rey*, Michigan State University
Alexander Reznikov, Michigan State University
□ 8:30 a.m.
Mutual estimates for the dyadic Reverse Hölder and Muckenhoupt constants for the dyadically doubling weights.
Oleksandra V Beznosova, Baylor University
Temitope E Ode*, Baylor University
□ 9:00 a.m.
Best constants for a family of Carleson sequences.
Leonid Slavin*, University of Cincinnati
□ 9:30 a.m.
Uniform estimates for Fourier restriction to polynomial curves.
Betsy Stovall*, University of Wisconsin-Madison
□ 10:00 a.m.
□ 10:30 a.m.
Two weight norm inequalities for singular and fractional integral operators in $R^n$.
Eric T. Sawyer, McMaster University
Chun-Yen Shen, National Central University, Chungli, Taiwan
Ignacio Uriarte-Tuero*, Michigan State University
• Sunday April 6, 2014, 8:00 a.m.-12:00 p.m.
Exhibit and Book Sale
Room 120, Science and Math Learning Center
• Sunday April 6, 2014, 8:00 a.m.-12:00 p.m.
Meeting Registration
Lobby, Science and Math Learning Center
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Analysis and Topology in Special Geometries, III
Room 221, Mitchell Hall
Charles Boyer, University of New Mexico
Daniele Grandini, University of New Mexico
Dimiter Vassilev, University of New Mexico vassilev@unm.edu
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Arithmetic and Differential Algebraic Geometry, I
Room 108, Mitchell Hall
Alexandru Buium, University of New Mexico
Taylor Dupuy, University of California, Los Angeles
Lance Edward Miller, University of Arkansas lem016@uark.edu
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Commutative Algebra, III
Room 115, Mitchell Hall
Daniel J. Hernandez, University of Utah
Karen E. Smith, University of Michigan
Emily E. Witt, University of Minnesota ewitt@umn.edu
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Descriptive Set Theory and its Applications, III
Room 356, Science and Math Learning Center
Alexander Kechris, California Institute of Technology
Christian Rosendal, University of Illinois, Chicago rosendal.math@gmail.com
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Its Applications, III
Room 117, Mitchell Hall
Jens Gerlach Christensen, Colgate University
Joseph Lakey, New Mexico State University jlakey@nmsu.edu
Nicholas Michalowski, New Mexico State University
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Operator Theory (in memory of Cora Sadosky), III
Room 202, Mitchell Hall
Laura De Carli, Florida International University
Alex Stokolos, Georgia Southern University
Wilfredo Urbina, Roosevelt University wurbinaromero@roosevelt.edu
• Sunday April 6, 2014, 8:00 a.m.-9:50 a.m.
Special Session on Mathematical Finance, III
Room 120, Mitchell Hall
Indranil SenGupta, North Dakota State University, indranil.sengupta@ndsu.edu
□ 8:00 a.m.
Covariance and Correlation Swaps for Markov-modulated Volatilities.
Anatoliy Swishchuk*, University of Calgary
□ 8:30 a.m.
Boundedness And Exponential Stability In Highly Nonlinear Stochastic Differential Equations.
Youssef N Raffoul*, University of Dayton
□ 9:00 a.m.
Non-arbitrage under Uncertainty.
Jun Deng*, Mathematical and Statistical Sciences Dept., University of Alberta, Edmonton, Alberta
Tahir Choulli, Mathematical and Statistical Sciences Dept., University of Alberta, Edmonton, Alberta
JunFeng Ma, Bank of Montreal
Anna Aksamit, Laboratoire Analyse et Probabilités, Université d'Evry Val d'Essonne, Evry, France
Monique Jeanblanc, Laboratoire Analyse et Probabilités, Université d'Evry Val d'Essonne, Evry, France
□ 9:30 a.m.
Fast Solution Methods for the Fractional Diffusion Equation and Its Application in Mathematical Finance.
Treena S. Basu*, Rhodes College
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Physical Knots, honoring the retirement of Jonathan K. Simon, III
Room 211, Mitchell Hall
Greg Buck, St. Anselm College
Eric Rawdon, University of St. Thomas ejrawdon@stthomas.edu
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on Spectral Theory, III
Room 118, Mitchell Hall
Milivoje Lukic, Rice University
Maxim Zinchenko, University of New Mexico maxim@math.unm.edu
• Sunday April 6, 2014, 8:00 a.m.-10:50 a.m.
Special Session on The Inverse Problem and Other Mathematical Methods Applied in Physics and Related Sciences, III
Room 102, Mitchell Hall
Hanna Makaruk, Los Alamos National Laboratory
Robert Owczarek, University of New Mexico and Enfitek, Inc rowczare@unm.edu
• Sunday April 6, 2014, 8:00 a.m.-10:40 a.m.
Special Session on Topics in Spectral Geometry and Global Analysis, III
Room 104, Mitchell Hall
Ivan Avramidi, New Mexico Institute of Mining and Technology iavramid@nmt.edu
Klaus Kirsten, Baylor University
□ 8:00 a.m.
Spectral instability of selfadjoint extensions.
Gerardo A Mendoza*, Department of Mathematics, Temple University
□ 9:00 a.m.
Magnetic resolvent trace formula for 2d black hole vacua.
Floyd L. Williams*, Department of Mathematics and Statistics, University of Massachusetts, Amherst
□ 10:00 a.m.
Some subtleties in the relationships among heat kernel invariants, eigenvalue distributions, and quantum vacuum energy.
Stephen A. Fulling, Texas A&M University, College Station, Texas
Yunyun Yang*, Louisiana State University, Baton Rouge, Louisiana
• Sunday April 6, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Harmonic Analysis and Dispersive Equations, III
Room 119, Mitchell Hall
Matthew Blair, University of New Mexico blair@math.unm.edu
Jason Metcalfe, University of North Carolina
• Sunday April 6, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Interactions in Commutative Algebra, III
Room 220, Mitchell Hall
Louiza Fouli, New Mexico State University
Bruce Olberding, New Mexico State University
Janet Vassilev, University of New Mexico jvassil@math.unm.edu
• Sunday April 6, 2014, 8:30 a.m.-10:50 a.m.
Special Session on Progress in Noncommutative Analysis, III
Room 107, Mitchell Hall
Anna Skripka, University of New Mexico askripka@unm.edu
Tao Mei, Wayne State University
• Sunday April 6, 2014, 9:00 a.m.-10:40 a.m.
Special Session on Flat Dynamics, III
Room 212, Mitchell Hall
Jayadev Athreya, University of Illinois, Urbana-Champaign
Robert Niemeyer, University of New Mexico, Albuquerque niemeyer@math.unm.edu
Richard E. Schwartz, Brown University
Sergei Tabachnikov, The Pennsylvania State University
• Sunday April 6, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Hyperbolic Dynamics, Dynamically Defined Fractals, and Applications, II
Room 122, Mitchell Hall
Anton Gorodetski, University of California Irvine asgor@math.uci.edu
• Sunday April 6, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Nonlinear Waves and Singularities in Water Waves, Optics and Plasmas, III
Room 121, Mitchell Hall
Alexander O. Korotkevich, University of New Mexico, Albuquerque alexkor@math.unm.edu
Pavel Lushnikov, University of New Mexico, Albuquerque
• Sunday April 6, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Partial Differential Equations in Materials Science, III
Room 206, Mitchell Hall
Lia Bronsard, McMaster University
Tiziana Giorgi, New Mexico State University tgiorgi@math.nmsu.edu
• Sunday April 6, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Stochastic Processes in Noncommutative Probability, III
Room 204, Mitchell Hall
Michael Anshelevich, Texas A&M University manshel@math.tamu.edu
Todd Kemp, University of California San Diego
• Sunday April 6, 2014, 9:00 a.m.-10:50 a.m.
Special Session on Stochastics and PDEs, III
Room 106, Mitchell Hall
Juraj Földes, Institute for Mathematics and Its Applications
Nathan Glatt-Holtz, Institute for Mathematics and Its Applications and Virginia Tech negh@vt.edu
Geordie Richards, Institute for Mathematics and Its Applications and University of Rochester
• Sunday April 6, 2014, 11:10 a.m.-12:00 p.m.
Invited Address
Characteristic p Tricks in Algebra, Geometry and Combinatorics.
Room 102, Science and Math Learning Center
Karen E. Smith*, University of Michigan, Ann Arbor
• Sunday April 6, 2014, 2:00 p.m.-2:50 p.m.
Invited Address
Rigidity for von Neumann algebras and ergodic group actions.
Room 102, Science and Math Learning Center
Adrian Ioana*, University of California, San Diego
• Sunday April 6, 2014, 3:00 p.m.-4:20 p.m.
Special Session on Analysis and Topology in Special Geometries, IV
Room 221, Mitchell Hall
Charles Boyer, University of New Mexico
Daniele Grandini, University of New Mexico
Dimiter Vassilev, University of New Mexico vassilev@unm.edu
• Sunday April 6, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Arithmetic and Differential Algebraic Geometry, II
Room 108, Mitchell Hall
Alexandru Buium, University of New Mexico
Taylor Dupuy, University of California, Los Angeles
Lance Edward Miller, University of Arkansas lem016@uark.edu
• Sunday April 6, 2014, 3:00 p.m.-4:20 p.m.
Special Session on Descriptive Set Theory and its Applications, IV
Room 356, Science and Math Learning Center
Alexander Kechris, California Institute of Technology
Christian Rosendal, University of Illinois, Chicago rosendal.math@gmail.com
• Sunday April 6, 2014, 3:00 p.m.-4:50 p.m.
Special Session on Harmonic Analysis and Operator Theory (in memory of Cora Sadosky), IV
Room 202, Mitchell Hall
Laura De Carli, Florida International University
Alex Stokolos, Georgia Southern University
Wilfredo Urbina, Roosevelt University wurbinaromero@roosevelt.edu
• Sunday April 6, 2014, 3:00 p.m.-4:20 p.m.
Special Session on Interactions in Commutative Algebra, IV
Room 220, Mitchell Hall
Louiza Fouli, New Mexico State University
Bruce Olberding, New Mexico State University
Janet Vassilev, University of New Mexico jvassil@math.unm.edu
• Sunday April 6, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Mathematical Finance, IV
Room 120, Mitchell Hall
Indranil SenGupta, North Dakota State University, indranil.sengupta@ndsu.edu
• Sunday April 6, 2014, 3:00 p.m.-5:20 p.m.
Special Session on Nonlinear Waves and Singularities in Water Waves, Optics and Plasmas, IV
Room 121, Mitchell Hall
Alexander O. Korotkevich, University of New Mexico, Albuquerque alexkor@math.unm.edu
Pavel Lushnikov, University of New Mexico, Albuquerque
• Sunday April 6, 2014, 3:00 p.m.-5:50 p.m.
Special Session on Physical Knots, honoring the retirement of Jonathan K. Simon, IV
Room 211, Mitchell Hall
Greg Buck, St. Anselm College
Eric Rawdon, University of St. Thomas ejrawdon@stthomas.edu
• Sunday April 6, 2014, 3:00 p.m.-6:00 p.m.
Special Session on Stochastics and PDEs, IV
Room 106, Mitchell Hall
Juraj Földes, Institute for Mathematics and Its Applications
Nathan Glatt-Holtz, Institute for Mathematics and Its Applications and Virginia Tech negh@vt.edu
Geordie Richards, Institute for Mathematics and Its Applications and University of Rochester
□ 3:00 p.m.
On infinite volume Gibbs measures for the defocusing NLS.
Philippe Sosoe*, Princeton University
Tadahiro Oh, University of Edinburgh
Jeremy Quastel, University of Toronto
□ 3:30 p.m.
The Gross-Pitaevskii hierarchy on the three-dimensional torus.
Philip T Gressman, University of Pennsylvania, Mathematics Department
Vedran Sohinger*, University of Pennsylvania, Mathematics Department
Gigliola Staffilani, Massachusetts Institute of Technology, Mathematics Department
□ 4:00 p.m.
On Strichartz and localized energy estimates in exterior domains.
Matthew D Blair*, University of New Mexico
□ 4:30 p.m.
Discussion I
□ 5:00 p.m.
Discussion II
□ 5:30 p.m.
Discussion III
• Sunday April 6, 2014, 3:00 p.m.-5:50 p.m.
Special Session on The Inverse Problem and Other Mathematical Methods Applied in Physics and Related Sciences, IV
Room 102, Mitchell Hall
Hanna Makaruk, Los Alamos National Laboratory
Robert Owczarek, University of New Mexico and Enfitek, Inc rowczare@unm.edu
• Sunday April 6, 2014, 3:00 p.m.-5:40 p.m.
Special Session on Weighted Norm Inequalities and Related Topics, IV
Room 101, Mitchell Hall
Oleksandra Beznosova, Baylor University Oleksandra_Beznosova@baylor.edu
David Cruz-Uribe, Trinity College
Cristina Pereyra, University of New Mexico
□ 3:00 p.m.
Lower bounds for $L_1$ discrepancy.
Armen Vagharshakyan*, Kent State University
□ 3:30 p.m.
On a two weight estimate for the dyadic operators.
Daewon Chung*, Inha University, Korea
Oleksandra Beznosova, Baylor University
Jean Carlo Moraes, Universidade Federal De Pelotas, Brasil
Maria Cristina Pereyra, University of New Mexico
□ 4:00 p.m.
Sharp bounds for $t$-Haar multipliers on $L^2$.
O Beznosova, Baylor University
J C Moraes*, Universidade Federal do Rio Grande do Sul
M C Pereyra, University of New Mexico
□ 4:30 p.m.
Continuity of halo functions associated to homothecy invariant density bases.
Oleksandra V Beznosova*, Baylor University
Paul A Hagelstein, Baylor University
□ 5:00 p.m.
Inquiries: meet@ams.org
Karl Menger
Born: 13 January 1902 in Vienna, Austria
Died: 5 October 1985 in Chicago, Illinois, USA
Karl Menger's father, Carl Menger (1840-1921), was a famous economist, a professor at the University of Vienna, and founder of the Austrian School of Economics. Karl's mother was Hermine Andermann
(1869-1924), a journalist, author and musician. Carl was Roman Catholic and Hermine was Jewish so, although they lived together, the two could not marry (all marriages in Austria at this time were
religious ceremonies). When their only child Karl (the subject of this biography) was born in 1902 this was seen as unacceptable by the Viennese social conventions and Carl was forced to withdraw
from public life. He stopped teaching at the University of Vienna as soon as his son was born and he resigned his chair at the University a year later. Carl applied to Emperor Franz Joseph of Austria
to have his son Karl made legitimate and his request was eventually granted. Twenty-five years earlier, Carl Menger had given Franz Joseph a three month course on economics.
Karl Menger attended the Döblinger Gymnasium in Vienna (1913-1920) where two of his fellow students were Wolfgang Pauli and Richard Kuhn (1900-1967). It is worth noting that Kuhn was awarded the
Nobel Prize for Chemistry in 1938 and Pauli was awarded the Nobel Prize for Physics in 1945. One of the students in Menger's class was Heinrich Schnitzler (1902-1982) who went on to become an actor
and film director. Heinrich Schnitzler was the son of the famous author Arthur Schnitzler (1862-1931) and, at this stage, Menger had ideas of writing dramas. This was not an easy time to be growing
up in Vienna for, after the outbreak of World War I, in September 1914 the Döblinger Gymnasium building was converted into a war hospital, and teaching was transferred to a secondary school building
in the Krottenbachstrasse. In February 1916 the war hospital closed and pupils returned to the original building. The years following the end of World War I in 1918 were particularly hard at the
school. Political events such as the end of the monarchy forced major educational changes and the Gymnasium became a state school. After Menger graduated from the Döblinger Gymnasium, he entered the
University of Vienna in 1920 to study physics. He had not given up his idea of writing dramas, for at this time he began to write a drama about the apocryphal Pope Joan.
At the University of Vienna, Menger attended physics lectures by the theoretical physicist Hans Thirring (1888-1976) who had made significant contributions to the theory of general relativity.
However Hans Hahn became a lecturer in Vienna in March 1921 and Menger attended a course he gave on What's new concerning the concept of a curve. Seymour Kass writes [21]:-
In the first lecture Hahn formulated the problem of making precise the idea of a curve, which no one had been able to articulate, mentioning the unsuccessful attempts of Cantor, Jordan, and Peano
. The topology used in the lecture was new to Menger, but he "was completely enthralled and left the lecture room in a daze" [1]. After a week of complete engrossment, he produced a definition of
a curve and confided it to fellow student Otto Schreier, who could find no flaw but alerted Menger to recent commentary by Hausdorff and Bieberbach as to the problem's intractability, which Hahn
hadn't mentioned. Before the seminar's second meeting Menger met with Hahn, who, unaccustomed to giving first-year students a serious hearing, nevertheless listened and after some thought agreed
that Menger's was a promising attack on the problem.
Although Menger was now fascinated in the topic and was encouraged by Hahn to work on it, he was diagnosed with tuberculosis and went to a sanatorium in Aflenz in the mountains of Styria in southern
Austria. Menger's mathematical investigations, carried out in the sanatorium, led him to a definition of dimension independently of Pavel Urysohn. However Urysohn had died in a drowning accident
before he could publish his work and Menger was not aware of it. The severe lung disease forced Menger to spend more than a year in the sanatorium, but he returned to Vienna with important papers he
had written on dimension while in the sanatorium and, advised by Hahn, completed his doctorate in 1924 with his thesis Über die Dimensionalität von Punktmengen. This was not the only work Menger
undertook at this time. His father, Carl Menger, had died while Menger was in the sanatorium and had left the second edition of his book Grundsätze der Volkswirthschaftslehre (1st edition 1871)
unfinished. Menger completed his father's work and supervised the publication of the second edition which appeared in 1922. By the time he had completed this work, Menger had gained considerable
expertise in economics.
In March 1925 Menger was invited by L E J Brouwer to use his recently won Rockefeller Fellowship to come to the University of Amsterdam where he spent two years working as Brouwer's assistant. He
wrote from Amsterdam to his girlfriend Hilda Axamit, an actuarial mathematics student, in Vienna [22]:-
I am lecturing on the calculus of variations. Personally, I am occupied by geometry of all kinds, furthermore by epistemology. I hope I'll get the energy to put together my views about the
problem of truth. In the last weeks, I have had so many ideas that I don't have any time at all to write them down, and run away every evening to distract myself ... in order not to overwork.
Apart from that, I curse the fact that I am not in Vienna but rather here. I can't get used to living here, and I will try my best to leave here forever in the month of June.
In 1927 Menger was invited by Hahn to accept the chair of geometry at the University of Vienna when Kurt Reidemeister left for Königsberg. Menger was not sorry to leave Amsterdam since he had become
involved in a priority dispute with Brouwer and they were not on the best of terms. However, he certainly respected Brouwer [21]:-
Though he found Brouwer testy, he retained warm feelings about his stay in Amsterdam and cited the good Brouwer did for some young mathematicians and "the beautiful experience of watching him
listen to reports of new discoveries".
In Vienna, Menger became a member of the Vienna Circle which comprised philosophers, mathematicians and logicians. He also started up the Mathematical Colloquium in Vienna in 1928 which was addressed
by leading mathematicians and Menger published the Proceedings. Steve Abbott, reviewing a 1998 reprinting of the Proceedings, writes [3]:-
The 'Ergebnisse eines Mathematischen Kolloquiums' were published in Vienna between 1929 and 1937. The eight issues included many contributions by highly respected mathematicians but suffered from
a limited distribution, and few complete sets remain. This book reprints all the articles (in German) along with chapters (in English) surveying the important developments in economics, logic,
topology and geometry that were reported in the 'Ergebnisse'. The 'Kolloquiums' were organised by Karl Menger at the University of Vienna, initially in response to a request from some of the
mathematical students there. The meetings included a mixture of lectures, discussions of unsolved problems and reviews of recent work. Menger kept a record of the meetings and published the notes
through Teubner Verlag or Deuticke. ... It is the stellar collection of 'Kolloquium' speakers that justifies the reprinting of the papers. There are contributions from Čech, Gödel, Menger,
Popper, Tarski, Taussky, von Neumann, Wald and Wiener.
In 1928 Menger published the book Dimensionstheorie. Paul Althaus Smith (1900-1980) writes in a review [37]:-
There is an important phase in the development of modern point set theoretical geometry which has been closely associated with the concept of dimensionality, - we refer to the attempt to create
precise mathematical meaning for the simple geometric spaces of our intuition in terms of primitive non-arithmetical concepts. That the idea of dimensionality should have come into play and
itself have been studied and made precise is indeed natural, since the curves, surfaces, and solids of our experience furnish the very basis for our intuitive ideas of dimensionality. ... [A
definition of dimension] was introduced by Professor Karl Menger of Vienna and developed in a sequence of shorter papers characterized by their elegance and generality. The essentials of the
dimensionality theory, which has by now attained a considerable perfection through the recent writings of Menger, Hurewicz, P S Aleksandrov and others, have been developed with admirable clarity
and completeness in a recently published book by Professor Menger.
Of course, more sophisticated ideas on dimension continue to be developed. However, James Kesling, in a 1977 review of another work, wrote of Menger's Dimensionstheorie:-
It reveals at one and the same time the naiveté of the early investigators by modern standards and yet their remarkable perception of what the important results were and the future direction of
the theory.
Menger spent the academic year 1930-31 in the United States. He visited Harvard University and the Rice Institute in Houston, Texas. Before Menger went to the United States, Kurt Gödel had joined his
Mathematical Colloquium. While in the United States, Menger kept in touch with the Colloquium in Vienna through Georg Nöbeling and also corresponded with Gödel. For example, in early 1931 he wrote:-
I am replying to your charming letter in a moving train and therefore by typewriter. Nöbeling had already written to me last autumn about your great discovery. I read your article with the utmost
interest and immediately delivered a report on it here. I rank your achievement among the greatest of modern logic and send you my heartiest congratulations.
Back in Vienna, Menger published Kurventheorie in 1932. Wallace Alvin Wilson writes [42]:-
The book under discussion is to be the second volume of a work entitled 'Mengentheoretische Geometrie'. In addition to being the first treatise on the theory of curves and their topological
properties, the book performs the further service of gathering together in an organic whole much of the research work on this subject done since the war - work which, from the nature of its
piecemeal appearance, must have seemed to many to be of a pointless character.
This book contains Menger's 'n-Arc Theorem', described by Frank Harary as:-
... one of the most important results in graph theory ... the fundamental theorem on connectivity in graphs.
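Menger's theorem itself, the result Harary is praising, states that for two non-adjacent vertices s and t of a finite graph, the minimum number of vertices whose removal separates s from t equals the maximum number of internally vertex-disjoint paths from s to t. The biography does not spell the statement out, so as an illustration here is a brute-force Python sketch (the helper name and example graph are my own; enumeration like this is only feasible for tiny graphs) checking that the two quantities agree:

```python
from itertools import combinations

def menger_check(edges, s, t):
    """Brute-force check of Menger's theorem on a tiny undirected graph:
    the minimum number of interior vertices whose removal separates s from t
    equals the maximum number of internally vertex-disjoint s-t paths."""
    nodes = {v for e in edges for v in e}
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def still_connected(removed):
        # Depth-first search from s to t, skipping removed vertices.
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for w in adj[u] - removed - seen:
                seen.add(w)
                stack.append(w)
        return False

    inner = nodes - {s, t}
    min_cut = next(k for k in range(len(inner) + 1)
                   if any(not still_connected(set(c))
                          for c in combinations(inner, k)))

    # All simple s-t paths, recorded by their sets of interior vertices.
    paths = []
    def dfs(u, path):
        if u == t:
            paths.append(frozenset(path[1:-1]))
            return
        for w in adj[u]:
            if w not in path:
                path.append(w)
                dfs(w, path)
                path.pop()
    dfs(s, [s])

    # Largest collection of paths with pairwise disjoint interiors.
    max_disjoint = 0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            if len(frozenset().union(*combo)) == sum(len(p) for p in combo):
                max_disjoint = k
    return min_cut, max_disjoint

edges = [("s", "a"), ("s", "b"), ("a", "c"), ("b", "d"),
         ("c", "d"), ("c", "t"), ("d", "t")]
print(menger_check(edges, "s", "t"))   # (2, 2): min cut equals max disjoint paths
```

On the small example graph both brute-force counts come out equal, as the theorem demands.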
Menger attended the International Congress of Mathematicians in Zurich in September 1932 when he gave one of the plenary addresses on Neuere Methoden und Probleme der Geometrie. He became engaged to
his girlfriend Hilda Axamit and in 1934 they spent a holiday together in Ramsau and Strobl. They married on 5 December 1934 and had four children; Karl Jr (born 9 July 1936), twins Rosemary and Fred
(born 13 December 1937), and Eve (born 1942). When Hitler came to power in Germany in 1933, Menger soon realised the problems that lay ahead for Austria. Steve Abbot, writing about Menger's
Colloquium, explains [3]:-
The Ergebnisse was criticised at the time for having 'too many Jewish contributors'. With the rise of the Nazis in Germany, it was only a matter of time before the political situation in Austria
forced the end of the Kolloquium meetings. Menger left the country a year before the Anschluss.
The problems had become very real to Menger when, in June 1936, Moritz Schlick (professor of philosophy in Vienna and one of the founders of the Vienna Circle) was shot dead by a student. A month
later Menger, still stunned by the tragedy, was in Oslo at the International Congress of Mathematicians. He explained to everyone how the situation in Vienna was deteriorating fast. Soon after this
he was offered a chair at the University of Notre Dame, Indiana and he went to the United States in 1937 to take up the post. At this stage he kept open his chair in Vienna but, in March 1938, as a
result of the political situation in Austria, he resigned his chair in Vienna. Certainly at this stage he was still expecting to return to Vienna after the war but, as Karl Sigmund writes:-
After the war, the reconstruction of the bombed-out State Opera was accorded highest priority by democratic new Austria. Men like ... Menger, however, were politely told that the University of
Vienna had no place for them.
At Notre Dame, Menger was a colleague of Emil Artin who was escaping from the Nazis and spent the year 1937-38 there. Also on the staff at Notre Dame was Paul Milton Pepper (1909-2010) who had just
completed his doctorate at the University of Cincinnati. Menger, with these two colleagues and a couple of others, set up a Ph.D. programme at Notre Dame. Menger also organised a Mathematical
Colloquium based on the one he had set up at Vienna and began publishing Reports of a Mathematical Colloquium in 1938. He arranged a visit by Gödel to Notre Dame but failed to persuade him to accept
a post there. However after the war began to affect the United States in 1941, academic life was disrupted and Menger's Mathematical Colloquium failed to become as influential as the Vienna Circle
had been. The Reports stopped publication in 1946. Rudolf Carnap, one of Menger's colleagues in the Vienna Circle, had also emigrated to the United States and had set up the Chicago Circle. Even
while Menger was running his own Colloquium at Notre Dame, he still made the effort to attend Carnap's Chicago Circle whenever possible.
Around this time Menger's interests in mathematics broadened and he began to work on hyperbolic geometry, probabilistic geometry and the algebra of functions. Menger's work on geometry failed to have
the impact that his work on dimension theory had. This is almost certainly because geometry, at this time, was a rather neglected area of mathematics, particularly in the United States. Also during
the war years Menger's contribution to the war effort was teaching calculus to Naval cadets as part of the V-12 Navy College Training Program which ran from 1942 to 1944. This led to his interest in
mathematical education and, during the 1950s and 1960s, he wrote articles on mathematical education and published books with new ideas on teaching calculus, geometry and other branches of
The visits that Menger made to Carnap's Chicago Circle led to him feeling that Chicago would be both a better place for him to work and a better place for his children to be educated. The chairman of
the mathematics department at the Illinois Institute of Technology in Chicago was Lester R Ford whom Menger had known from the time of his 1931 visit to the Rice Institute in Houston, Texas. Menger
talked to Ford about wanting to move to Chicago and Ford was soon in a position to make him an offer. In 1948 Menger went to the Illinois Institute of Technology and he was to remain in Chicago for
the rest of his life. Seymour Kass, who was a colleague of Menger's at Chicago during the 1960s, writes [21]:-
Menger has been described as a fiery personality. As a junior faculty member at IIT in the 1960s, I found him gracious, charming, and vivacious. Menger was solicitous of students. From his early
days in Vienna onward he invited students and faculty to his home. In Chicago it included a tour of his decorative tile collection, which lined the walls of his living room. And he sometimes
invited doctoral students for early morning mathematical walks along Lake Michigan. His office was a showplace of chaos, the desktop covered with a turbulent sea of papers. He knew the exact
position of each scrap. On the telephone he could instruct a secretary exactly how to locate what he needed. Once, in his absence, a new secretary undertook to "make order", making little stacks
on his desk. Upon his return, discovering the disaster, he nearly wept, because "Now I don't know where anything is."
Menger taught in Chicago until he retired in 1971.
We have seen how Menger's interests extended beyond mathematics to philosophy and economics. Donald Gillies, in his fascinating paper on 'Karl Menger as a Philosopher', writes [13]:-
Menger is not always given his due recognition as a philosopher. In fact ..., many important ideas of the Vienna Circle originated with Menger - though they are often attributed to others.
Moreover, Menger's own philosophy of mathematics (for which I have coined the name: 'laissez-faire formalism') is the implicit philosophy of many working mathematicians. Generally, Menger could
be described as the most logical positivist of them all.
In Austrian Marginalism and Mathematical Economics (1973), Menger writes:-
Since I am the son of the author of the 'Grundsätze' as well as a mathematician, 'two souls reside within my breast '. I hope that this fact will help me in discussing various points of the
controversy objectively.
Karl Sigmund, reviewing Menger's paper, writes:-
By bringing together economists and mathematicians, Menger had played a catalytic role for which he was uniquely predisposed by his upbringing and education.
In [1] his interests are described as follows:-
He had a great love of music. ... he built up a notable collection of decorative tiles from all over the world. ... he ate meat sparingly, particularly in his last years. But he was always glad
to sample cuisines, from Cuban to Ethiopian, that were new to him. He liked baked apples.
Article by: J J O'Connor and E F Robertson
JOC/EFR © March 2014 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: http://www-history.mcs.st-andrews.ac.uk/Biographies/Menger.html
Mathematics Tutors
Jamaica Plain, MA 02130
Success in math and English
...Most recently I have done this at the secondary level as a public charter school teacher. Prior to that, alongside my own graduate work in
, I taught and assistant-taught college-level math classes, from remedial Calculus to Multivariate Calculus. Because...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
Results in a math exam
May 17th 2010, 11:52 PM #1
May 2010
Normal Distribution Question
Suppose that 11,877 students in New Zealand sat a mathematics exam, and 3.6% of these students scored 85% or higher.
Does this conform to a Normal distribution?
What is the theoretical expected value of students who should score 85% or more?
What % of students scored 88% or more?
11,877 students in New Zealand took a mathematics exam.
What percentage of these students should be expected to score 85% or more?
What percentage of these students should be expected to score 88% or more?
If 3.6% of students scored 85% or more, is this an expected result?
If 3.6% of students scored 85% or more, what percentage of students scored 88% or more?
Both versions of this question look ill-formed to me; is that exactly the wording of the question(s)?
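For what it is worth, the one thing the 3.6% figure does determine is the z-score of the 85% mark; every later part needs the mean and standard deviation that neither version supplies. A Python sketch using the standard library's NormalDist (the value sigma = 10 below is a purely hypothetical assumption, not something given in the thread):

```python
from statistics import NormalDist

n = 11877
print(round(0.036 * n))              # about 428 students scored 85% or more

std = NormalDist()                   # standard normal N(0, 1)
z85 = std.inv_cdf(1 - 0.036)         # z-score of the 85% mark (3.6% upper tail)
print(round(z85, 3))                 # roughly 1.8

# Moving from the 85% tail to the 88% tail requires the standard deviation
# of the marks, which the thread never supplies.  With an ASSUMED sigma:
sigma = 10.0                         # hypothetical, not from the thread
z88 = z85 + (88 - 85) / sigma
p88 = 1 - std.cdf(z88)
print(round(100 * p88, 2))           # percent scoring 88% or more, under that assumption
```

Without a stated sigma the last step has no unique answer, which is exactly the gap the reply above is pointing at.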
f(x) = x^3 f'(x) = 3x^2 why then when we use this notation: y = x^3 dy/dx = 3x^2(dx/dy) <----isn't y the dependent variable.. I'm so confused today.
I should go study...
Two people developed calculus: Newton and Leibniz. Newton chose to use primes ( ' ), as in f'(x), while Leibniz chose dy/dx, the differential in y over the differential in x. They both mean the same thing
for now, but dy/dx becomes important later on in Calculus 2 and 3.
when you're doing related rate problems though that extra dx/dt... bllllaaaaa.. :(
y is the dependent variable. y' = dy/dx = d/dx of y
The notation dy/dx is very useful when integrating and it can simply be read as "the derivative of y with respect to x."
you don't get the dx/dy when you use Leibniz notation, taking the differential of that equation.
I get confused when we start doing related rates and dt is thrown in the mix. That's what's going on.
\[y = x^3\]\[\implies \frac{d}{dx}[y] = \frac{d}{dx}[x^3]\]\[\implies \frac{dy}{dx} = 3x^2\]
Ok, and so if I took both sides with respect to dt... that's why there's an "extra" dx/dt.. Ok... I get it.
If however we wanted to take the differential with respect to t: \[\frac{d}{dt}[y] = \frac{d}{dt}[x^3]\]\[\implies \frac{dy}{dt} = \frac{d}{dx}[x^3] \cdot \frac{dx}{dt}\]\[\implies \frac{dy}{dt}
= 3x^2\frac{dx}{dt}\]
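The chain-rule identity just derived, dy/dt = 3x^2 (dx/dt), is easy to sanity-check numerically. A small Python sketch (not part of the thread; the choice x(t) = sin t is arbitrary) compares both sides with central finite differences:

```python
import math

def deriv(f, t, h=1e-6):
    # Central finite difference; error is O(h^2).
    return (f(t + h) - f(t - h)) / (2 * h)

def x(t):
    return math.sin(t)        # any differentiable x(t) will do

def y(t):
    return x(t) ** 3          # y = x^3, so dy/dt should equal 3 x^2 dx/dt

t0 = 0.7
lhs = deriv(y, t0)                      # dy/dt measured directly
rhs = 3 * x(t0) ** 2 * deriv(x, t0)     # 3 x^2 (dx/dt), the chain-rule form
print(abs(lhs - rhs))                   # tiny difference: the two sides agree
```

The extra dx/dt factor that confuses related-rates problems is exactly the deriv(x, t0) term on the right-hand side.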
right on.. foo! :)
I <3 Leibniz notation. Newton can suck it.
December 29th 2006, 11:32 AM #1
Dec 2006
word problem
there are 216 trees to be planned in a park. there are 54 people who will plant the trees. Each person will plant the same number of trees how many trees will each person plant
Method 1:
Suppose each person plants N trees, then the total number of trees planted
Nx54 = 216.
Divide both sides of this equation by 54 gives:
N = 216/54 = 4.
Method 2:
Suppose they plant 1 tree each, then the total is 54 trees, but this is less
than the 216 we are told were planted.
So now guess they plant 2, then the total planted is 108, still not enogh.
Try 3, total is now 162 getting closser.
Try 4, total is now 216, which is the required total.
Method 3:
The round the numbers, then the total number of trees is 200, and planters
is 50. 50 planters would have to plant 4 trees each to plant a total of 200.
Now check that if 54 planters plant 4 trees each then the total is 216.
ben wants to buy a guitar the regular price of the guitar is 329.99. the sale price of the guitar is 25% off the regular price. what is the sale price of the guitar?
ben must pay 7.25% sales tax in addition to the sale price of the guitar. what is the total amount ben must pay for the guitar
original price is 100% -------------> $329.99
sale price is 75% -----------------> $329.99 * 0.75 = $247.49
tax is 7.25% from the sale price ---> $247.49 * 0.0725 = $17.94
Add sale price and tax to get the total amount: $265.43
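Both answers are easy to verify with a few lines of Python:

```python
# Trees: 216 trees shared equally among 54 planters.
trees_each = 216 // 54
print(trees_each)  # 4

# Guitar: 25% off $329.99, then 7.25% sales tax on the sale price.
regular = 329.99
sale = round(regular * 0.75, 2)   # $247.49
tax = round(sale * 0.0725, 2)     # $17.94
total = round(sale + tax, 2)      # $265.43
print(sale, tax, total)           # 247.49 17.94 265.43
```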
Do us a favor: If you have a new problem, please start a new thread. Otherwise you risk that your problems will not be answered.
December 29th 2006, 11:40 AM #2
Grand Panjandrum
Nov 2005
December 30th 2006, 06:55 PM #3
Dec 2006
December 30th 2006, 11:24 PM #4
DMTCS Proceedings
2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05)
Stefan Felsner (ed.)
DMTCS Conference Volume AE (2005), pp. 145-150
author: Éric Rémila
title: Structure of spaces of rhombus tilings in the lexicographic case
keywords: rhombus tiling, flip, connectivity
abstract: Rhombus tilings are tilings of zonotopes with rhombohedra. We study a class of lexicographic rhombus tilings of zonotopes, which are deduced from higher Bruhat orders relaxing the
unitarity condition. Precisely, we fix a sequence v_1, v_2, ..., v_D of vectors of R^d and a sequence m_1, m_2, ..., m_D of positive integers. We assume (lexicographic hypothesis) that for each
subsequence v_{i_1}, v_{i_2}, ..., v_{i_d} of length d, we have det(v_{i_1}, v_{i_2}, ..., v_{i_d}) > 0. The zonotope Z is the set { Σ_i α_i v_i : 0 ≤ α_i ≤ m_i }. Each prototile used in a
tiling of Z is a rhombohedron constructed from a subsequence of d vectors. We prove that the space of tilings of Z is a graded poset, with minimal and maximal element.
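To make the lexicographic hypothesis concrete, here is a small numpy check (the vectors are an illustrative example, not taken from the paper): vectors on the moment curve t -> (1, t, t^2) satisfy the hypothesis, because every d×d determinant of an increasing subsequence is a Vandermonde determinant and hence positive.

```python
import itertools
import numpy as np

def lexicographic(vectors, d):
    """True iff det(v_{i1}, ..., v_{id}) > 0 for every increasing subsequence of length d."""
    return all(np.linalg.det(np.column_stack(sub)) > 0
               for sub in itertools.combinations(vectors, d))

# Vectors on the moment curve t -> (1, t, t^2): each 3x3 determinant is a
# Vandermonde determinant, hence positive when the t_i are increasing.
vs = [np.array([1.0, t, t * t]) for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
print(lexicographic(vs, 3))                  # True
print(lexicographic(list(reversed(vs)), 3))  # False: reversing flips the signs
```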
If your browser does not display the abstract correctly (because of the different mathematical symbols) you may look it up in the PostScript or PDF files.
reference: Éric Rémila (2005), Structure of spaces of rhombus tilings in the lexicographic case, in 2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05), Stefan
Felsner (ed.), Discrete Mathematics and Theoretical Computer Science Proceedings AE, pp. 145-150
bibtex: For a corresponding BibTeX entry, please consider our BibTeX-file.
ps.gz-source: dmAE0129.ps.gz (73 K)
ps-source: dmAE0129.ps (228 K)
pdf-source: dmAE0129.pdf (229 K)
Automatically produced on Di Sep 27 10:09:49 CEST 2005 by gustedt
Energy versus angular momentum in black hole binaries
Damour, T. and Nagar, A. and Pollney, D. and Reisswig, C. (2012) Energy versus angular momentum in black hole binaries. Physical Review Letters, 108 (13). p. 131101. ISSN 1079-7114
Official URL: http://prl.aps.org/abstract/PRL/v108/i13/e131101
Using accurate numerical-relativity simulations of (nonspinning) black-hole binaries with mass ratios 1∶1, 2∶1, and 3∶1, we compute the gauge-invariant relation between the (reduced) binding energy E
and the (reduced) angular momentum j of the system. We show that the relation E(j) is an accurate diagnostic of the dynamics of a black-hole binary in a highly relativistic regime. By comparing the
numerical-relativity ENR(j) curve with the predictions of several analytic approximation schemes, we find that, while the canonically defined, nonresummed post-Newtonian–expanded EPN(j) relation
exhibits large and growing deviations from ENR(j), the prediction of the effective one body formalism, based purely on known analytical results (without any calibration to numerical relativity),
agrees strikingly well with the numerical-relativity results.
Item Type: Article
Uncontrolled Keywords: Analytic approximation; Analytical results; Black holes; Mass ratio; Numerical relativity; Relativistic regime; Binding energy; relativity
Subjects: Q Science > QA Mathematics
Q Science > QC Physics
Divisions: Faculty > Faculty of Science > Mathematics (Pure & Applied)
ID Code: 3812
Deposited By: Denis Pollney
Deposited On: 26 Oct 2012 06:33
Last Modified: 26 Oct 2012 06:33
Math Help
October 5th 2007, 05:34 PM #1
Feb 2007
Been stuck on this proof for a while (it's in the attached pdf). There's a hint to either integrate by parts or differentiate both sides with respect to p. I've tried both but can't seem to get
anywhere with it. Any help would be greatly appreciated. Thanks!
Model Validation
6. Process or Product Monitoring and Control
6.6. Case Studies in Process Monitoring
6.6.2. Aerosol Particle Size
6.6.2.4. Model Validation
Residuals: After fitting the model, we should check whether the model is appropriate.
As with standard non-linear least squares fitting, the primary tool for model diagnostic checking is residual analysis.
4-Plot of Residuals from ARIMA(2,1,0) Model: The 4-plot is a convenient graphical technique for model validation in that it tests the assumptions for the residuals on a single graph.
Interpretation of the 4-Plot: We can make the following conclusions based on the above 4-plot.
1. The run sequence plot shows that the residuals do not violate the assumption of constant location and scale. It also shows that most of the residuals are in the range (-1, 1).
2. The lag plot indicates that the residuals are not autocorrelated at lag 1.
3. The histogram and normal probability plot indicate that the normal distribution provides an adequate fit for this model.
Autocorrelation Plot of Residuals from ARIMA(2,1,0) Model: In addition, the autocorrelation plot of the residuals from the ARIMA(2,1,0) model was generated.
Interpretation of the Autocorrelation Plot: The autocorrelation plot shows that for the first 25 lags, all sample autocorrelations except those at lags 7 and 18 fall inside the 95% confidence bounds, indicating the residuals appear to be random.
Test the Randomness of Residuals from the ARIMA(2,1,0) Model Fit: We apply the Box-Ljung test to the residuals from the ARIMA(2,1,0) model fit to determine whether the residuals are random. In this example, the Box-Ljung test shows that the first 24 lag autocorrelations among the residuals are zero (p-value = 0.080), indicating that the residuals are random and that the model provides an adequate fit to the data.
4-Plot of Residuals from ARIMA(0,1,1) Model: The 4-plot is a convenient graphical technique for model validation in that it tests the assumptions for the residuals on a single graph.
Interpretation of the 4-Plot from the ARIMA(0,1,1) Model: We can make the following conclusions based on the above 4-plot.
1. The run sequence plot shows that the residuals do not violate the assumption of constant location and scale. It also shows that most of the residuals are in the range (-1, 1).
2. The lag plot indicates that the residuals are not autocorrelated at lag 1.
3. The histogram and normal probability plot indicate that the normal distribution provides an adequate fit for this model.
This 4-plot of the residuals indicates that the fitted model is adequate for the data.
Autocorrelation Plot of Residuals from ARIMA(0,1,1) Model: The autocorrelation plot of the residuals from ARIMA(0,1,1) was generated.
Interpretation of the Autocorrelation Plot: Similar to the result for the ARIMA(2,1,0) model, it shows that for the first 25 lags, all sample autocorrelations except those at lags 7 and 18 fall inside the 95% confidence bounds, indicating the residuals appear to be random.
Test the Randomness of Residuals from the ARIMA(0,1,1) Model Fit: The Box-Ljung test is also applied to the residuals from the ARIMA(0,1,1) model. The test indicates that there is at least one non-zero autocorrelation among the first 24 lags. We conclude that there is not enough evidence to claim that the residuals are random (p-value = 0.026).
Summary: Overall, the ARIMA(0,1,1) is an adequate model. However, the ARIMA(2,1,0) is a little better than the ARIMA(0,1,1).
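The Box-Ljung (Ljung-Box) statistic used in both tests is straightforward to compute directly. The sketch below implements it in plain Python; the lag count and the critical value 11.07 (the 95% point of chi-square with 5 degrees of freedom) are illustrative, and for residuals from a fitted ARIMA model the degrees of freedom should be reduced by the number of estimated parameters.

```python
def ljung_box_q(x, max_lag):
    """Box-Ljung statistic Q = n(n+2) * sum_{k=1..h} r_k^2 / (n-k),
    where r_k is the lag-k sample autocorrelation of x."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    denom = sum(d * d for d in dev)
    q = 0.0
    for k in range(1, max_lag + 1):
        r_k = sum(dev[t] * dev[t + k] for t in range(n - k)) / denom
        q += r_k * r_k / (n - k)
    return n * (n + 2) * q

# A strongly autocorrelated series (alternating signs) should be rejected:
x = [1.0, -1.0] * 50
print(ljung_box_q(x, 5) > 11.07)  # True: Q far exceeds the chi-square 95% point for 5 df
```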
[SciPy-User] Cholesky problem (I need dtrtrs, not dpotrs)
Sturla Molden sturla@molden...
Thu Jul 8 23:33:28 CDT 2010
Sturla Molden skrev:
> Is there any way of getting access to Lapack function dtrtrs or BLAS
> function dtrsm from SciPy? cho_solve does not really do what I want (it
> calls dpotrs). Which by the way is extremely annoying, since 99% of use
> cases for Cholesky (at least in statistics) require solving U'X = Y, not
> U'UX = Y as cho_solve does. Golub & Van Loan does not even bother to
> mention the dpotrs algorithm.
Just to elaborate on this:
Say we want to calculate the Mahalanobis distance from somw points X to
a distribution N(m,S). With cho_factor and cho_solve, that would be
cx = X - m
sqmahal = (cx*cho_solve(cho_factor(S),cx.T).T).sum(axis=1)
whereas a similar routine "tri_solve" using dtrtrs would be
cx = X - m
sqmahal = (tri_solve(cho_factor(S),cx.T).T**2).sum(axis=1)
This looks almost the same in Python, but the solution with tri_solve (dtrtrs) requires only half as many flops as cho_solve (dpotrs) does.
In many statistical applications requiring substantial amount of computation (EM algorithms, MCMC simulation, and the like), computing Mahalanobis distances can be the biggest bottleneck.
So that is one thing I really miss in SciPy.
P.S. In Fortran or C we would rather use a tight loop on dtrsv instead, computing sqmahal point by point, as it is more friendly to cache than dtrtrs on the whole block. Computing mahalanobis distances efficiently is a so common use case for Cholesky that I (almost) suggest this be added to SciPy as well.
P.P.S. Those transpositions are actually required to make it run fast; in Matlab they would slow things down terribly. NumPy and Matlab is very different in this respect. A transpose does not create a new array in NumPy, it just switches the order flag between C and Fortran. C order is NumPy's native, but we must have Fortran order before calling BLAS or LAPACK. If we don't, f2py will make a copy with a transpose. So we avoid a transpose by taking a transpose. It might seem a bit paradoxical.
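Later SciPy releases expose a triangular solver, scipy.linalg.solve_triangular (a wrapper around the LAPACK *trtrs routines), which gives exactly the triangular solve asked for above. A sketch of the Mahalanobis computation with it follows, using illustrative data and cross-checked against the direct (more expensive) formula:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Illustrative data: 4 points in R^3 and a symmetric positive definite covariance.
X = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.5, -1.0],
              [-2.0, 0.3, 0.7],
              [0.0, 0.0, 0.0]])
m = np.array([0.1, -0.2, 0.3])
S = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 1.5]])

cx = X - m
L = np.linalg.cholesky(S)            # S = L L', with L lower triangular
# Triangular solve L y = cx' -- this is what LAPACK dtrtrs does, and it needs
# only half the flops of cho_solve's dpotrs (which solves L L' x = b).
y = solve_triangular(L, cx.T, lower=True)
sqmahal = (y ** 2).sum(axis=0)

# Cross-check against the direct formula d_i^2 = cx_i' S^{-1} cx_i.
direct = np.einsum("ij,jk,ik->i", cx, np.linalg.inv(S), cx)
print(np.allclose(sqmahal, direct))  # True
```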
Establishing a standard definition for child overweight and obesity worldwide: international survey | BMJ
1. Correspondence to: T J Cole
Objective: To develop an internationally acceptable definition of child overweight and obesity, specifying the measurement, the reference population, and the age and sex specific cut off points.
Design: International survey of six large nationally representative cross sectional growth studies.
Setting: Brazil, Great Britain, Hong Kong, the Netherlands, Singapore, and the United States.
Subjects: 97 876 males and 94 851 females from birth to 25 years of age.
Main outcome measure: Body mass index (weight/height^2).
Results: For each of the surveys, centile curves were drawn that at age 18 years passed through the widely used cut off points of 25 and 30 kg/m2 for adult overweight and obesity. The resulting
curves were averaged to provide age and sex specific cut off points from 2-18 years.
Conclusions: The proposed cut off points, which are less arbitrary and more internationally based than current alternatives, should help to provide internationally comparable prevalence rates of
overweight and obesity in children.
The prevalence of child obesity is increasing rapidly worldwide.1 It is associated with several risk factors for later heart disease and other chronic diseases including hyperlipidaemia,
hyperinsulinaemia, hypertension, and early atherosclerosis.2–4 These risk factors may operate through the association between child and adult obesity, but they may also act independently.5
Because of their public health importance, the trends in child obesity should be closely monitored. Trends are, however, difficult to quantify or to compare internationally, as a wide variety of
definitions of child obesity are in use, and no commonly accepted standard has yet emerged. The ideal definition, based on percentage body fat, is impracticable for epidemiological use. Although less
sensitive than skinfold thicknesses,6 the body mass index (weight/height^2) is widely used in adult populations, and a cut off point of 30 kg/m^2 is recognised internationally as a definition of
adult obesity.7
Body mass index in childhood changes substantially with age.8 9 At birth the median is as low as 13 kg/m^2, increases to 17 kg/m^2 at age 1, decreases to 15.5 kg/m^2 at age 6, then increases to 21 kg
/m^2 at age 20. Clearly a cut off point related to age is needed to define child obesity, based on the same principle at different ages, for example, using reference centiles.10 In the United States,
the 85th and 95th centiles of body mass index for age and sex based on nationally representative survey data have been recommended as cut off points to identify overweight and obesity.11
For wider international use this definition raises two questions: why base it on data from the United States, and why use the 85th or 95th centile? Other countries are unlikely to base a cut off
point solely on American data, and the 85th or 95th centile is intrinsically no more valid than the 90th, 91st, 97th, or 98th centile. Regardless of centile or reference population, the cut off point
can still be criticised as arbitrary.
A reference population could be obtained by pooling data from several sources, if sufficiently homogeneous. A centile cut off point could in theory be identified as the point on the distribution of
body mass index where the health risk of obesity starts to rise steeply. Unfortunately such a point cannot be identified with any precision: children have less disease related to obesity than adults,
and the association between child obesity and adult health risk may be mediated through adult obesity, which is associated both with child obesity and adult disease.
The adult cut off points in widest use—a body mass index of 25 kg/m^2 for overweight and 30 kg/m^2 for obesity—are related to health risk1 but are also convenient round numbers. A workshop organised
by the International Obesity Task Force proposed that these adult cut off points be linked to body mass index centiles for children to provide child cut off points.12 13 We describe the development
of age and sex specific cut off points for body mass index for overweight and obesity in children, using dataset specific centiles linked to adult cut off points.
Subjects and methods
We obtained data on body mass index for children from six large nationally representative cross sectional surveys on growth from Brazil, Great Britain, Hong Kong, the Netherlands, Singapore, and the
United States (table 1). Each survey had over 10 000 subjects, with ages ranging from 6-18 years, and quality control measures to minimise measurement error. Four of the datasets were based on single
samples whereas the British and American data consisted of pooled samples collected over a period of time. We omitted the most recent survey data from the United States (1988-94) because we preferred
to use data predating the recent increase in prevalence of obesity.19 In practice this decision made virtually no difference to the final cut off points.
Centile curves
For each dataset, centile curves were fitted with the LMS method, which summarises the data in terms of three smooth age specific curves called L (lambda), M (mu), and S (sigma). The M and S curves correspond to the median and coefficient of variation of body mass
index at each age whereas the L curve allows for the substantial age dependent skewness in the distribution of body mass index. The values for L, M, and S can be tabulated for a series of ages. The
Brazilian and US surveys (table 1) used a weighted sampling design, and their data were analysed accordingly.
The assumption underlying the LMS method is that after Box-Cox power transformation the data at each age are normally distributed. The points on each centile curve are defined in terms of the formula

BMI = M(1 + L S z)^(1/L)   (equation 1)
where L, M, and S are values of the fitted curves at each age, and z indicates the z score for the required centile, for example, z=1.33 for the 91st centile. Figure 1 shows centiles for body mass
index by sex based on the British reference,9 with seven centiles spaced two thirds of a z score apart—that is, z=−2, −1.33, −0.67, 0, +0.67, +1.33, and +2.
Figure 1 also shows body mass index values of 25 and 30 kg/m^2 at age 18; 25 kg/m^2 is just below the 91st centile in both sexes, whereas 30 kg/m^2 is above the 98th centile. The body mass index
(BMI) values can be converted to exact z scores from the L, M, and S values at age 18, with the formula

z = [(BMI/M)^L − 1]/(L S)   (equation 2)
The body mass index of 25 kg/m^2 at age 18 is z score +1.19 in females, corresponding to the 88th centile, and +1.30 in males, on the 90th centile. Therefore the prevalence of overweight at age 18 is
10-12%. A body mass index of 30 kg/m^2 at age 18 is on the 99th centile in both sexes, an obesity prevalence of about 1%.
Each z score substituted into equation 1 provides the formula for an extra centile curve passing through the specified point (dotted line in fig 1). Each centile curve defines cut off points through
childhood that correspond in prevalence of overweight or obesity to that of the adult cut off point—the curve joins up points where the prevalence matches that seen at age 18.
This process is repeated for all six datasets, by sex. Superimposing their curves leads to a cluster of centile curves that all pass through the adult cut off point yet represent a wide range of
overweight and obesity. The hypothesis is that the relation between cut off point and prevalence at different ages gives the same curve shape irrespective of country or obesity. If sufficiently
similar the curves can be averaged to provide a single smooth curve passing through the adult cut off point. The curve is representative of all the datasets involved but is unrelated to their
obesity—the cut off point is effectively independent of the spectrum of obesity in the reference data.
Figure 2 shows the median curves for body mass index in the six datasets by sex from birth to 20 years. A wide range of values spans several units of body mass index in both sexes. These show the
different extents of overweight across datasets, reflecting national differences in fatness. The median curves are all about the same shape, although the curve for Singaporean males is more curved,
being lowest at ages 6 and 19 and highest at age 11.
Averaging the median curves would be a simple way to summarise the age trend in body mass index through childhood. But the resulting position of the curve at each age would depend on the overweight
prevalence of the countries in the reference set, and so would be comparatively arbitrary. In any case the median is not an extreme centile and is ineffective as a cut off point. So averaging the
median curves is not the answer.
Instead the centile curves are linked to adult cut off points of 25 and 30 kg/m^2, positioned at age 18 to maximise the available data. These values are expressed as centiles for each dataset, and
the corresponding centile curves are drawn. Figure 1 shows the centile curves for overweight and obesity for the British reference.
Figure 3 presents the centile curves for overweight for the six datasets by sex, passing through the adult cut off point of 25 kg/m^2 at age 18. They are much closer together than the median curves
(fig 2), particularly above age 10, because the national differences in overweight prevalence have been largely adjusted out. The divergence of the Singaporean curve is more pronounced than in figure 2.
Figure 4 gives the corresponding centile curves for obesity in each dataset, all passing through a body mass index of 30 kg/m^2 at age 18. There is less agreement than for the centiles for
overweight, and again Singapore stands out.
Table 2 gives the centiles for overweight corresponding to a body mass index of 25 kg/m^2 at age 18 for each dataset by sex. For example, they approximate the 95th centile for Dutch males and the
90th centile for British males—that is, prevalences of overweight of 5-10%. The centiles for obesity corresponding to a body mass index of 30 kg/m^2 in table 3 are mainly above the 97th centile, less
than 3% prevalence, and they show more variability.
The curves in figures 3 and 4 are reasonably consistent across countries between ages 8 and 18, although those for Singapore are higher between ages 10 and 15. This is due partly to the increased
median (fig 2) and partly to greater variability. The LMS method estimates the coefficient of variation (or S curve) of body mass index during the centile fitting process, and figure 5 compares the S
curves for the six datasets. Between ages 6 and 15 the coefficient of variation in Singapore is greater than for the other countries. The range of values for the coefficient of variation in puberty
is greater for males than females, and for Brazil, Singapore, and the United States the curves for both sexes show a peak in puberty.
The amount of skewness, as measured by the sample L curves, is similar across countries. The Box-Cox powers are consistently between −1 and −2 indicating extreme skewness (not shown).
Table 4 shows international cut off points for body mass index for overweight and obesity from 2-18 years, obtained by averaging the centile curves in figures 3 and 4. From 2-6 years the cut off
points do not include Singapore because its data start at age 6 years. Figure 6 shows the cut off points, with the values at 5.5 and 6 years adjusted slightly to ensure a smooth join between the two
sets of curves.
Our method addresses the two main problems of defining internationally acceptable cut off points for body mass index for overweight and obesity in children. The reference population was obtained by
averaging across a heterogeneous mix of surveys from different countries, with widely differing prevalence rates for obesity, whereas the appropriate cut off point was defined in body mass index
units in young adulthood and extrapolated to childhood, conserving the corresponding centile in each dataset. This principle, proposed at a meeting in 1997,13 was discussed in a recent editorial.12
Although less arbitrary and potentially more internationally acceptable than other cut off points, this approach still provides a statistical definition, with all the advantages and disadvantages
that that implies.20 Our terminology corresponds to adult cut off points, but the health consequences for children above the cut off points may differ from those for adults. Children who are
overweight but not obese should be evaluated for other factors as well.11 Nonetheless, the cut off points based on a heterogeneous worldwide population can be applied widely to determine whether the
children and adolescents they identify are at increased risk of morbidity related to obesity.
Agreement of the centile curves
The major uncertainty with our approach, and the test of its validity, is the extent to which the centile curves for the datasets are of the same shape. Figures 3 and 4 show that although the
agreement is reasonable it is not perfect. If it were perfect—that is, all the curves were superimposed—the reference cut off points applied to a given dataset would give the same prevalence for
obesity at all ages, which could be predicted from the prevalence at age 18. So the different shapes in figures 3 and 4 show to what extent the age specific prevalence deviates from the age 18
prevalence within datasets.
We did consider six other datasets for our analysis (Canada, France, Japan, Russia, Sweden, and Venezuela) but we excluded them because they were either too small or nationally unrepresentative.
Their centile curves for overweight in figure 7 are similar to those in figures 3 and 4. (Data for Japan and girls in Sweden and Venezuela are omitted as they do not extend to age 18). Singapore and
Canada are clear outliers during puberty, whereas Russia stands out earlier in childhood. The median curves for Japan and Hong Kong are similar in shape (not shown), suggesting that Singapore is
atypical of Asia.
Nothing obvious explains Singapore's unusual pattern of overweight in puberty. Omitting it from the averaged country curves would lower the cut off points for both sexes by up to 0.4 body mass index
units or 0.14 z score units at age 11-12. This compares to a range of three units between the lowest and highest curves at this age. Therefore, even though Singapore looks different from the other
countries, its impact on the cut off points is only modest. Because there is no a priori reason to exclude Singapore, and because so little is known about growth patterns across countries, we have
chosen to retain it in the reference population.
Extending the dataset
We recognise that the reference population made up of these countries is less than ideal. It probably reflects Western populations adequately but lacks representation from other parts of the world.
The Hong Kong sample may, however, be fairly representative of the Chinese, and the Brazilian and US datasets include many subjects of African descent. Although additional datasets from Africa and
Asia would be helpful, our stringent inclusion criteria of a large sample, national representativeness, minimum age range 6-18 years, and data quality control, mean that further datasets are unlikely
to emerge from these continents in the foreseeable future. To our knowledge no other available surveys satisfy the criteria. It is not realistic to wait for them because there is an urgent need for
international cut off points now. Also, our methodology aims to adjust for differences in overweight between countries, so it could be argued that adding other countries to the reference set would
make little difference to the cut off points. None the less, further research is needed to explore patterns of body mass index in children in Africa and Asia.
The body mass index curves in figure 6 show a fairly linear pattern for males but a higher and more concave shape for females. This sex difference can also be seen in the individual curves of figures
2 to 4 reflecting earlier puberty in females. The sensitivity of the curve's shape to the timing of puberty may affect the performance of the cut off points in countries where puberty is appreciably
delayed,21 although delays of less than two years are unlikely to make much difference.
Use of cut off points
The cut off points in table 4 are tabulated at exact half year ages and for clinical use need to be linearly interpolated to the subject's age. For epidemiological use, with age groups of one year
width, the cut off point at the mid year value (for example, at age 7.5 for the 7.0-8.0 age group) will give an essentially unbiased estimate of the prevalence.
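The clinical interpolation can be sketched as follows; the tabulated values here are placeholders, not the published table 4 entries:

```python
def cutoff_at(age, table):
    """Linearly interpolate a cut off point from half-year {age: bmi} entries."""
    ages = sorted(table)
    for a0, a1 in zip(ages, ages[1:]):
        if a0 <= age <= a1:
            frac = (age - a0) / (a1 - a0)
            return table[a0] + frac * (table[a1] - table[a0])
    raise ValueError("age outside the tabulated range")

# Placeholder half-year entries -- NOT the published table 4 values.
overweight = {17.0: 24.5, 17.5: 24.7, 18.0: 25.0}
print(round(cutoff_at(17.25, overweight), 2))  # 24.6
```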
The centiles for obesity involve more extrapolation than the centiles for overweight, which may explain the greater variability across datasets in figure 4 compared with figure 3. For this reason the
obesity cut off points in figure 6 are fairly imprecise and are likely to be less useful than the cut off points for overweight.
The approximate prevalence values for overweight and obesity in tables 2 and 3 are calculated as the tail areas of the body mass index distribution in each sample at age 18, as estimated by the LMS
method. This assumes that the distribution is normal after adjusting for skewness, which is inevitably only an approximation. In the British data there was slight kurtosis (heavy tails) in the
distribution of body mass index,15 with 2.8% of the sample rather than the 2.3% expected exceeding a z score of 2. Therefore the true prevalences for the other datasets here may differ slightly from
the values quoted.
The principle used to obtain cut off points for overweight and obesity in children could also provide a cut off point for underweight in children, based on the World Health Organisation's cut off
point of a body mass index of 18.5 kg/m^2 for adult underweight. A body mass index of 18.5 kg/m^2 in a young adult is, however, equivalent to the British 12th centile,9 an unacceptably high
prevalence of child underweight. A possible alternative would be a cut off point of a body mass index of 17 kg/m^2, on the British second centile at age 18.9 Although substantial data link cut off
points of 25 and 30 kg/m^2 to morbidity in adults22 and the corresponding centile cut off points are associated with morbidity in children,23 the health effects of cut off points corresponding to a
body mass index below 17 or 18.5 kg/m^2 have not been studied. These cut off points for underweight need validating as markers of disease risk.
Based on cross sectional data the curves give no information about centile crossing over time—a weakness of most “growth” charts. Longitudinal data are needed to derive correlations of body mass
index from one age to another, which then define the likely variability of centile crossing.24 25
Our analysis provides cut off points for body mass index in childhood that are based on international data and linked to the widely accepted adult cut off points of a body mass index of 25 and 30 kg/
m^2. Our approach avoids some of the usual arbitrariness of choosing the reference data and cut off point. Applying the cut off points to the national datasets on which they are based gives a wide
range of prevalence estimates at age 18 of 5-18% for overweight and 0.1-4% for obesity. A similar range of estimates is likely to be seen from age 2-18. The cut off points are recommended for use in
international comparisons of prevalence of overweight and obesity.
What is already known on this topic
Child obesity is a serious public health problem that is surprisingly difficult to define
The 95th centile of the US body mass index reference has recently been proposed as a cut off point for child obesity, but like previous definitions it is far from universally accepted
What this study adds
A new definition of overweight and obesity in childhood, based on pooled international data for body mass index and linked to the widely used adult obesity cut off point of 30 kg/m2, has been proposed
The definition is less arbitrary and more international than others, and should encourage direct comparison of trends in child obesity worldwide
We thank Carlos Monteiro (Brazil), Sophie Leung (Hong Kong), Machteld Roede (the Netherlands), Uma Rajan (Singapore), Claude Bouchard (Canada), Marie Françoise Rolland Cachera (France), Yuji
Matsuzawa (Japan), Barry Popkin (USA, for the Russian data), Gunilla Tanner-Lindgren (Sweden), and Mercedes Lopez de Blanco (Venezuela) for allowing us access to their data.
Contributors: TJC had the original idea, did most of the statistical analyses, and wrote the first draft of the paper. TJC, MCB, KMF, and WHD provided the data. KMF did further analyses of the US
data. All authors attended the original childhood obesity workshop, participated in the design and planning of the study, discussed the interpretation of the results, and contributed to the final
paper. TJC will act as guarantor for the paper.
• Funding This work was supported by the Childhood Obesity Working Group of the International Obesity Task Force. TJC is supported by a Medical Research Council programme grant
• Competing interests None declared | {"url":"http://www.bmj.com/content/320/7244/1240","timestamp":"2014-04-19T22:14:59Z","content_type":null,"content_length":"168976","record_id":"<urn:uuid:f122674a-36c0-4e62-b879-1bd70b00d5af>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Changing Time Zones
Date: 11/17/1999 at 11:41:06
From: Caroline Reynolds
Subject: Time zones
1) An airplane left New York at 09:15 to travel to London. If the
flight took 5 hrs. 45 min., what was the local time when the plane
landed in London?
2) When it is midnight on Sunday in London, what time is it at 165
degrees west longitude?
Date: 11/17/1999 at 12:58:22
From: Doctor Rick
Subject: Re: Time zones
Hi, Caroline.
1) Here you have 2 steps: (a) add 5 hr. 45 min. to 09:15. (b) You're
still on New York time, so convert to London time by adding or
subtracting the correct number of hours.
2) If you pretend that each time zone is just a band of longitudes,
then you can figure it out by math. To be sure you're right, you'd
have to look on a map; and then you might find that which time zone
you're in depends on your latitude.
Each time zone (a difference of 1 hour in local time) is 15 degrees
wide. How many time zones are there between 0 degrees and 165 degrees?
Then you need to decide whether you add or subtract hours. Remember
that the sun "moves" from east to west, so as you go west, noon (or
any other time) comes later. At any one moment, the time is earlier to
the west. I live near New York, for instance, and I know that when
it's midnight in London, it's 7 PM here.
- Doctor Rick, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/57019.html","timestamp":"2014-04-21T08:18:21Z","content_type":null,"content_length":"6298","record_id":"<urn:uuid:2f5a0019-d981-4ba4-9c82-28a3a61fdd58>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
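Both questions reduce to clock arithmetic; a quick check of Dr. Rick's hints (Python, using the 5-hour standard-time offset between New York and London mentioned in the answer, and ignoring daylight saving):

```python
# Q1: leave New York at 09:15, fly 5 h 45 min, then convert to
# London time (London = New York + 5 hours, standard time).
depart_ny = 9 * 60 + 15                      # minutes past midnight
arrive_ny = depart_ny + 5 * 60 + 45          # 15:00 New York time
arrive_london = (arrive_ny + 5 * 60) % (24 * 60)
print(divmod(arrive_london, 60))             # (20, 0): 8 PM in London

# Q2: 165 degrees west is 165 / 15 = 11 time zones west of London,
# so local time runs 11 hours earlier than midnight Sunday.
zones_west = 165 // 15
local_hour = (0 - zones_west) % 24
print(local_hour)                            # 13: 1 PM, still Sunday
```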
MathGroup Archive: January 2011 [00744]
Re: a bug in Mathematica 7.0?
• To: mathgroup at smc.vnet.net
• Subject: [mg115857] Re: a bug in Mathematica 7.0?
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sun, 23 Jan 2011 05:34:39 -0500 (EST)
----- Original Message -----
> From: "yaqi" <yaqiwang at gmail.com>
> To: mathgroup at smc.vnet.net
> Sent: Saturday, January 22, 2011 2:22:13 AM
> Subject: [mg115836] a bug in Mathematica 7.0?
> Hello,
> I was shocked by the integration result of spherical harmonics given
> by Mathematica 7.0. The notebook conducting these evaluations is
> attached at the end of this post.
> Basically, I create a vector of real harmonics Y={Y_{n,k},k=-
> n,n;n=0,4} and then integrate Y_{n,k}*Y_{n,k}*Omega_y over the entire
> 2D sphere. The integral should be zero for Y_{2,2}*Y_{4,-4}*Omega_y
> but Mathematica 7.0 gives me -55*Sqrt[21]/512. Similar for Y_{4,2}
> *Y_{4,-4}*Omega_y, it should be zero but I get 99*Sqrt[7]/2048.
> So I create another vector of normal spherical harmonics by using
> 'SphericalHarmonicY' and then map it to the real harmonics and do the
> integral mentioned above. The only difference is that I have a change
> of variable in this integral; instead of using the cosine of the polar
> angle, I used the polar angle for the integral directly. This time,
> Mathematica 7.0 gives me correct results.
> The only differences between the two results are the two terms I
> mentioned above. I did the similar thing with Mathematica 5.0.
> Everything is correct.
> So can somebody take a look on the notebook, see if I messed up some
> variable usages or this is indeed a bug in Mathematica 7.0? I use
> Mathematica 7.0 for my regular derivations, this really shocked me!
> I do not know how to attach a file, so I copy and paste the entire
> notebook and attached below.
> Many thanks.
> [...]
Please send the integrand and expected result for one of the bad cases. What you have is a large matrix, and I do not know which examples are problematic, let alone what specific integrands produced them. (For example, I do not know what integrand goes with the statement "Y_{2,2}*Y_{4,-4}*Omega_y". Maybe this is inexcusable ignorance on my part. Humor me.)
Can send to any or all of myself, MathGroup, or Wolfram Research Tech Support.
Daniel Lichtblau
Wolfram Research | {"url":"http://forums.wolfram.com/mathgroup/archive/2011/Jan/msg00744.html","timestamp":"2014-04-17T04:07:38Z","content_type":null,"content_length":"27191","record_id":"<urn:uuid:19e6488c-abe3-471f-b37d-d0d7957ebee6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
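For what it's worth, the orthogonality the poster expected can be checked numerically outside Mathematica. The sketch below uses SciPy's complex spherical harmonics rather than the poster's real harmonics and extra weight factor, so it only verifies the underlying orthonormality relation, not the exact integrals in the notebook:

```python
import numpy as np

try:  # SciPy >= 1.15 renames sph_harm to sph_harm_y (polar angle first)
    from scipy.special import sph_harm_y
    def Y(l, m, az, pol):
        return sph_harm_y(l, m, pol, az)
except ImportError:
    from scipy.special import sph_harm
    def Y(l, m, az, pol):
        return sph_harm(m, l, az, pol)

# Midpoint grid over the sphere: az in [0, 2*pi), pol in (0, pi).
nt, nph = 400, 200
az = (np.arange(nt) + 0.5) * 2 * np.pi / nt
pol = (np.arange(nph) + 0.5) * np.pi / nph
AZ, POL = np.meshgrid(az, pol)
dA = np.sin(POL) * (2 * np.pi / nt) * (np.pi / nph)   # surface element

def inner(l1, m1, l2, m2):
    """Numerical inner product <Y_l1^m1, Y_l2^m2> over the unit sphere."""
    return np.sum(Y(l1, m1, AZ, POL) * np.conj(Y(l2, m2, AZ, POL)) * dA)

print(inner(2, 2, 2, 2))   # ~1 (normalization)
print(inner(2, 2, 4, -4))  # ~0 (orthogonality)
```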
direct image (functor)
If $f:X\to Y$ is a continuous map of topological spaces, and if ${\bf Sheaves}(X)$ is the category of sheaves of abelian groups on $X$ (and similarly for ${\bf Sheaves}(Y)$), then the direct image
functor $f_{*}:{\bf Sheaves}(X)\to{\bf Sheaves}(Y)$ sends a sheaf $\mathcal{F}$ on $X$ to its direct image $f_{*}\mathcal{F}$ on $Y$. A morphism of sheaves $g:\mathcal{F}\to\mathcal{G}$ obviously
gives rise to a morphism of sheaves $f_{*}g:f_{*}\mathcal{F}\to f_{*}\mathcal{G}$, and this determines a functor.
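Concretely, the direct image is defined on open sets by pulling back along $f$: for every open $V\subseteq Y$,

$(f_{*}\mathcal{F})(V)=\mathcal{F}(f^{-1}(V)),$

with restriction maps inherited from those of $\mathcal{F}$. For example, when $Y$ is a one-point space, $f_{*}\mathcal{F}$ is just the global sections $\Gamma(X,\mathcal{F})$.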
If $\mathcal{F}$ is a sheaf of abelian groups (or anything else), so is $f_{*}\mathcal{F}$, so likewise we get direct image functors $f_{*}:{\bf Ab}(X)\to{\bf Ab}(Y)$, where ${\bf Ab}(X)$ is the
category of sheaves of abelian groups on $X$.
JavaScript Sudoku Solver
Sudoku is a puzzle that appears in my local newspaper. As humans are asked to do, the script in this page attempts to fill in the grid so that every row, every column, and every 3x3 box contains the
digits 1 through 9. You may be able to improve your Sudoku solving skills by studying the steps toward a solution produced by this script.
To use the script, fill in the cells in the initial grid with the values from your puzzle, and then push the solve button. A generated grid cell contains either a non-zero digit when the value of the
cell has been determined, a sequence of non-zero digits when the digits omitted have been eliminated as possibilities, or a period when no possibilities have been eliminated. A cell contains a
question mark if the grid is inconsistent.
A puzzle written in text can be loaded into the initial grid. Paste the puzzle into the multiple row text area below the grid, and then push the load button. The script ignores white space, and the
characters hyphen, vertical bar, and plus sign. It also ignores comments surrounded by parentheses. What should be left is eighty-one non-zero digits and periods, which are entered into the initial grid.
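The loader's rules can be sketched in a few lines (Python rather than the page's JavaScript; the function name is mine):

```python
import re

def parse_grid(text):
    """Parse a puzzle in the page's text format: parenthesized
    comments, whitespace, and the characters - | + are ignored;
    what remains must be 81 digits 1-9 or periods (empty cells)."""
    text = re.sub(r"\([^)]*\)", "", text)        # drop (comments)
    cells = [c for c in text if c not in " \t\r\n-|+"]
    if len(cells) != 81 or any(c not in ".123456789" for c in cells):
        raise ValueError("expected 81 non-zero digits and periods")
    return [0 if c == "." else int(c) for c in cells]

# An empty grid decorated with pipes and a comment still parses:
sample = "(empty puzzle)\n" + "\n".join(["...|...|..."] * 9)
print(len(parse_grid(sample)))  # 81
```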
Copyright © 2005 John D. Ramsdell. This web page, including its inline JavaScript program, is free software made available under the terms of the GNU General Public License. You are encouraged to
study the source code, and either add your own solution strategies, or delete the ones it uses, and create your own complete set of strategies. | {"url":"http://www.ccs.neu.edu/home/ramsdell/tools/sudoku.html","timestamp":"2014-04-18T15:40:34Z","content_type":null,"content_length":"36197","record_id":"<urn:uuid:0496b672-031a-4cb9-b073-2ef42ec231aa>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Making a singular matrix non-singular
Someone asked me on Twitter
Is there a trick to make an singular (non-invertible) matrix invertible?
The only response I could think of in less than 140 characters was
Depends on what you’re trying to accomplish.
Here I’ll give a longer explanation.
So, can you change a singular matrix just a little to make it non-singular? Yes, and in fact you can change it by as little as you’d like. Singular square matrices are an infinitely thin subset of
the space of all square matrices, and any tiny random perturbation is almost certain to take you out of that thin subset. (“Almost certain” is actually a technical term here, meaning with probability
Adding a tiny bit of noise to a singular matrix makes it non-singular. But why did you want a non-singular matrix? Presumably to solve some system of equations. A little noise makes the system go
from theoretically impossible to solve to theoretically possible to solve, but that may not be very useful in practice. It will let you compute a solution, but that solution may be meaningless.
A little noise makes the determinant go from zero to near zero. But here is an important rule in numerical computation:
If something is impossible at 0, it will be hard near zero.
“Hard” could mean difficult to do or it could mean the solution is ill-behaved.
Instead of asking whether a matrix is invertible, let’s ask how easy it is to invert. That is, let’s change a yes/no question into a question with a continuum of answers. The condition number of a
matrix measures how easy the matrix is to invert. A matrix that is easy to invert has a small condition number. The harder it is to invert a matrix, the larger its condition number. A singular matrix
is infinitely hard to invert, and so it has infinite condition number. A small perturbation of a singular matrix is non-singular, but the condition number will be large.
So what exactly is a condition number? And what do I mean by saying a matrix is “hard” to invert?
The condition number of a matrix is the norm of the matrix times the norm of its inverse. Defining matrix norms would take too long to go into here, but intuitively it is a way of sizing up a matrix.
Note that you take the norm of both the matrix and its inverse. A small change to a matrix might not change its norm much, but it might change the norm of its inverse a great deal. If that is the
case, the matrix is called ill-conditioned because it has a large condition number.
When I say a matrix is hard to invert, I mean it is hard to do accurately. You can use the same number of steps to invert a matrix by Gaussian elimination whether the matrix has a small condition
number or a large condition number. But if the matrix has a large condition number, the result may be very inaccurate. If the condition number is large enough, the solution may be so inaccurate as to
be completely useless.
You can think of condition number as an error multiplier. When you solve the linear system Ax = b, you might expect that a little error in b would result in only a little error in x. And that’s true
if A is well-conditioned. But a small change in b could result in a large change in x if A is ill-conditioned. If the condition number of A is 1000, for example, the error in x will be 1000 times
greater than the error in b. So you would expect x to have three fewer significant figures than b.
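To make the error-multiplier idea concrete, here is a small NumPy sketch using the 8×8 Hilbert matrix, a classic ill-conditioned example (illustrative only; the exact amplification depends on the random perturbation):

```python
import numpy as np

# The 8x8 Hilbert matrix: nearly singular, condition number ~1e10.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
kappa = np.linalg.cond(A)

rng = np.random.default_rng(0)
x_true = rng.standard_normal(n)
b = A @ x_true

# Perturb b by a relative ~1e-8 and solve; the relative error in x
# is amplified by up to the condition number.
db = 1e-8 * np.linalg.norm(b) * rng.standard_normal(n)
x = np.linalg.solve(A, b + db)

rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(kappa, rel_err_x / rel_err_b)
```

The ratio printed is the observed error amplification, typically many orders of magnitude for this matrix.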
Note that condition number isn’t limited to loss of precision due to floating point arithmetic. If you have some error in your input b, say due to measurement error, then you will have some
corresponding error in your solution to Ax = b, even if you solve the system Ax = b exactly. If the matrix A is ill-conditioned, any error in b (rounding error, measurement error, statistical
uncertainty, etc.) will result in a correspondingly much larger error in x.
So back to the original question: can you make a singular matrix non-singular? Sure. In fact, it’s hard not to. Breathe on a singular matrix and it becomes non-singular. But the resulting
non-singular matrix may be useless. The perturbed matrix may be theoretically invertible but practically non-invertible.
So here’s a more refined question: can you change a singular matrix into a useful non-singular matrix? Yes you can, sometimes, but the answer depends very much on the problem you’re trying to solve.
This is generally known as regularization, though it goes by different names in different contexts. You use knowledge of your domain beyond the specific data at hand to guide how you change your
Related posts:
Don’t invert that matrix
Example of not inverting a matrix
Ten surprises in numerical linear algebra
Another way to look at the condition number: it is the ratio of the largest-magnitude eigenvalue of the matrix to its smallest-magnitude eigenvalue. For a singular matrix, the latter is zero: this
means that a perturbation that will increase the smallest-magnitude eigenvalue will reduce the condition number, yielding an equation system that’s easier to solve, per John’s discussion.
How can you change the smallest-magnitude eigenvalue without changing the other eigenvalues too much? Well, let us assume that the magnitude of the eigenvalues of the matrix cover a range of multiple
orders of magnitude; in other terms, the largest-magnitude eigenvalue is a few orders of magnitude larger than zero. Applying a diagonal perturbation corresponding to lambda times the identity will
increase ALL eigenvalues by lambda. If the largest-magnitude eigenvalue is as large as assumed, then that eigenvalue is not significantly affected by the change; on the other hand, the
smallest-magnitude eigenvalue has become lambda, so the condition number of the perturbed matrix is (largest eigenvalue / lambda). Now, the effect of this perturbation depends on the distribution of
eigenvalues: if the majority of the eigenvalues are similar to the largest, the system is not much disturbed. On the other hand, if most eigenvalues are close to zero, the system is not quite what it
was anymore.
In the context of symmetrical semi-definite matrices (decomposable as the product of the transposition of some matrix to itself), this multiple-identity perturbation is called Tikhonov regularization
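A tiny illustration of the comment's point (my own example, not the commenter's): for a symmetric singular matrix with b in its range, the regularized solve with A + lambda*I tends to the minimum-norm solution as lambda shrinks.

```python
import numpy as np

# A rank-one (singular) symmetric matrix and a right-hand side that
# lies in its range, so the system is solvable but A is not invertible.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([1.0, 2.0])

for lam in (1e-1, 1e-3, 1e-6):
    x = np.linalg.solve(A + lam * np.eye(2), b)
    print(lam, x)  # tends toward the minimum-norm solution [0.2, 0.4]
```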
Interesting. So if you could (let’s say with a theorem) perturb a singular matrix in such a way as to minimally perturb the norm of the inverse — would that be “good” in some way? Is that in fact
Benoit, great comment.
Re: human mathematics
Your question actually raises an unbounded problem :-). The “minimal” perturbation of the matrix is zero. John’s point is that there are many ways to perturb a matrix to take it from singular to
non-singular, but this is often not enough. You have to make it “sufficiently non-singular” to compute, at least, a product of its inverse to some right-hand-side vector, otherwise this computation
becomes imprecise.
In other words, if you ask what is $A^{-1} y$ for singular $A$, it makes as much sense as asking what 5/0 is. Now, we can agree that we can nudge that zero by as little as you want to get a question
that makes sense: 5/x, for non-zero x. The answer to that question, then, depends on x, and you can examine the variation of the answer as you make x ever smaller. The same goes when dealing with a
singular matrix: how does the result of $(A+P)^{-1} y$ vary as you make $P$ ever smaller? In terms of my eigenvalue analysis, how does $(A+\lambda I)^{-1} y$ vary as you make $\lambda$ ever smaller?
Can you extrapolate something useful? That as close as you can get to answering the original question about the singular matrix.
Great post.
Doubly Special Relativity [Archive] - IceInSpace
22-03-2011, 05:17 PM
Take a look at this addition to Special Relativity:
Doubly special relativity (http://www.physorg.com/news/2011-03-doubly-special-relativity.html)
Here we go … those damned mathematicians at it again …. :) ...
Interesting article. The idea that a Planck length can be considered an invariable constant independent of an observer’s frame of reference, whilst light speed isn't, is one of those counterintuitive
thingys .. | {"url":"http://www.iceinspace.com.au/forum/archive/index.php/t-73440.html","timestamp":"2014-04-16T19:01:13Z","content_type":null,"content_length":"11112","record_id":"<urn:uuid:3ebd3227-13f6-4e37-a508-2ade17b7e6ac>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: CROSSTALK COMPENSATION IN ANALYSIS OF ENERGY STORAGE DEVICES
Estimating impedance of energy storage devices includes generating input signals at various frequencies with a frequency step factor therebetween. An excitation time record (ETR) is generated to
include a summation of the input signals and a deviation matrix of coefficients is generated relative to the excitation time record to determine crosstalk between the input signals. An energy storage
device is stimulated with the ETR and simultaneously a response time record (RTR) is captured that is indicative of a response of the energy storage device to the ETR. The deviation matrix is applied
to the RTR to determine an in-phase component and a quadrature component of an impedance of the energy storage device at each of the different frequencies with the crosstalk between the input signals
substantially removed. This approach enables rapid impedance spectra measurements that can be completed within one period of the lowest frequency or less.
A method of estimating an impedance of an energy storage device, comprising: generating two or more input signals at different frequencies with a frequency step factor therebetween; generating an
excitation time record comprising a summation of the two or more input signals; generating a deviation matrix of coefficients relative to parameters of the excitation time record to determine
crosstalk between the two or more input signals; stimulating an energy storage device with the excitation time record and substantially simultaneously capturing a response time record indicative of a
response of the energy storage device to the excitation time record; applying the deviation matrix of coefficients to the response time record to determine response signals including an in-phase
component and a quadrature component of the impedance of the energy storage device at each of the different frequencies with the crosstalk between the response signals substantially removed.
The method of claim 1, wherein generating two or more input signals comprises generating sinusoidal signals.
The method of claim 1, wherein generating two or more input signals comprises generating non-harmonic signals.
The method of claim 1, wherein generating two or more input signals comprises generating harmonic signals.
The method of claim 1, further comprising generating a magnitude and a phase for each of the different frequencies to determine a frequency response of the energy storage device.
The method of claim 1, wherein generating the deviation matrix of coefficients further comprises performing a time domain analysis by developing a least squares solution for each frequency in the
summation of the two or more input signals with a pseudoinverse matrix for N time steps wherein there are N+1 samples in the response time record.
The method of claim 1, wherein generating the deviation matrix of coefficients further comprises performing a frequency domain analysis by: generating a synchronously detected response time record
for each frequency in the summation of the two or more input signals; generating a discrete Fourier transform of the response time record for each frequency in the summation of the two or more input
signals; convolving the response time record in the frequency domain for each frequency in the summation of the two or more input signals; normalizing the discrete Fourier transform of the convolved
response time record to one-half the number of samples in the response time record; evaluating the normalized discrete Fourier transform of the convolved response time record at zero frequency; and
wherein generating the deviation matrix of coefficients relative to the response time record further comprises combining the normalized discrete Fourier transform of the convolved response and the
synchronously detected response time record for each frequency in the summation of the two or more input signals to form the deviation matrix of coefficients.
The method of claim 1, further comprising conditioning the excitation time record to be compatible with measurement conditions of the energy storage device.
The method of claim 1, wherein the two or more input signals are a series of frequencies comprising a lowest frequency and each additional frequency in the series of frequencies is the frequency step
factor higher than a previous frequency in the series of frequencies.
The method of claim 9, wherein generating the excitation time record comprises generating the excitation time record to have a duration of about a period of the lowest frequency.
The method of claim 9, wherein generating the excitation time record comprises generating the excitation time record to have a duration of about one-half the period of the lowest frequency.
The method of claim 9, wherein generating the excitation time record comprises generating the excitation time record to have a duration less than the period of the lowest frequency.
The method of claim 1, further comprising augmenting the deviation matrix of coefficients with noise information within the energy storage device, a system coupled to the energy storage device, or a
combination thereof, and wherein applying the deviation matrix of coefficients to the response time record also substantially removes the noise information from the response signals.
An apparatus for characterizing an energy storage device, comprising: a signal generator configured for stimulating an energy storage device with a stimulus signal generated responsive to an
excitation time record; a response measurement device configured for measuring a response signal substantially simultaneously with when the stimulus signal is applied to the energy storage device,
the response signal indicative of a response of the energy storage device to the stimulus signal; an analyzer operably coupled to the stimulus signal and the response signal, the analyzer configured
for: generating two or more input signals at different frequencies with a frequency step factor therebetween; generating the excitation time record comprising a summation of the two or more input
signals; generating a deviation matrix of coefficients relative to parameters of the excitation time record to determine a crosstalk between the two or more input signals; sampling the response
signal to generate a response time record; applying the deviation matrix of coefficients to the response time record to determine response signals including an in-phase component and a quadrature
component of the impedance of the energy storage device at each of the different frequencies with the crosstalk between the response signals substantially removed.
The apparatus of claim 14, wherein the analyzer is further configured for generating the two or more input signals as sinusoidal signals.
The apparatus of claim 14, wherein the analyzer is further configured for generating the two or more input signals as non-harmonic signals.
The apparatus of claim 14, wherein the analyzer is further configured for generating the two or more input signals as harmonic signals.
The apparatus of claim 14, wherein the analyzer is further configured for generating a magnitude and a phase for each of the different frequencies to determine a frequency response of the energy
storage device.
The apparatus of claim 14, wherein the analyzer is further configured for generating the deviation matrix of coefficients by performing a time domain analysis to develop a least squares solution for
equations for each frequency in the summation of the two or more input signals with a pseudoinverse matrix for N time steps wherein there are N+1 samples in the response time record.
The apparatus of claim 14, wherein the analyzer is further configured for generating the deviation matrix of coefficients by performing a frequency domain analysis, comprising: generating a
synchronously detected response time record for each frequency in the summation of the two or more input signals; generating a discrete Fourier transform of the response time record for each
frequency in the summation of the two or more input signals; convolving the response time record in the frequency domain for each frequency in the summation of the two or more input signals;
normalizing the discrete Fourier transform of the convolved response time record to one-half the number of samples in the response time record; evaluating the normalized discrete Fourier transform of
the convolved response time record at zero frequency; and wherein generating the deviation matrix of coefficients relative to the response time record further comprises combining the normalized
discrete Fourier transform of the convolved response and the synchronously detected response time record for each frequency in the summation of the two or more input signals to form the deviation
matrix of coefficients.
The apparatus of claim 14, wherein the analyzer is further configured for conditioning the excitation time record to be compatible with measurement conditions of the energy storage device.
The apparatus of claim 14, wherein the two or more input signals are a series of frequencies comprising a lowest frequency and each additional frequency in the series of frequencies is the frequency
step factor higher than a previous frequency in the series of frequencies.
The apparatus of claim 22, wherein the analyzer is further configured for generating the excitation time record by generating the excitation time record to have a duration of about a period of the
lowest frequency.
The apparatus of claim 22, wherein the analyzer is further configured for generating the excitation time record by generating the excitation time record to have a duration less than the period of the
lowest frequency.
The apparatus of claim 14, wherein the analyzer is further configured for augmenting the deviation matrix of coefficients with noise information within the energy storage device, a system coupled to
the energy storage device, or a combination thereof, and wherein applying the deviation matrix of coefficients to the response time record also substantially removes the noise information from the
response signals.
Computer-readable media including computer-executable instructions, which when executed on one or more computers, cause the computers to: generate two or more input signals at different frequencies
with a frequency step factor therebetween; generate an excitation time record comprising a summation of the two or more input signals; generate a deviation matrix of coefficients relative to
parameters of the excitation time record to determine a crosstalk between the two or more input signals; stimulate an energy storage device with the excitation time record and substantially
simultaneously capture a response time record indicative of a response of the energy storage device to the excitation time record; apply the deviation matrix of coefficients to the response time
record to determine response signals including an in-phase component of an impedance of the energy storage device and a quadrature component of the impedance of the energy storage device at each of
the different frequencies with the crosstalk between the response signals substantially removed.
The computer-readable media of claim 26, wherein the computer instructions are further configured to generate two or more input signals as non-harmonic sinusoidal signals.
The computer-readable media of claim 26, wherein the computer instructions are further configured to cause the computer to generate a magnitude and a phase for each of the different frequencies to
determine a frequency response of the energy storage device.
The computer-readable media of claim 26, wherein the computer instructions configured to cause the computer to generate the deviation matrix of coefficients are further configured to cause the computer to perform a time domain analysis by developing a least squares solution for each frequency in the summation of the two or more input signals with a pseudoinverse matrix for N time steps wherein there are N+1 samples in the response time record.
The computer-readable media of claim 26, wherein the computer instructions configured to cause the computer to generate the deviation matrix of coefficients are further configured to cause the computer to perform a frequency domain analysis by: generating a synchronously detected response time record for each frequency in the summation of the two or more input signals; generating
a discrete Fourier transform of the response time record for each frequency in the summation of the two or more input signals; convolving the response time record in the frequency domain for each
frequency in the summation of the two or more input signals; normalizing the discrete Fourier transform of the convolved response time record to one-half the number of samples in the response time
record; evaluating the normalized discrete Fourier transform of the convolved response time record at zero frequency; and wherein generating the deviation matrix of coefficients relative to the
response time record further comprises combining the normalized discrete Fourier transform of the convolved response and the synchronously detected response time record for each frequency in the
summation of the two or more input signals to form the deviation matrix of coefficients.
The computer-readable media of claim 26, wherein the two or more input signals comprise input signals at a series of frequencies comprising a lowest frequency, and each additional frequency in the
series of frequencies is the frequency step factor higher than the previous frequency in the series of frequencies.
The computer-readable media of claim 31, wherein the computer instructions configured to generate the excitation time record are further configured to generate the excitation time record
to have a duration of about a period of the lowest frequency.
The computer-readable media of claim 31, wherein the computer instructions configured to generate the excitation time record are further configured to generate the excitation time record
to have a duration less than the period of the lowest frequency.
The computer-readable media of claim 26, wherein the computer instructions are further configured to augment the deviation matrix of coefficients with noise information within the energy storage
device, a system coupled to the energy storage device, or a combination thereof, and wherein applying the deviation matrix of coefficients to the response time record also substantially removes the
noise information from the response signals.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/330,733, filed May 3, 2010, the disclosure of which is hereby incorporated herein in its entirety by this reference.
TECHNICAL FIELD [0003]
Embodiments of the present disclosure relate generally to determining parameters of energy storage devices and, more specifically, to determining impedance and output characteristics of energy
storage devices.
BACKGROUND [0004]
All patents, patent applications, and publications referred to or cited herein, are incorporated by reference in their entirety.
Electrical energy storage devices (e.g., batteries, fuel cells, ultracapacitors, etc.) have become important subsystems for many military, space and commercial applications. Consequently, in-situ
diagnostics for accurate state-of-health estimations have also gained significant interest. For many applications, however, it is insufficient to monitor simple parameters such as voltage, current,
and temperature to gauge the remaining capacity of the energy storage device. Knowledge of impedance and power capability also may be necessary for an accurate state-of-health estimation. An
important component of in-situ impedance monitoring is rapid measurements that minimally perturb the energy storage device.
Advanced techniques for real-time assessment of the impedance spectra for energy storage devices have been proposed. Many of these techniques can be implemented in an embedded device and periodically
query the energy storage device to determine its state-of-health. For example, it has been shown that a shift in the impedance spectra of battery technologies strongly correlates to the corresponding
pulse resistance and power capability.
One technique, referred to herein as Impedance Noise Identification (INI), disclosed in U.S. Pat. No. 7,675,293 to Christophersen et al., uses a random signal excitation covering a frequency range of
interest and monitors a response. The input and response signals may be cross-correlated, normalized by an auto-correlated input signal, and then averaged and converted to the frequency domain
through Fast Fourier Transforms. INI can be implemented on an embedded system and yield high-resolution data.
Another technique, referred to herein as Compensated Synchronous Detection (CSD), disclosed in U.S. Pat. No. 7,395,163 to Morrison et al., uses a sum-of-sines (SOS) input signal that adequately
covers a frequency range of interest. The magnitude and phase at each frequency of the response signal is initially determined through synchronous detection. However, these data may be tainted by
cross-talk error, so the response signal is reassembled with all the frequencies except the one of interest, and then subtracted from the original response signal and synchronously detected again.
Generally, CSD may be more rapid than INI, but it may need three periods of the lowest frequency, and trades off resolution for speed of measurement.
Yet another technique, referred to herein as Fast Summation Transformation (FST), disclosed in U.S. patent application Ser. No. 12/217,013 to Morrison et al., also uses an SOS input signal that
covers a frequency range of interest. However, to eliminate the cross-talk error, the frequency is increased in octave harmonic steps. Thus, no compensation is required and the response signal can
simply be rectified relative to the sine and the cosine to establish the impedance spectra. Some attributes of FST are that it only requires a time record of acquired data covering one period of the
lowest frequency, and the data processing algorithm is very simple. However, with FST the resolution in frequencies cannot be any finer than octave steps.
The inventors have appreciated that a need remains for an approach to analyzing energy storage devices that requires a time record duration of only one period of the lowest frequency or less, but can
perform with a higher frequency resolution and more flexible frequency selection than octave steps and harmonic frequency steps.
BRIEF SUMMARY [0011]
Embodiments of the present disclosure include apparatuses, methods, and computer-readable media for analyzing energy storage devices that use a sum-of-sines stimulus signal and require a time record
duration of one period or less of the lowest frequency, but can perform with a higher frequency resolution and more flexible frequency selection than octave steps and harmonic frequency steps.
In accordance with an embodiment of the present disclosure, a method of estimating an impedance of an energy storage device includes generating two or more input signals at different frequencies with
a frequency step factor therebetween. The method also includes generating an excitation time record comprising a summation of the two or more input signals and generating a deviation matrix of
coefficients relative to parameters of the excitation time record to determine a cross-talk between the two or more input signals. An energy storage device is stimulated with the excitation time
record and substantially simultaneously a response time record is captured that is indicative of a response of the energy storage device to the excitation time record. The deviation matrix of
coefficients is applied to the response time record to determine response signals including an in-phase component and a quadrature component of the impedance of the energy storage device at each of
the different frequencies with the cross-talk between the response signals substantially removed.
In accordance with another embodiment of the present disclosure, an apparatus for characterizing an energy storage device includes a signal generator configured for stimulating an energy storage
device with a stimulus signal generated responsive to an excitation time record. A response measurement device is configured for measuring a response signal substantially simultaneously when the
stimulus signal is applied to the energy storage device, wherein the response signal is indicative of a response of the energy storage device to the stimulus signal. An analyzer operably coupled to
the stimulus signal and the response signal is configured for generating two or more input signals at different frequencies with a frequency step factor therebetween. The analyzer is also configured for
generating the excitation time record comprising a summation of the two or more input signals and generating a deviation matrix of coefficients relative to parameters of the excitation time record to
determine a cross-talk between the two or more input signals. The analyzer is also configured for sampling the response signal to generate a response time record and applying the deviation matrix of
coefficients to the response time record to determine response signals including an in-phase component and a quadrature component of the impedance of the energy storage device at each of the
different frequencies with the cross-talk between response signals substantially removed.
In accordance with yet another embodiment of the present disclosure, computer readable media include computer-executable instructions, which when executed on one or more computers, cause the
computers to generate two or more input signals at different frequencies with a frequency step factor therebetween and generate an excitation time record comprising a summation of the two or more
input signals. The instructions also cause the computers to generate a deviation matrix of coefficients relative to parameters of the excitation time record to determine a cross-talk between the two
or more input signals and stimulate an energy storage device with the excitation time record and substantially simultaneously capture a response time record indicative of a response of the energy
storage device to the excitation time record. The instructions also cause the computers to apply the deviation matrix of coefficients to the response time record to determine response signals
including an in-phase component and a quadrature component of the impedance of the energy storage device at each of the different frequencies with the cross-talk between the response signals
substantially removed.
BRIEF DESCRIPTION OF THE DRAWINGS [0015]
FIG. 1 illustrates an ideal frequency spectrum for a Sum-of-Sines (SOS) stimulus signal;
[0016] FIG. 2 illustrates an actual frequency spectrum for an SOS stimulus signal with a finite time record;
[0017] FIG. 3 is a simplified block diagram of a system for in-situ measurements of energy storage devices according to one or more embodiments of the present disclosure;
[0018] FIG. 4 is a simplified flow diagram of a crosstalk compensation (CTC) method according to one or more embodiments of the present disclosure;
[0019] FIG. 5 illustrates a lumped parameter model for use in validating crosstalk compensation (CTC) algorithms;
FIGS. 6A and 6B illustrate simulation comparisons between synchronous detection, time domain CTC simulation, and an ideal impedance simulation for a magnitude response and a phase response, respectively;
FIGS. 7A and 7B illustrate simulation comparisons of magnitude response and phase response, respectively, for time domain CTC simulation and an ideal impedance simulation using 48 frequencies;
FIGS. 8A and 8B illustrate simulation comparisons of magnitude response and phase response, respectively, for time domain CTC simulation and an ideal impedance simulation using 49 frequencies; and
FIGS. 9A and 9B illustrate half-period simulations for the time domain CTC, frequency domain CTC, and an ideal impedance simulation for a magnitude response and a phase response, respectively.
DETAILED DESCRIPTION [0024]
In the following description, reference is made to the accompanying drawings which form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the disclosure may
be practiced. The embodiments are intended to describe aspects of the disclosure in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized
and changes may be made without departing from the scope of the disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of the present invention is
defined only by the appended claims.
Furthermore, specific implementations shown and described are only examples and should not be construed as the only way to implement the present disclosure unless specified otherwise herein. It will
be readily apparent to one of ordinary skill in the art that the various embodiments of the present disclosure may be practiced by numerous other partitioning solutions.
In the following description, elements, circuits, and functions may be shown in block diagram form in order not to obscure the present disclosure in unnecessary detail. Conversely, specific
implementations shown and described are exemplary only and should not be construed as the only way to implement the present disclosure unless specified otherwise herein. Additionally, block
definitions and partitioning of logic between various blocks is exemplary of a specific implementation. It will be readily apparent to one of ordinary skill in the art that the present disclosure may
be practiced by numerous other partitioning solutions. For the most part, details concerning timing considerations and the like have been omitted where such details are not necessary to obtain a
complete understanding of the present disclosure and are within the abilities of persons of ordinary skill in the relevant art.
Those of ordinary skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions,
commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or
particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal for clarity of presentation and description. It will be understood by a
person of ordinary skill in the art that the signal may represent a bus of signals, wherein the bus may have a variety of bit widths and the present disclosure may be implemented on any number of
data signals including a single data signal.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a
special purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete
gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the
alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a
combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Also, it is noted that the embodiments may be described in terms of a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may
describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be
re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. Furthermore, the methods disclosed herein may be implemented in hardware, software, or
both. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on computer readable media. Computer readable media includes both computer storage
media and communication media including any medium that facilitates transfer of a computer program from one place to another.
It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not limit the quantity or order of those elements, unless such
limitation is explicitly stated. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to
first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. In addition, unless stated otherwise a
set of elements may comprise one or more elements.
Elements described herein may include multiple instances of the same element. These elements may be generically indicated by a numerical designator (e.g. 110) and specifically indicated by the
numerical indicator followed by an alphabetic designator (e.g., 110A) or a numeric indicator preceded by a "dash" (e.g., 110-1). For ease of following the description, for the most part element
number indicators begin with the number of the drawing on which the elements are introduced or most fully discussed. Thus, for example, element identifiers on a FIG. 1 will be mostly in the numerical
format 1xx and elements on a FIG. 4 will be mostly in the numerical format 4xx.
Headings are included herein to aid in locating certain sections of detailed description. These headings should not be considered to limit the scope of the concepts described under any specific
heading. Furthermore, concepts described in any specific heading are generally applicable in other sections throughout the entire specification.
Embodiments of the present disclosure include apparatuses, methods, and computer-readable media for analyzing energy storage devices that use a sum-of-sines stimulus signal and require a time record
duration of one period or less of the lowest frequency, but can perform with a higher frequency resolution and more flexible frequency selection than octave steps and harmonic frequency steps.
Embodiments of the present disclosure use a methodology, referred to herein as CrossTalk Compensation (CTC), for obtaining high-resolution impedance spectra for energy storage devices (e.g.,
batteries) within one period or less of the lowest frequency of a Sum-Of-Sines (SOS) stimulus signal. Unlike the FST method discussed above, embodiments of the present disclosure are not limited to
octave harmonic increases in frequency or even generally to harmonic frequencies. Instead, the SOS signal can include sinusoids that increase from the lowest frequency of interest by any step factor
greater than one, thereby significantly increasing the measurement resolution.
Conceptually, a CTC method attempts to remove the restriction to harmonic frequencies found in other rapid techniques, which allows the ability to pick and choose which frequencies a test is going to
measure. For example, if a researcher wanted to obtain a general scattering of data throughout the frequency spectrum for a baseline, and also wanted higher-resolution data around a particular
characteristic, CTC would allow these data to be acquired in one period of the lowest frequency or less. One such characteristic that might interest a researcher is the impedance trough used to track
the power fade and state of health of a battery.
FIG. 1 illustrates an ideal frequency spectrum for a Sum-of-Sines (SOS) stimulus signal. The ideal SOS stimulus signal of infinite duration transforms to the frequency domain into a picket fence of
impulses at each present frequency as seen in FIG. 1.
[0037] FIG. 2 illustrates an actual frequency spectrum for an SOS stimulus signal for a finite time record. Because a time record for an SOS stimulus signal is not infinitely long, the frequency spectrum becomes a
sum of overlapping sinc functions such that, at each frequency of interest, the overall response is made up of the response of interest plus the tails of all of the other non-harmonic frequencies
present in the signal. Thus, as referred to herein, crosstalk is the interference of non-harmonic frequencies between sine waves in an SOS stimulus signal.
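The overlapping sinc tails described above can be seen numerically. The following is an illustrative sketch (not part of the disclosure): over a finite record, a sine with an integer number of cycles contributes essentially nothing to neighboring DFT bins, while a non-harmonic sine spills energy into them.

```python
# Illustrative crosstalk demo (assumed setup, not from the patent text):
# compare spectral leakage of a harmonic vs. a non-harmonic sine over a
# finite one-second time record.
import numpy as np

N = 1000
t = np.arange(N) / N                      # one-second record, N samples
harmonic = np.sin(2 * np.pi * 1.0 * t)    # exactly 1 cycle in the record
nonharm = np.sin(2 * np.pi * 1.5 * t)     # 1.5 cycles: non-harmonic

H = np.abs(np.fft.rfft(harmonic))
X = np.abs(np.fft.rfft(nonharm))

print(H[2] / H[1])   # essentially zero: no crosstalk at the neighboring bin
print(X[1])          # large: the sinc tails spill into nearby bins
```

The harmonic tone has a single clean spectral line, while the non-harmonic tone contaminates every other bin, which is exactly the interference a CTC method must remove.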
In techniques such as CSD and FST discussed above, frequencies are chosen that are harmonically related to each other so that at each frequency of interest, all other sinc functions have a zero
crossing in the frequency domain. Because the other frequencies present have zero magnitude, only the response of interest remains. In other words, harmonic frequencies have zero crosstalk.
The need to select harmonics also causes a low resolution of frequencies in methods such as CSD and FST. For example, given a range of 0.1 Hz to 2000 Hz, only fifteen octave harmonic frequencies can
be used, with the highest frequency being 1638.4 Hz. Therefore, if a method is to fit more than fifteen frequencies in that range, it will need to deal with the effects of crosstalk. A CTC method
allows a choice of any set of frequencies because it will remove the effect of crosstalk from the results. This removal will yield a much higher resolution in frequency without sacrificing the speed
of measurement.
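The fifteen-frequency limit mentioned above follows directly from doubling the lowest frequency until the upper limit is exceeded; a minimal sketch (variable names are illustrative):

```python
# Count the octave-harmonic frequencies that fit in a 0.1 Hz - 2000 Hz range,
# as discussed above. Each frequency is double the previous one.
f_low, f_high = 0.1, 2000.0

freqs = []
f = f_low
while f <= f_high:
    freqs.append(f)
    f *= 2.0  # octave step

print(len(freqs))           # 15 octave harmonics fit in the range
print(round(freqs[-1], 1))  # 1638.4 Hz is the highest usable frequency
```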
A general system for use in employing CTC methods is first discussed. This is followed by a general description of the CTC method and development of the excitation time record. The CTC analysis of
the response time record can be implemented in either the time domain or frequency domain. Mathematical descriptions using both the time and frequency domains are also provided herein.
[0041] FIG. 3 is a simplified block diagram of a system for in-situ measurements of energy storage devices according to one or more embodiments of the present disclosure. FIG. 3 illustrates an
impedance analysis system 300 for characterizing an energy storage device 360 according to the present invention. The impedance analysis system 300 includes a signal vector assembler
310 for generating a signal vector 315 and may also include a smoothing filter 312 for smoothing and modifying the signal vector 315. The signal vector 315 is an SOS signal as described below. For
application to an energy storage device 360, a signal generator 320 converts the signal vector 315 to a stimulus signal 322, which may be a current source signal or a voltage source signal suitable
for application to a terminal of the energy storage device 360. One terminal of the energy storage device 360 may be connected to the stimulus signal 322 and the other terminal may be coupled to a ground.
The energy storage device 360 may be coupled to operational circuitry 365 via connection 367. The operational circuitry 365 represents any loads configured to be driven by the energy storage device
360 that may discharge the energy storage device 360, as well as any charging circuitry for restoring charge to the energy storage device 360. The impedance analysis system 300 may be configured for
in-situ operation. As such, the stimulus signal 322 may be coupled to the energy storage device for performing impedance analysis during normal operation of the operational circuitry 365.
In some systems, according to the present disclosure, the signal vector assembler 310, filter 312, and an analyzer 390 may be discrete elements targeted at their specific function. However, in other
systems according to the present disclosure, these functions may be included in a computing system 305. Thus, the computing system 305 may include software for performing the functions of assembling
signal vectors, digital filtering, and analyzing various input signals relative to the signal vector to determine impedance of the energy storage device 360. In still other systems, some of these
functions may be performed with dedicated hardware and others may be performed with software.
In addition, the computing system 305 may include a display 395 for presenting control selection operations and data in a format useful for interpreting impedance characteristics of the energy
storage device 360. The display 395 may also be used for presenting more general battery characteristics of interest, such as, for example, state-of-charge or state-of-health. The computing system
305 may also include storage 398 for storing sampled information from any of the processes described below as well as for containing computing instructions for execution by the analyzer 390 to carry
out the processes described below.
The signal vector assembler 310 may be any suitable apparatus or software for generating the signal vector 315 with an average amplitude substantially near zero. The signal vector assembler 310 may
be configured as digital logic or as computer instructions for execution on the computing system 305. The smoothing filter 312 may be a bandpass filter or a low-pass filter used for smoothing the
signal vector 315 by removing high frequencies, low frequencies, or a combination thereof to present an analog signal more suitable for application to the energy storage device 360. The smoothing
filter 312 may include a digital filter configured as digital logic or as computer instruction for execution on the computing system 305. The smoothing filter 312 also may include an analog filter
configured as analog elements. Finally, the smoothing filter 312 may include a digital filter and an analog filter in combination.
The actual current at the energy storage device 360 as a result of the stimulus signal 322 may be determined by a current measurement device 340 coupled to the stimulus signal 322 and configured to
generate a measured current response 345.
The actual voltage at the energy storage device 360 as a result of the stimulus signal 322 may be determined by a voltage measurement device 350 configured to generate a measured voltage
response 355.
The measured voltage response 355 and the measured current response 345 may be operably coupled to the analyzer 390. The analyzer 390 is configured for periodically sampling signals that may be
analog input signals and converting them to digital data to create records of time-varying response signals. The time-varying response signals may be used by the analyzer 390 for determining
impedance characteristics of the energy storage device, as is discussed more fully below.
Further details of an impedance analysis system 300 that may be useful with embodiments of the present disclosure may be found in U.S. patent application (TBD), to Christophersen et al., entitled
"IN-SITU REAL-TIME ENERGY STORAGE DEVICE IMPEDANCE IDENTIFICATION," which is filed concurrently with the present application and for which the contents are incorporated herein in its entirety.
[0050] FIG. 4 is a simplified flow diagram of a CTC method 400 according to one or more embodiments of the present disclosure. In operation block 402 a number of test frequencies is selected based on
a lowest frequency of interest and a step factor. The step factor is chosen to be greater than one and is generally chosen to be a non-integer to create a series of frequencies that are non-harmonic.
Selecting an integer step factor to create harmonic frequencies is possible, but much of the CTC method would be redundant since harmonic frequencies generate no crosstalk.
However, CTC with harmonic frequencies may still be useful when the operational circuitry 365 contains noise that is injected into the ESD 360 through the connection 367. In such an embodiment,
noise that may be present at the ESD 360 may be sampled by the impedance analysis system 300, estimated analytically, or a combination thereof. The noise information may then be incorporated into the
deviation matrix of coefficients, which is explained below. As a result, when a response time record (also explained below) is captured, application of the deviation matrix of coefficients to the
response time record will remove the previously determined noise components to yield noise corrected in-phase and quadrature components of the impedance of the ESD 360.
With the lowest frequency defined, operation block 404 indicates that each additional frequency is defined as the step factor higher than the next lower frequency. For example, with a lowest
frequency of 1 Hz, a step factor of 1.5, and a total of five frequencies in the series, the series of frequencies would be 1, 1.5, 2.25, 3.375, and 5.0625 Hz.
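Under the multiplicative reading of the step factor assumed here (each frequency is S times the previous one), the series can be generated as follows; the names f_low, S, and M are illustrative, not from the disclosure:

```python
# Build the SOS frequency series from a lowest frequency and a multiplicative
# step factor S > 1 (typically a non-integer, so the series is non-harmonic).
f_low = 1.0   # lowest frequency of interest, Hz
S = 1.5       # step factor, greater than one
M = 5         # total number of frequencies

freqs = [f_low * S**i for i in range(M)]
print(freqs)  # [1.0, 1.5, 2.25, 3.375, 5.0625]
```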
Operation block 406 indicates that sinusoid waveforms of the frequencies in the series are summed to obtain an Excitation Time Record (ETR) that includes a sum-of-sines of the series of frequencies
with a duration of one period of the lowest frequency (e.g., a one second duration given a low frequency of 1 Hz in the example above).
Operation block 408 indicates that the SOS is evaluated to generate a deviation matrix of coefficients relative to the ETR and correlated to the crosstalk of each frequency in the series to each
other frequency in the series. Thus, this deviation matrix is indicative of how crosstalk from any one frequency in the series affects any other frequency in the series.
Operation block 410 indicates that the ETR may be conditioned (e.g., filtered) as described above to create a signal suitable for the type of system and energy storage device in use.
Operation block 412 indicates that the energy storage device is stimulated with the ETR and a Response Time Record (RTR) is captured at substantially the same time as the ETR is applied. The sampling
rate should be at a high enough frequency so as to avoid aliasing errors (e.g., at least twice the highest frequency). Moreover, it may be advisable to use a sampling rate that is at least three
times the highest frequency in the SOS signal.
Operation block 414 indicates that the deviation matrix of coefficients is applied to the RTR to determine in-phase and quadrature components of impedance of the ESD at each frequency of the series
of frequencies. While not illustrated, these in-phase and quadrature components can be further analyzed to arrive at magnitude and phase for each frequency of the series.
Significantly, the deviation matrix does not depend on the response of the energy storage device. As a result, the deviation matrix may be determined before, during, or after the time period when the
ETR is applied to the energy storage device. Therefore, while computing the deviation matrix may be computationally intensive (depending on the step factor and number of frequencies), it can be done
out of the time line of stimulating the energy storage device and acquiring the response of the energy storage device.
1.0 CTC Algorithm Description
With reference to FIG. 1, an excitation time record includes an SOS signal whose lowest frequency, frequency step factor (S), and total number of frequencies included in the signal (M), are known.
The excitation time record (e.g., the signal vector 315 in FIG. 3) may be applied to the energy storage device 360 by the signal generator 320. The energy storage device 360 responds to the stimulus
signal 322, and the resulting RTR is captured with a sample
period that is compatible with Nyquist constraints. It is assumed that the captured time record has a duration of N discrete points.
A CTC algorithm includes a finite SOS input signal and a captured response from an ESD. The response is processed using constants and parameters associated with the signal (duration, frequencies of
interest, etc.) to yield the real and imaginary components of the signal's frequency spectrum at each discrete frequency of interest.
Consider a RTR that includes a sum of phase shifted sine waves in discrete time, as shown in Equation (1):
$$y(n) = \sum_{i=1}^{M} A_i \sin\left(\frac{2\pi}{T_i}\,n\,\Delta t + \Phi_i\right) \qquad (1)$$
Where:
A_i is the unknown amplitude at the i-th frequency
Φ_i is the unknown phase shift at the i-th frequency
n is the discrete time step integer (a total of N discrete points)
Δt is the sample time step
M is the number of frequencies
T_i is the period of the i-th frequency
Note that y(n) for n=0, 1, 2, 3 . . . , N is the measured response obtained by sending a current signal through an ESD and sampling its voltage response.
Equation (1) may be converted into its in-phase and quadrature components using the trigonometric identity of sin(α+β)=sin(α)cos(β)+cos(α) sin(β), as shown in Equation (2):
$$y(n) = \sum_{i=1}^{M} a_i \sin\left(\frac{2\pi}{T_i}\,n\,\Delta t\right) + b_i \cos\left(\frac{2\pi}{T_i}\,n\,\Delta t\right) \qquad (2)$$
Where: a_i = A_i cos(Φ_i) and b_i = A_i sin(Φ_i).
Equation (2) forms the basis of an SOS signal that may be analyzed to determine a deviation matrix in either the time domain or the frequency domain.
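To make Equations (1) and (2) concrete, a sum-of-sines time record can be synthesized in a few lines. The sketch below is illustrative only; the function name and all parameter values are mine, not the patent's:

```python
import numpy as np

# Sketch of generating a sum-of-sines (SOS) time record per Equation (1).
# All names and parameter values here are illustrative, not from the patent.
def sos_signal(amps, phases, periods, dt, N):
    """Return y(n) = sum_i A_i * sin(2*pi/T_i * n*dt + Phi_i) for n = 0..N-1."""
    n = np.arange(N)
    return sum(A * np.sin(2 * np.pi / T * n * dt + phi)
               for A, phi, T in zip(amps, phases, periods))
```

With M components and dt chosen to satisfy the Nyquist constraint for the highest frequency, this produces an excitation time record of the kind applied to the device.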
1.1 Time Domain CTC Algorithm
For the time domain CTC algorithm, Equation (2) is assumed to fit every point of the captured time record, and this results in an over-determined system of equations. Therefore, the CTC analysis takes the form of a least squares solution that minimizes the squared error terms. A pseudoinverse matrix is used to obtain a solution for all $a_i$, $b_i$ in Equation (2). If this equation is rewritten in matrix multiplication form, it becomes Equation (3). Expanding Equation (3) for each value of n from 0 to N (i.e., each datum in the time record) yields Equation (4).
$$y(n) = \left[\sin\left(\tfrac{2\pi}{T_1}n\Delta t\right), \sin\left(\tfrac{2\pi}{T_2}n\Delta t\right), \ldots, \sin\left(\tfrac{2\pi}{T_M}n\Delta t\right), \cos\left(\tfrac{2\pi}{T_1}n\Delta t\right), \cos\left(\tfrac{2\pi}{T_2}n\Delta t\right), \ldots, \cos\left(\tfrac{2\pi}{T_M}n\Delta t\right)\right]\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \\ b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix} \qquad (3)$$

$$\underbrace{\begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(N) \end{bmatrix}}_{[Y]} = \underbrace{\begin{bmatrix} \sin\left(\tfrac{2\pi}{T_1}\,0\,\Delta t\right) & \cdots & \sin\left(\tfrac{2\pi}{T_M}\,0\,\Delta t\right) & \cos\left(\tfrac{2\pi}{T_1}\,0\,\Delta t\right) & \cdots & \cos\left(\tfrac{2\pi}{T_M}\,0\,\Delta t\right) \\ \sin\left(\tfrac{2\pi}{T_1}\Delta t\right) & \cdots & \sin\left(\tfrac{2\pi}{T_M}\Delta t\right) & \cos\left(\tfrac{2\pi}{T_1}\Delta t\right) & \cdots & \cos\left(\tfrac{2\pi}{T_M}\Delta t\right) \\ \vdots & & \vdots & \vdots & & \vdots \\ \sin\left(\tfrac{2\pi}{T_1}N\Delta t\right) & \cdots & \sin\left(\tfrac{2\pi}{T_M}N\Delta t\right) & \cos\left(\tfrac{2\pi}{T_1}N\Delta t\right) & \cdots & \cos\left(\tfrac{2\pi}{T_M}N\Delta t\right) \end{bmatrix}}_{[C]}\underbrace{\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \\ b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix}}_{\left[\begin{smallmatrix} A \\ B \end{smallmatrix}\right]} \qquad (4)$$
In Equation (4), there are N time steps, and a solution for all the $a_i$ and $b_i$ values can be obtained by the least squares method of pseudoinverse. Equation (4) is important to time domain CTC and hence labels are given to its respective parts. The captured response time record
is referred to as the Y vector, the matrix with the sine and cosine terms is called the C matrix, and the matrix with the sine and cosine coefficients is called the A and B Matrix.
Thus, Equation (4) may be rewritten into Equation (5) using this terminology, where:
[Y] is a column vector of length (N+1), also called the time record
[C] is a matrix of size (N+1) by (2M) composed of sines and cosines
$\begin{bmatrix} A \\ B \end{bmatrix}$ is a column vector of length (2M), where A includes the in-phase coefficients and B includes the quadrature coefficients:

$$[Y] = [C]\begin{bmatrix} A \\ B \end{bmatrix} \qquad (5)$$
To find the in-phase and quadrature components, Equation (5) is solved for the A and B Matrix as shown in Equation (6). Note that C is a constant matrix and therefore the pseudoinverse of C is also a constant matrix. Thus, in Equation (6) the Y matrix represents the sampled RTR.

$$\begin{bmatrix} A \\ B \end{bmatrix} = \left[[C]^T[C]\right]^{-1}[C]^T\,[Y] \qquad (6)$$
The time domain CTC technique is applied to find the impedance by stimulating an energy storage device with the ETR and recording the response as the RTR. The C matrix is built using sines and
cosines at the known frequencies present in the SOS. Thus, the term $\left[[C]^T[C]\right]^{-1}[C]^T$ represents the deviation matrix of coefficients. The Y matrix is the RTR, which represents the current due to the ETR multiplied by the impedance of the battery. The A and B Matrix represents the
unknown in-phase and quadrature components of the battery's impedance. As a result, using Equation (6), the deviation matrix is applied to the RTR to determine a discrete impedance spectrum for the
energy storage device compensated for crosstalk. The A and B values may then be further processed, for example, to be displayed as a Nyquist plot.
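As a concrete illustration of the time domain procedure, the following sketch builds the C matrix of Equation (4) and solves Equation (6) by least squares, then applies Equation (7). The function and variable names are hypothetical; a production implementation would precompute the constant pseudoinverse term outside the measurement timeline, as described earlier:

```python
import numpy as np

# Minimal sketch of the time domain CTC solve (Equations (4), (6), (7)).
# Function and argument names are illustrative, not from the patent.
def time_domain_ctc(y, periods, dt):
    """Recover in-phase (a_i) and quadrature (b_i) coefficients from a
    sampled response time record y, plus their polar form."""
    n = np.arange(len(y))
    # C matrix of Equation (4): one sine column, then one cosine column,
    # per frequency of interest.
    C = np.column_stack(
        [np.sin(2 * np.pi / T * n * dt) for T in periods]
        + [np.cos(2 * np.pi / T * n * dt) for T in periods])
    # Equation (6): [A;B] = (C^T C)^{-1} C^T [Y]; lstsq computes the same
    # least-squares solution without forming the pseudoinverse explicitly.
    ab = np.linalg.lstsq(C, np.asarray(y), rcond=None)[0]
    M = len(periods)
    a, b = ab[:M], ab[M:]
    # Equation (7): magnitude and phase at each frequency.
    return a, b, np.sqrt(a**2 + b**2), np.arctan2(b, a)
```

Since C depends only on the test parameters (N, Δt, and the frequencies of interest), the pseudoinverse is the part that can be computed ahead of time.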
In the time domain CTC solution, the A and B terms taken as pairs, $a_i$, $b_i$, can be converted into a polar form of magnitude and phase at each frequency based on Equation (7):

$$M_i = \sqrt{(a_i)^2 + (b_i)^2}, \qquad \varphi_i = \operatorname{atan}\left(\frac{b_i}{a_i}\right) \qquad (7)$$

Where: $a_i$, $b_i$ are the real and imaginary parts for the $i$-th frequency
$M_i$, $\varphi_i$ are the magnitude and phase parts for the $i$-th frequency
1.2 Frequency Domain CTC Algorithm
The frequency domain CTC algorithm takes the RTR signal in Equation (2), that includes a finite sum of sines (SOS) with unknown magnitudes and phases, processes it using constants and parameters
associated with the signal (duration, frequencies of interest, etc.), and yields the in-phase and quadrature components of the signal's frequency spectrum at each discrete frequency of interest. The
following are derivations used to implement the frequency domain CTC algorithm.
A Discrete Fourier Transform, defined in Equation (8), may be applied to Equation (2) to give the RTR signal in the frequency domain shown in Equation (9). Using Euler and summation identities,
Equation (9) can be expanded to the form shown in Equation (10).
$$FT(y(n)) = \sum_{n=-\infty}^{\infty} y(n)\,e^{-j\omega n} \qquad (8)$$

$$FT(y(n)) = \sum_{n=0}^{N-1}\left(\sum_{i=1}^{M} a_i \sin\left(\frac{2\pi}{T_i}\,n\Delta t\right) + b_i \cos\left(\frac{2\pi}{T_i}\,n\Delta t\right)\right)e^{-j\omega n} \qquad (9)$$

$$Y(\omega) = \sum_{i=1}^{M}\left(\left(\frac{a_i}{2j} + \frac{b_i}{2}\right)\left(\frac{1 - e^{jN\left(\frac{2\pi}{T_i}\Delta t - \omega\right)}}{1 - e^{j\left(\frac{2\pi}{T_i}\Delta t - \omega\right)}}\right) + \left(\frac{b_i}{2} - \frac{a_i}{2j}\right)\left(\frac{1 - e^{-jN\left(\frac{2\pi}{T_i}\Delta t + \omega\right)}}{1 - e^{-j\left(\frac{2\pi}{T_i}\Delta t + \omega\right)}}\right)\right) \qquad (10)$$
Equation (10) can then be simplified to Equation (11) by converting the exponential terms to sinc functions.
$$Y(\omega) = \sum_{i=1}^{M}\left(\left(\frac{a_i}{2j} + \frac{b_i}{2}\right)N e^{-j\left(\frac{N-1}{2}\right)\left(\frac{2\pi}{T_i}\Delta t - \omega\right)}\,\frac{\operatorname{sinc}\left(N\left(\frac{\pi}{T_i}\Delta t - \frac{\omega}{2}\right)\right)}{\operatorname{sinc}\left(\frac{\pi}{T_i}\Delta t - \frac{\omega}{2}\right)} + \left(\frac{b_i}{2} + \frac{a_i}{2j}\right)N e^{j\left(\frac{N-1}{2}\right)\left(\frac{2\pi}{T_i}\Delta t + \omega\right)}\,\frac{\operatorname{sinc}\left(N\left(\frac{\pi}{T_i}\Delta t + \frac{\omega}{2}\right)\right)}{\operatorname{sinc}\left(\frac{\pi}{T_i}\Delta t + \frac{\omega}{2}\right)}\right) \qquad (11)$$
For synchronous detection in the frequency domain, Equation (11) may be convolved with a discrete Fourier Transform of a sine and a cosine, normalizing to the time record, and evaluating at ω=0.
Continuous convolution of a periodic function in frequency may be represented as shown in Equation (12):
$$y(\omega) = \frac{1}{2\pi}\int_{2\pi} x(\zeta)\,h(\omega - \zeta)\,d\zeta \qquad (12)$$
Applying the convolution expression in Equation (12) to the RTR in Equation (11) relative to the sine and cosine functions yields the expressions shown in Equations (13) and (14), where, for compactness,

$$S_i^{\pm}(\omega_n) = \frac{\operatorname{sinc}\left(N\left(\frac{\pi}{T_i}\Delta t \pm \frac{\omega_n}{2}\right)\right)}{\operatorname{sinc}\left(\frac{\pi}{T_i}\Delta t \pm \frac{\omega_n}{2}\right)}$$

$$Y(\omega_n)_{\sin} = \sum_{i=1}^{M}\left(a_i\left[\cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-} - \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+}\right] + b_i\left[\sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} - \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-}\right]\right) \qquad (13)$$

$$Y(\omega_n)_{\cos} = \sum_{i=1}^{M}\left(a_i\left[\sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} + \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-}\right] + b_i\left[\cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} + \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-}\right]\right) \qquad (14)$$

These synchronously detected response signals in Equations (13) and (14) for sine and cosine, respectively, can be simplified as shown in Equations (19) and (20), where the C matrices are as defined in Equations (15) through (18).

$$C_{a_{ni}}^{\sin} = \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-} - \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} \qquad (15)$$

$$C_{b_{ni}}^{\sin} = \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} - \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-} \qquad (16)$$

$$C_{a_{ni}}^{\cos} = \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} + \sin\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-} \qquad (17)$$

$$C_{b_{ni}}^{\cos} = \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t + \omega_n\right)\right)S_i^{+} + \cos\left(\left(\tfrac{N-1}{2}\right)\left(\tfrac{2\pi}{T_i}\Delta t - \omega_n\right)\right)S_i^{-} \qquad (18)$$

Hence Equations (13) and (14) can be written as:

$$Y(\omega_n)_{\sin} = \sum_{i=1}^{M}\left(a_i C_{a_{ni}}^{\sin} + b_i C_{b_{ni}}^{\sin}\right) \qquad (19)$$

$$Y(\omega_n)_{\cos} = \sum_{i=1}^{M}\left(a_i C_{a_{ni}}^{\cos} + b_i C_{b_{ni}}^{\cos}\right) \qquad (20)$$
Thus, Equations (19) and (20) can be combined in a single matrix equation for a finite M as shown in Equation (21).
$$\underbrace{\begin{bmatrix} Y(\omega_1)_{\sin} \\ Y(\omega_2)_{\sin} \\ \vdots \\ Y(\omega_M)_{\sin} \\ Y(\omega_1)_{\cos} \\ Y(\omega_2)_{\cos} \\ \vdots \\ Y(\omega_M)_{\cos} \end{bmatrix}}_{[Y]} = \underbrace{\begin{bmatrix} C_{a_{11}}^{\sin} & C_{a_{12}}^{\sin} & \cdots & C_{a_{1M}}^{\sin} & C_{b_{11}}^{\sin} & C_{b_{12}}^{\sin} & \cdots & C_{b_{1M}}^{\sin} \\ C_{a_{21}}^{\sin} & C_{a_{22}}^{\sin} & \cdots & C_{a_{2M}}^{\sin} & C_{b_{21}}^{\sin} & C_{b_{22}}^{\sin} & \cdots & C_{b_{2M}}^{\sin} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ C_{a_{M1}}^{\sin} & C_{a_{M2}}^{\sin} & \cdots & C_{a_{MM}}^{\sin} & C_{b_{M1}}^{\sin} & C_{b_{M2}}^{\sin} & \cdots & C_{b_{MM}}^{\sin} \\ C_{a_{11}}^{\cos} & C_{a_{12}}^{\cos} & \cdots & C_{a_{1M}}^{\cos} & C_{b_{11}}^{\cos} & C_{b_{12}}^{\cos} & \cdots & C_{b_{1M}}^{\cos} \\ C_{a_{21}}^{\cos} & C_{a_{22}}^{\cos} & \cdots & C_{a_{2M}}^{\cos} & C_{b_{21}}^{\cos} & C_{b_{22}}^{\cos} & \cdots & C_{b_{2M}}^{\cos} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ C_{a_{M1}}^{\cos} & C_{a_{M2}}^{\cos} & \cdots & C_{a_{MM}}^{\cos} & C_{b_{M1}}^{\cos} & C_{b_{M2}}^{\cos} & \cdots & C_{b_{MM}}^{\cos} \end{bmatrix}}_{[C]}\underbrace{\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_M \\ b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix}}_{\left[\begin{smallmatrix} A \\ B \end{smallmatrix}\right]} \qquad (21)$$
In Equation (21), the elements in the Y matrix represent the synchronously detected results of the RTR, which is expressed as the matrix multiplication of the deviation matrix, C, and the unknown
in-phase and quadrature coefficients in the A and B Matrix. As with the time domain CTC, the C matrix is constant and is derived from the test parameters (duration, frequencies of interest, etc.). Therefore, Equation (21) can be simplified as shown in Equation (22). If the C matrix is non-singular, the A and B Matrix has the solution shown in Equation (23).

$$[Y] = [C]\begin{bmatrix} A \\ B \end{bmatrix} \qquad (22)$$

$$\begin{bmatrix} A \\ B \end{bmatrix} = [C]^{-1}[Y] \qquad (23)$$
The frequency domain CTC technique is applied to find impedance of the energy storage device by stimulating the energy storage device with the ETR and recording the response as the RTR. The C matrix
and its inverse may be pre-calculated using the number of points in the time record N, the sample rate, and the frequencies of interest such that the deviation matrix is represented by the inverse C
matrix. Synchronous detection is performed on the RTR signal to obtain the Y(ω) terms relative to the sine and cosine. At this point, the Y(ω) function includes both the response of interest and the
cross-talk error. Using Equation (23), the deviation matrix is applied to the RTR to determine a discrete impedance spectrum compensated for crosstalk by solving for the A and B values. The A and B
values are the real and imaginary components of the ESD impedance, which may then be further processed, for example, to be displayed as a Nyquist plot.
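The frequency domain procedure can be sketched similarly. Rather than evaluating the closed-form sinc expressions of Equations (15) through (18), the illustration below builds the same constant C matrix numerically by synchronously detecting unit-amplitude basis signals; for a fixed N, sample period, and set of frequencies of interest, this yields the same deviation matrix. All names and parameter values here are hypothetical, not from the patent:

```python
import numpy as np

# Illustrative sketch of frequency domain CTC (Equations (21)-(23)).
# The constant C matrix is built numerically by synchronously detecting
# each unit-amplitude basis signal; names and parameters are illustrative.
def sync_detect(y, freqs, dt):
    """Stacked synchronous detection [Y(w_n)_sin; Y(w_n)_cos] of record y."""
    n = np.arange(len(y))
    s = [2.0 / len(y) * np.dot(y, np.sin(2 * np.pi * f * n * dt)) for f in freqs]
    c = [2.0 / len(y) * np.dot(y, np.cos(2 * np.pi * f * n * dt)) for f in freqs]
    return np.array(s + c)

def freq_domain_ctc(y, freqs, dt):
    n = np.arange(len(y))
    basis = ([np.sin(2 * np.pi * f * n * dt) for f in freqs]
             + [np.cos(2 * np.pi * f * n * dt) for f in freqs])
    # Column k of C holds the crosstalk of basis signal k into every
    # detection channel -- the deviation matrix for this N, dt, freqs.
    C = np.column_stack([sync_detect(s, freqs, dt) for s in basis])
    ab = np.linalg.solve(C, sync_detect(y, freqs, dt))  # Equation (23)
    M = len(freqs)
    return ab[:M], ab[M:]
```

Because the synchronously detected Y vector is an exactly linear function of the unknown coefficients, applying the inverse C matrix recovers the in-phase and quadrature components even when the record length does not contain a whole number of cycles of every frequency.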
In the frequency domain CTC solution, the A and B terms taken as pairs, $a_i$, $b_i$, can be converted into a polar form of magnitude and phase at each frequency based on Equation (7).
2.0 Validation of the CTC Algorithm
The time domain CTC and the frequency domain CTC algorithms as defined above have been verified with MATLAB matrix calculation computer software simulations and the Lumped Parameter Model (LPM). The
LPM, shown in FIG. 5, is a linear model that is used to predict battery behavior under pulse conditions. However, it is also useful for validating the CTC algorithms since the theoretical response of the LPM, given a
known ETR, can be determined and compared to the simulated results.
The theoretical response of the LPM is shown in Equation (24). Selected parameter values representing a typical lithium-ion cell under pulse conditions are shown in Table 1. These data are used in
conjunction with Equation (24) to determine the ideal response of an ETR at each frequency of interest within the sum-of-sines signal.
TABLE 1 — LPM Parameters

Parameter | Value
----------|-----------
Voc       | 3.8 V
Cp        | 666.67 F
Coc       | 1,666.67 F
Ro        | 0.025 Ω
Rp        | 0.015 Ω

$$Z(s) = R_0 + \frac{1}{sC_{oc}} + \frac{\dfrac{R_p}{sC_p}}{R_p + \dfrac{1}{sC_p}} \qquad (24)$$
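For reference, Equation (24) with the Table 1 values can be evaluated directly at $s = j2\pi f$. The helper below is an illustrative sketch; the function name is mine, not the patent's:

```python
import numpy as np

# Evaluating the Lumped Parameter Model of Equation (24) with the
# Table 1 parameter values; the helper name is illustrative.
R0, Rp, Cp, Coc = 0.025, 0.015, 666.67, 1666.67

def lpm_impedance(f_hz):
    """Ideal LPM impedance Z(s) at s = j*2*pi*f."""
    s = 2j * np.pi * f_hz
    return R0 + 1 / (s * Coc) + (Rp / (s * Cp)) / (Rp + 1 / (s * Cp))
```

At 2,000 Hz both capacitive terms are negligible and the impedance is essentially the ohmic resistance R0, while toward 0.1 Hz the series and parallel capacitances contribute and the magnitude grows slightly.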
For these validation studies, the frequencies within the SOS signal will be spread logarithmically between 0.1 Hz and 2,000 Hz. The step size is governed by Equation (25), where M frequencies are selected between the maximum and minimum values ($f_{max}$ and $f_{min}$, respectively).

$$\text{Step Size} = \left(\frac{f_{max}}{f_{min}}\right)^{\left(\frac{1}{M-1}\right)} \qquad (25)$$
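The step factors quoted in the following paragraphs follow directly from Equation (25); for example (an illustrative computation, with a hypothetical helper name):

```python
# Computing the logarithmic frequency spread of Equation (25) for the
# 0.1 Hz - 2,000 Hz validation range; M is the number of frequencies.
def sos_frequencies(f_min, f_max, M):
    step = (f_max / f_min) ** (1.0 / (M - 1))
    return step, [f_min * step**k for k in range(M)]
```

With M = 30 this gives a step factor of about 1.41, and with M = 48 about 1.23, matching the step factors quoted below.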
FIGS. 6A and 6B show the magnitude and phase response given an ETR with a sum-of-sines including 30 frequencies (i.e., M=30 and the step size is 1.41). The ideal response of the LPM based on Equation
(24) is shown in the open circles. Assuming no deviation matrix were available, the synchronously detected results from the recursively solved voltage response of the LPM is given by the "+" symbols.
As shown, the uncompensated results do well at high frequencies, but get significantly worse as the frequency within the SOS is reduced. When the deviation matrix is included, the Time CTC
synchronously detected results, shown with the "*" symbol, yield significant improvement at the lower frequencies. Thus, the crosstalk error with 30 frequencies is successfully eliminated with the
CTC approach.
FIGS. 7A and 7B illustrate the magnitude and phase of the time domain CTC response given 48 frequencies. These data are also compared to the theoretical response of the LPM given Equation (24). As
shown, the CTC methodology is still able to successfully resolve the impedance spectra with a step factor as low as 1.23.
FIGS. 8A and 8B illustrate the magnitude and phase of the time domain CTC response given 49 frequencies. These data are also compared to the theoretical response of the LPM given Equation (24). As
shown, some errors at low frequency begin to appear in the CTC methodology in this case. This indicates that error terms can increase with smaller step factors.
An error metric between the ideal response of the LPM and the time domain CTC simulation results can be defined as shown in Equation (26). Table 2 provides the error metric value for the magnitude,
phase, real, and imaginary components given an increasing number of frequencies within the SOS input signal (M=45, 46, 47, 48, and 49). As shown, the error increases when more frequencies are
included, and generally jumps more than an order of magnitude between M=48 and M=49.

$$\text{Maximum Single Error} = \max\left(\left|\text{TimeCTC} - \text{Ideal}\right|\right) \qquad (26)$$
TABLE 2 — Time CTC Maximum Single Error for Different Numbers of Frequencies

          | M = 45      | M = 46      | M = 47      | M = 48      | M = 49
----------|-------------|-------------|-------------|-------------|------------
Mag. Err  | 3.8223e-008 | 1.4819e-006 | 5.7584e-007 | 7.3563e-006 | 1.3004e-004
Phase Err | 8.9992e-005 | 0.0043      | 0.0024      | 0.0194      | 0.2157
Real Err  | 3.5676e-008 | 1.3544e-006 | 6.6492e-007 | 7.4177e-006 | 1.2314e-004
Imag. Err | 3.9899e-008 | 1.8650e-006 | 1.0144e-006 | 8.4383e-006 | 9.7847e-005
A similar effect was observed when assessing the RTR using the frequency domain CTC approach. Table 3 provides the error metric value based on Equation (26) for the magnitude, phase, real, and
imaginary components given an increasing number of frequencies within the SOS input signal. The error terms in the frequency domain are slightly larger than the Time CTC and this may be due to
rounding errors caused by increased computational requirements for frequency domain CTC. As with time domain CTC, however, the error metric significantly increases between M=48 and M=49.
Nevertheless, the error term with 49 frequencies within a SOS excitation signal is still relatively low and yields a very high resolution impedance spectrum within one period of the lowest frequency
for state-of-health assessment.
TABLE 3 — Frequency CTC Maximum Single Error for Different Numbers of Frequencies

          | M = 45      | M = 46      | M = 47      | M = 48      | M = 49
----------|-------------|-------------|-------------|-------------|------------
Mag. Err  | 1.3017e-007 | 3.1605e-006 | 8.0059e-006 | 5.2176e-005 | 1.3420e-004
Phase Err | 4.2931e-004 | 0.0090      | 0.0189      | 0.1512      | 0.3319
Real Err  | 1.1515e-007 | 2.8842e-006 | 7.4597e-006 | 4.8384e-005 | 1.2433e-004
Imag. Err | 1.8504e-007 | 3.9234e-006 | 8.2796e-006 | 6.6541e-005 | 1.4705e-004
2.1 Half Period Simulation
Since CTC is not limited by harmonically-related frequencies within the SOS excitation signal, one significant advantage is that it is not limited to one period of the lowest frequency to complete
the measurement. Faster measurements can be achieved at the expense of resolution in the impedance spectrum measurement. For example, it is possible with CTC to measure the impedance spectrum within
the range of 0.1 Hz to 2,000 Hz within a half period of the lowest frequency (i.e., a 5-s measurement).
FIGS. 9A and 9B illustrate half-period simulations for the time domain CTC, frequency domain CTC, and an ideal impedance simulation for a magnitude response and a phase response, respectively. Notice
that reducing the duration of testing reduces the number of frequencies that can be fit in a given range (M=15 in this case). However, the CTC algorithms still closely match the ideal impedance
simulations. As a result, a SOS using one half the period of the lowest frequency can still yield good results if a decrease in frequency resolution is acceptable.
Of course, other fractions less than the full period of the lowest frequency may also be used.
While the invention is susceptible to various modifications and implementation in alternative forms, specific embodiments have been shown by way of non-limiting examples in the drawings and have been
described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention includes all modifications,
equivalents, and alternatives falling within the scope of the following appended claims and their legal equivalents.
Patent applications by Chester G. Motloch, Idaho Falls, ID US
Patent applications by John L. Morrison, Butte, MT US
Patent applications by Jon P. Christophersen, Idaho Falls, ID US
Patent applications by William H. Morrison, Manchester, CT US
Patent applications by BATTELLE ENERGY ALLIANCE, LLC
Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
Posted by Urs Schreiber
In May there is the following workshop in Cardiff:
• Workshop on Higher Gauge Theory, TQFT and Categorification
Monday 9th - Tuesday 10th May 2011
School of Mathematics, Cardiff University
organized by WIMCS, more precisely by the Mathematical Physics Cluster of the Wales Institute of Mathematical and Computational Sciences. Timothy Porter is doing the organisation of the programme
whilst the hard work is being done by Mathew Pugh and David Evans down in Cardiff.
Below are some of the speakers, talk titles and abstracts. The list is incomplete. If you are a speaker and would like more information about your talk be visible here, please send me an email.
• Aristide Baratin (Orsay, Paris)
State sum invariant from a 2-category
Abstract: ‘State sum models’ are discrete functional path integrals. Using the combinatorics of the Pachner moves of the triangulation to convert a topological problem into an algebraic one,
these models can be used to define manifold invariants and topological quantum field theories. Just as 3d state sum models can be built using categories of group representations, 2-categories of
2-group representations may provide interesting state sum models for 4d quantum topology, if not quantum gravity. I will describe the construction of the first non-trivial example of a such
models, based on the representations of the ‘Euclidean 2-group’, built from the rotation group SO(4) and its action on the translation group of Euclidean space. I will show that this model gives
a new way to compute Feynman integrals for ordinary quantum field theories on 4d Euclidean spacetime.
• Benjamin Bahr (Cambridge)
State-sum models and coarse graining in quantum gravity
Abstract: In this talk a general framework will be reviewed, in which to formulate physical theories discretized on two-complexes, being closely related to manifold invariants constructed from
TQFTs. Examples for such theories are Lattice gauge theories, Ising spin systems, and in particular the Spin Foam approach to quantum gravity. In the rest of the talk, the problem of finding a
continuum limit for the discretized theories is discussed, and it is shown how this is related to constructing triangulation-independent state-sum models, and renormalization in statistical field
• Jeffrey Giansiracusa (Bath)
Topological field theory and deformation theory
Abstract: This is a speculative talk about some very early stage research in progress. Costello proved that an open string topological conformal field theory is essentially the same as an $A_\infty$ algebra, and that an open TCFT has a universal closed counterpart that is closely related to the deformation theory of the open part. Recently B. Cooper found a $C_\infty$ analogue of Costello’s theorems. I’ll discuss these theorems and how they point towards the possibility of a generalized Deligne Conjecture that would construct from any (cyclic or modular) operad $P$ the
• Alexander Kahle (Göttingen)
Higher Abelian Gauge Theory and Differential Cohomology
Abstract: In this talk I will describe how the subject of differential cohomology gives a useful framework for discussing higher gauge theory, and is in some sense necessitated by Dirac charge
quantisation. The talk will be example driven, and I hope to discuss ordinary and higher Maxwell theory, as well as some of the theory of Ramond-Ramond fields and D-Branes, and how (twisted)
K-theory enters the picture there. Time permitting, I will mention some recent work with Alessandro Valentino investigating T-Duality in this context.
• Jeffrey Morton (Lisbon)
ETQFT by Induced Representations
Abstract: Extended topological quantum field theory (ETQFT) is the categorification of ordinary TQFT, described in terms of a 2-functor from a cobordism category into 2-vector spaces. I describe
how a class of such ETQFT’s can be given, one for any finite group G, via gauge theory. This uses a universal construction a category of groupoids and spans, through the adjunction between
restriction and induction of representations. I will describe this construction and sketch the generalization to (compact) Lie groups.
• Urs Schreiber (Utrecht)
$\infty$-Connections and their Chern-Simons functionals
Abstract: I indicate a general theory of higher gauge fields/higher connections and how they naturally come with their higher Chern-Simons functionals / Chern-Weil homomorphisms. Then I
demonstrate some applications to supergravity, such as the description of the Green-Schwarz mechanism by twisted differential string structures.
(aspects of chapter 4 of differential cohomology in a cohesive topos)
• Jamie Vicary (Oxford)
123 TQFTs
Abstract: I will present some new results on classifying 123 TQFTs, using a 2-categorical approach. The invariants defined by a TQFT are described using a new graphical calculus, which
makes them easier to define and to work with. Some new and interesting physical phenomena are brought out by this perspective, which we investigate. I will finish by banishing some TQFT myths!
This talk is based on joint work with Bruce Bartlett, Chris Schommer-Pries and Chris Douglas.
• Konrad Waldorf (Regensburg)
Geometric string structures and supersymmetric sigma models
Abstract: I describe a simple, finite-dimensional definition of geometric string structures, equivalent to the ones of Stolz and Teichner. Geometric string structures play an important role for
the path integral quantization of supersymmetric sigma models, and I try to explain this using recent work of Bunke and a new transgression formula for 2-gerbes.
• Christoph Wockel (Hamburg)
A smooth model for the string group
Abstract: There have been various constructions of the string group that are suited for differential geometric applications. It cannot be a finite-dimensional Lie group, so one of the most
natural things to expect would be an infinite-dimensional Lie group. Surprisingly, such a model did not exist in the past. In this talk we explain how to construct such a model, based on the
construction of a topological model by Stolz. Moreover, we show how this can be extended to a Lie 2-group model, making explicit comparisons between ordinary and categorical differential geometry.
Posted at April 21, 2011 9:25 AM UTC
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
How about a link to
Jeffrey Giansiracusa (Bath)
Open closed field theories and deformation theory
his page or even just his e-address?
Posted by: jim stasheff on April 21, 2011 1:14 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
There are webpages for Jeff at Bath, Swansea and Oxford!
This is one of them
Posted by: Tim Porter on April 21, 2011 4:49 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
and I see that the Oxford one is now linked to the café entry.
Posted by: Tim Porter on April 21, 2011 4:51 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
Posted by: Urs Schreiber on April 21, 2011 4:51 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
I should have mentioned that this short meeting is organised by WIMCS, more precisely by the Mathematical Physics Cluster of the Wales Institute of Mathematical and Computational Sciences. I am doing
the organisation of the programme whilst the hard work is being done by Mathew Pugh and David Evans down in Cardiff.
Posted by: Tim Porter on April 22, 2011 6:38 AM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
I have now added in the titles and abstracts for the talks by Jeffrey Giansiracusa, Jamie Vicary and Konrad Waldorf (kindly provided by Tim Porter).
(I have also added the details on the conference organization responsibilities, as mentioned by Tim.)
Posted by: Urs Schreiber on April 23, 2011 10:52 AM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
Konrad and I will meet at 7pm local time down in the lobby of Parc Hotel in order to join forces to find some dinner. Anyone reading this here and also based in Parc Hotel or vicinity, feel invited
to join us!
Posted by: Urs Schreiber on May 8, 2011 6:08 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
I made some “introductory and survey”-notes on $\infty$-Chern-Simons theory. Here.
Posted by: Urs Schreiber on May 9, 2011 1:04 PM | Permalink | Reply to this
Re: Workshop on Higher Gauge Theory, TQFT and Categorification in Cardiff
Konrad Waldorf in his talk reviewed the recent rigorous version by Uli Bunke of the old argument by Killingback of the quantum anomaly cancellation on the worldsheet for the heterotic superstring,
coming from a string structure on the target.
I used the occasion to start an entry fermionic path integral. This is now effectively 2/3rds of the notes of Konrad’s talk. He proceeded with discussion as can be seen at differential string structure.
Have to run now to the next talk…
Posted by: Urs Schreiber on May 10, 2011 2:57 PM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2011/04/workshop_on_higher_gauge_theor.html","timestamp":"2014-04-16T10:10:58Z","content_type":null,"content_length":"32733","record_id":"<urn:uuid:afaa9806-8101-43f3-be3b-700fcd946e31>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
11.8: Surveys and Data Displays
Created by: CK-12
The Bedtime Survey
Kelly is wishing for a new bedtime. Her parents insist that she heads to bed at 9 pm each night and Kelly wants to go to bed at 9:30 pm. After a long argument, she decides to conduct a survey of the
other kids in her class to figure out how many kids go to bed at 9 pm, 9:30 pm and 10 pm or later. She tells her penpal Justin about her plan and he decides to conduct the same survey in New Zealand.
Kelly is excited. Maybe with enough data from two different countries, her parents will allow her to change her bedtime. Kelly writes up the survey and asks the 25 students in her class to
participate. She asks Justin to survey 25 students as well so that their data can be easily compared. If they survey different numbers of students, it will be more challenging to compare the data.
Here are Kelly’s results.
9 pm = 7 students
9:30 pm = 12 students
10 pm or later = 6 students
Justin conducts a survey and emails Kelly the results. Here is what he discovered.
9 pm = 2 students
9:30 pm = 7 students
10 pm or later = 16 students
Kelly is amazed that most of the students that Justin surveyed go to bed at 10 pm or later. She knows that will never fly with her parents, but she might have collected enough evidence to get to 9:30
Next, Kelly wants to create a display to show her parents. She wishes to create two different displays-one to show her data alone and one to show her data compared with Justin’s data. Kelly isn’t
sure how to go about it. This is where you come in-learning about data displays is the focus of this lesson. With your help Kelly will be able to complete her task!
What You Will Learn
In this lesson, you will learn how to do the following:
• Collect and organize real-world survey data.
• Choose an appropriate data display.
• Distinguish which data displays are more effective for a specific purpose.
• Analyze and interpret statistical survey data.
Teaching Time
I. Collect and Organize Real-World Survey Data
A survey is a way of collecting data based on personal information given by individuals. Oftentimes, a survey can be taken to learn personal preferences. Surveys are done all the time. They are done
at schools, by businesses, even by the government. Sometimes, television companies conduct surveys to figure out the television preferences of their viewers.
A survey is one method of working with statistics. Statistics involve collecting, analyzing and displaying data. In the introduction problem, Kelly and Justin conducted a real-world survey.
Let’s look at some of what they did to conduct the survey.
1. They decided on a question. Their question had to do with bedtimes. They asked students “What time do you go to bed at night?”
2. Next, they chose parameters. Parameters are boundaries. They chose three bedtimes for students to choose from. If the boundaries had been left open, Kelly and Justin might have had so many
different responses that it would have been difficult to organize and analyze the data. They left the last category a bit more open-10 pm or later to cover anyone who did not specifically fit in
one of the other spots.
3. Then they conducted the survey and collected the data.
4. After finishing the survey, it is time to select a way to display the data. There are many different ways to display data in a visual way. Each way has a different purpose. By becoming familiar
with the different ways to display data, a person can choose the one that best serves his/her purpose.
II. Choose an Appropriate Data Display
Now that the survey has been conducted, it is time to choose a data display. When creating a display, there are different ways to show data.
As Kelly thinks about the different ways to display data, she thinks that she wants to create two different displays.
The first one will show only her data and will be a circle graph.
The second one will show her data and Justin’s and will be a double bar graph.
Are these good choices? Kelly isn’t sure. Let’s think about each type of data display and how Kelly can show her survey results.
III. Distinguish Which Data Displays are More Effective for a Specific Purpose
Kelly wants to select the method that best serves her purpose. Let’s look at some of the ways to display data and then think about Kelly’s survey and which way might help her accomplish getting a
later bedtime.
Ways to Display Data
1. Bar Graph – A bar graph displays the frequency of data or how often data occurs.
Kelly wants to show that many students have a later bedtime than she does. Given this information, a bar graph might be a possible way for Kelly to display her individual data without including
Justin’s data.
2. Double Bar Graph – Compares the frequency of two sets of data.
When Kelly creates a display to show her data and Justin’s data, a double bar graph is a way to show both sets of data in the same spot. Given this, a double bar graph is a possible option.
3. Line Graph – shows how data changes over time.
Kelly did not conduct a survey to address how bedtimes changed over time. She conducted a survey to count how many of her peers had each bedtime. She wants to prove that many students have a
later bedtime than she does, so she should also have a later bedtime. Given her goal, a line graph is NOT a good option for Kelly.
4. Double Line Graph – compares how two sets of data change over time.
Given Kelly’s goal and the way that the survey was conducted, this is not an option for Kelly.
5. Circle Graph – shows a percentage out of a whole.
If Kelly was to change her data to show the percentage of students with each bedtime, she could probably prove to her parents that many students have a later bedtime than she does. This could be an
excellent option for Kelly.
Kelly’s selections will work.
She can create a circle graph to show her survey alone and a double bar graph to show her survey data and Justin’s.
IV. Analyze and Interpret Statistical Survey Data
Now that Kelly has chosen her two displays, she needs to analyze and interpret her data. First, to create a circle graph, Kelly needs to write her data in terms of percentages. She will need to
change each amount of the whole to a percentage for the work to make sense.
Let’s look at the survey results once again.
Kelly surveyed 25 students.
7 students have a 9 pm bedtime.
12 students have a 9:30 pm bedtime.
6 students have a bedtime that is 10 pm or later.
To convert this data to percentages, Kelly first needs to write a fraction for each bedtime.
$\frac{7}{25} =$
$\frac{12}{25} =$
$\frac{6}{25} =$
Next, Kelly needs to change each fraction to a %. To do this, she can rewrite each fraction as an equivalent fraction out of 100.
$\frac{7}{25} = \frac{28}{100} = 28\%$
$\frac{12}{25} = \frac{48}{100} = 48\%$
$\frac{6}{25} = \frac{24}{100} = 24\%$
Now Kelly has percentages, and she can create her circle graph.
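Kelly's fraction-to-percent step can be sketched in a few lines of Python (an illustration using the survey counts above; the dictionary labels are just the three bedtime choices):

```python
from fractions import Fraction

# Kelly's survey counts (25 students in total).
counts = {"9 pm": 7, "9:30 pm": 12, "10 pm or later": 6}
total = sum(counts.values())

# Rewriting each fraction with a denominator of 100 is the same as
# multiplying the fraction by 100 to read off the percent.
percentages = {bedtime: Fraction(n, total) * 100 for bedtime, n in counts.items()}

# Each value comes out as a whole-number percent: 28, 48, and 24.
assert sum(percentages.values()) == 100
```

Because the class size is 25, every percentage is a whole number and the three slices of the circle graph add up to exactly 100%.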
What about the double bar graph?
To do this, she can create two axes: the three bedtime choices along the $x$-axis and the number of students along the $y$-axis.
With a double bar graph Kelly can use the actual numbers. She doesn’t need to convert any numbers to fractions or percentages.
Now let’s revisit the original problem and create some data displays.
Real Life Example Completed
The Bedtime Survey
Here is the original problem once again. Reread it before beginning.
Kelly is wishing for a new bedtime. Her parents insist that she heads to bed at 9 pm each night and Kelly wants to go to bed at 9:30 pm. After a long argument, she decides to conduct a survey of the
other kids in her class to figure out how many kids go to bed at 9 pm, 9:30 pm and 10 pm or later. She tells her penpal Justin about her plan and he decides to conduct the same survey in New Zealand.
Kelly is excited. Maybe with enough data from two different countries, her parents will allow her to change her bedtime. Kelly writes up the survey and asks the 25 students in her class to
participate. She asks Justin to survey 25 students as well so that their data can be easily compared. If they survey different numbers of students, it will be more challenging to compare the data.
Here are Kelly’s results.
9 pm = 7 students
9:30 pm = 12 students
10 pm or later = 6 students
Justin conducts a survey and emails Kelly the results. Here is what he discovered.
9 pm = 2 students
9:30 pm = 7 students
10 pm or later = 16 students
Kelly is amazed that most of the students that Justin surveyed go to bed at 10 pm or later. She knows that will never fly with her parents, but she might have collected enough evidence to get to 9:30 pm.
Based on this lesson, Kelly has decided to create two displays. First, she will create a circle graph to show her data in terms of percentages. Then, she will create a double bar graph to show her
data in relationship to Justin’s data.
She hopes that her data will help her to prove to her parents that 9:30 is a reasonable bedtime.
Here is Kelly’s data for the circle graph.
$\frac{7}{25} = \frac{28}{100} = 28\%$
$\frac{12}{25} = \frac{48}{100} = 48\%$
$\frac{6}{25} = \frac{24}{100} = 24\%$
Next, she can use these percentages and draw them into a circle graph. Remember that a circle graph shows data out of 100%, so Kelly’s data is right on target.
Next, Kelly creates a double bar graph to show her data in comparison to Justin’s. Here is the double bar graph.
Based on these two displays, discuss with a partner whether or not you think Kelly’s parents will grant her wish for a later bedtime. Be sure to explain your thinking by citing examples and evidence
from the two surveys.
Here are the vocabulary words that are found in this lesson.
Survey: a method of collecting data where you ask a sample of people the same question. You create options for answers and then gather the data to create a display.
Statistics: the method of collecting, analyzing and displaying data.
Technology Integration
Khan Academy, Reading Pie Graphs
Khan Academy, Reading Bar Graphs
Time to Practice
Directions: Select the best display for each description of data. Choose from circle graph, line graph, double line graph, bar graph or double bar graph.
1. The percentages of people who enjoy ice cream
2. How stamp prices have changed over time
3. How stamp prices changed in 1996 and in 1998.
4. The number of students who attended college in 1990, 1991, and 1992
5. The percentages of people who prefer chocolate, vanilla or strawberry ice cream.
6. The changes in prices at one movie theater over a period of three years.
7. The changes in prices at two different movie theaters over a period of three years.
8. A graph showing how sales had declined during the past month
9. A graph showing the number of students with perfect attendance during the past three months.
10. A graph showing the number of students with perfect attendance at two different schools during the past three months.
Frank Morgan's Math Chat - BRIDGE VOIDS
September 20, 2001
OLD CHALLENGE. High school seniors will soon be applying to colleges. Can you think of a better way to match up students and colleges?
ANSWER. Perhaps not. The current system of applications and offers, however ungainly, well respects the individuality of the colleges and applicants.
OLD BRIDGE CHALLENGE. In the September 2001 Bridge Bulletin of the American Contract Bridge League (www.acbl.org), Betty Moore asks for the probability in a bridge deal that each of the four hands
has a void in a different suit.
ANSWER. Joseph DeVincentis computes the probability as .0009101%, or about 1 in 100,000. Here's a heuristic way to guess the answer. First estimate the chance that the first player is void clubs, the
second diamonds, the third hearts, and the fourth spades. For the first player, each card has about a 3/4 chance of avoiding clubs, so the probability of a void in clubs is very roughly (3/4)^13. The
probability that all four players have their respective voids is roughly (3/4)^52. Since there are 24 ways to decide which player is void which suit, the probability that each player has a void in a
different suit is very roughly 24 times (3/4)^52, which is about 1 in 130,000, very close to the actual 1 in 100,000. Such accuracy is a result of errors canceling each other. The probability that a
single card avoids a particular suit is sometimes less than 3/4: after several of the first player's cards have avoided clubs, there are more clubs left, and it becomes harder to continue to avoid
clubs. On the other hand, the probability is sometimes greater than 3/4: the abundance of clubs left makes it easier for the second player to avoid diamonds. These errors apparently cancel each other
out rather well.
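The back-of-the-envelope figure is easy to check exactly (a small sketch; `Fraction` keeps the arithmetic exact rather than rounding):

```python
from fractions import Fraction

# 24 ways to assign the four void suits to the four players, times a
# rough (3/4)^52 chance that every card avoids the "wrong" suit.
heuristic = 24 * Fraction(3, 4) ** 52

# Roughly one deal in 130,000, as stated above.
assert 120_000 < 1 / heuristic < 140_000
```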
DeVincentis's rigorous computation is based on the considering the distributions of the cards in each suit, which must be one of:
11-1-1, 10-1-2, 9-1-3, 9-2-2, 8-1-4, 8-2-3, 7-1-5, 7-2-4, 7-3-3, 6-1-6, 6-2-5, 6-3-4, 5-3-5, 5-4-4,
plus permutations of these, and a few other possibilities if additional voids beyond the required four are allowed. He calculates the number of ways for the cards in each suit to end up distributed
each of these ways (for example, there are 13x12 = 156 ways to distribute the cards in an 11-1-1 distribution). He finds all the combinations of distributions for the four suits that add up to 13
cards for each player and calculates the number of ways of dealing each one. Finally, he adds up all the results and divides by the total number of bridge deals, 52!/(13!^4). His computer finds that
482886422450433777597120 of 53644737765488792839237440000 bridge deals have the required four voids, if he assumes no other voids existed, for a probability of .000900156%. When he adds in hands with
additional voids, he gets 488220815199432625023384 deals with the four required voids, and the probability increases to .0009101%.
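The quoted totals can be reproduced directly with Python's arbitrary-precision integers (a sketch; the two large deal counts are DeVincentis's figures exactly as printed above):

```python
import math

# Total number of bridge deals: 52!/(13!)^4.
deals = math.factorial(52) // math.factorial(13) ** 4
assert deals == 53644737765488792839237440000

# DeVincentis's counts: deals with exactly the four required voids,
# and deals allowing additional voids beyond those four.
no_extra_voids = 482886422450433777597120
with_extra_voids = 488220815199432625023384

# Probabilities as percentages, matching .000900156% and .0009101%.
assert abs(100 * no_extra_voids / deals - 0.000900156) < 2e-9
assert abs(100 * with_extra_voids / deals - 0.0009101) < 1e-7
```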
QUESTIONABLE MATHEMATICS. Derek Smith writes: It seems that John Blubaugh has been banned by the American Contract Bridge League (ACBL) for shuffling in a way that keeps the ace of spades on the
bottom of the deck until it is dealt to his partner. A Dallas magician, Norman Beck, was hired by the ACBL to interpret videotapes of some of John's deals. "Guilty" was his verdict, although I hope
he wasn't basing it on the probability he gave the reporter for the Wall Street Journal (July 17, 2001, "Bridge Was His Life, until John Blubaugh was Called a Cheat"):
"The odds of a card starting at the bottom of the deck, and being there again seven shuffles later are 1.026 trillion to one, Mr. Beck says in an interview."
Walter Wright comments: "The quote seems absurd on the face of it. For example, with the simplifying assumption that the particular card is equally likely to be in any place in the deck before the
shuffle, and equally likely to be any place in the deck after the shuffle, the odds of a particular card starting at the bottom and ending up at the bottom are 2,704 to one (52x52 to one). [The odds
of staying on the bottom for seven shuffles are just 2^7 = 128 to one.] What possible assumptions could this Mr. Beck have made in order to boost the odds to 1.026 trillion to one?"
Readers are invited to submit more examples of questionable mathematics.
NEW QUESTION. I remember noticing as a kid, contemplating cutting diagonally across a square lawn, that the diagonal was about one and a half times as long as the side. (The actual ratio is the
square root of two, about 1.4.) Do you have an early mathematical memory?
Copyright 2001, Frank Morgan.
Send answers, comments, and new questions by email to Frank.Morgan@williams.edu, to be eligible for Flatland and other book awards. Winning answers will appear in the next Math Chat. Math Chat
appears on the first and third Thursdays of each month. Prof. Morgan's homepage is at www.williams.edu/Mathematics/fmorgan.
THE MATH CHAT BOOK, including a $1000 Math Chat Book QUEST, questions and answers, and a list of past challenge winners, is now available from the MAA (800-331-1622). | {"url":"http://www.maa.org/frank-morgans-math-chat-bridge-voids","timestamp":"2014-04-16T06:22:55Z","content_type":null,"content_length":"93552","record_id":"<urn:uuid:486928ec-59bd-4afd-98d9-33da8267e240>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00640-ip-10-147-4-33.ec2.internal.warc.gz"} |
Home » Browse the archive » AMT/C: Unpublished manuscripts and drafts » AMT/C/24-27
Work on morphogenesis. Folders of notes, calculations and diagrams, received from N.E. Hoskin in 1978, and additional to the material on the subject listed as C/7-10. The papers are part of the
extensive notes and drafts left at AMT's death which some of his colleagues (R.O. Gandy, N.E. Hoskin, and B. Richards) intended to prepare for publication. The material is mainly AMT's AMS and TS,
with many additions and corrections. It includes diagrams, tables, calculations, and computer routines. Sometimes both sides of the paper are used; many pages have other work on the back. Some of the
material constitutes relatively substantial paginated sequences, whilst some consists of random pages which have not been assigned a definite place or description. All titles and descriptions in
inverted commas are those which appear on the AMS. Later notes or comments in the hands of Hoskin and Gandy are identified and indicated wherever possible. See also Dr. J. Swinton's web pages http://www.swintons.net/jonathan/turing.htm for a detailed analysis of page sequences and contents.
Paper, 4 envelopes.
Copyright: Copyright © P.N. Furbank
Folder inscribed, 'Morphogens. Turing I'. With MS note by Gandy, ' This contains consecutive pieces which were not incorporated by Hoskin and Richards in their paper'. Includes, among other
unidentified pages:-
□ 'Outline of the development of the daisy'. Includes sections on 'Considerations governing the choice of parameter' and 'Early stages in pattern formation'. Numbered by R.O. Gandy
□ worked example of the theory in Morphogen theory of phyllotaxis II, showing how orthogonal eigenfunctions can be chosen for a linear model of diffusion between four discrete cells arranged as
three satellites around a central cell
□ '13 Stationary waves in continuous tissue and abstract space'
□ 'Forced waves'
□ 'discarded FIRSTART values'
□ '4. The equations applied to a plane'
□ 'Effect of quadratic terms'
□ Substantial set of drafts and calculations, with [? Hoskin's] separate page heading, 'Stability and approximations'
□ 'Hex stability with different values of ?'
□ 'Damping due to the J term'
□ draft of 'solvable and unsolvable problems' (Penguin Science News 31, 1954)
□ 'Lattice solutions and their stability'
□ 'Fig X': a nomogram.
Folder inscribed, 'Morphogen. Theory of Phyllotaxis. Turing 2'. With MS note by Gandy, 'This is the draft from which Hoskin and Richards prepared their paper Part 1 (which follows the draft
rather closely. Passages in light blue are I think by Hoskin)'. The Hoskin and Richards draft is catalogued at AMT/C/8. Includes:-
□ AMT's TS draft, with many AMS revisions, and numbered in another hand. AMT's AMS revisions are usually in black ink, but occasionally in pencil. Other corrections and markings appear in
another hand, in blue ink and pencil, and there is also some linking material on separately numbered pages, e.g. 26a, 57a
□ 'Naturally occurring phyllotactic patterns'
□ 'Some properties of Fibonacci numbers'
□ un-numbered MS headed, 'Stability of 2^nd order equations'
□ diagrams of florets in sunflower head, numbered and annotated by AMT
□ photograph of sunflower head
□ review by H.S.M. Coxeter of a book by L. Fejes Tóth, Lagerungen in der Ebene, auf der Kugel und im Raum (Berlin, 1953), reprinted from Bulletin of the American Mathematical Society (Vol. 60,
Mar. 1954).
Folder inscribed, 'Chemical Theory of Morphogenesis. Turing 3'. With MS note by R.O. Gandy, 'Part II of draft used by Hoskin and Richards'. Includes:-
□ TS draft, numbered in another hand. The draft is AMT's TS, with many AMS revisions in black ink. The title on p.1, 'Part II Chemical Theory of Morphogenesis', plus other corrections and
annotations, appear in another hand, in blue-black ink
□ TS draft for '4. "Noise" effects'. With MS note by Hoskin, 'Another § 4 but should be included here even though incomplete. N.H.'
□ TS draft, pages numbered in another hand, beginning, 'Effects of random disturbances', with AMS annotations by AMT in black ink, and in another hand in blue ink.
Folder inscribed, 'Turing 4'. With MS note by R.O. Gandy, 'It will be difficult, in some places impossible, to know exactly what the fragments are (exactly) about.' Inside the folder is a note by
Hoskin, 'I have not been able to fit in any of the following notes in the main articles. In many cases there are other notes, presumably superseded, on the reverse of these pages. Some of these
are not sequential.' Many of these drafts are on the back of programme sheets for Manchester University Computing Machine Laboratory, and some carry routines related to the themes of the MSS, and
perhaps intended to test them. The material is in the form of bundles clipped together, some with AMT's heading or description, and have been left as received. Includes:-
□ 'Pessimum compressed hexagonal golden lattice'
□ 'Half way compressible golden lattice'
□ 'Golden rectangular lattice'
□ 'FIRCONES. Paper theory'. Drafts, notes, and calculations
□ 'KJELL Theory'. Drafts, notes, and calculations. Page 7, labelled 'NORMAST', contains a subroutine tree including a number of the subroutines described in the folder
□ 'KJELL PREP TRACK PAIRS'. Drafts, notes and calculations, in ink and pencil, with many revisions and corrections. Some pages used on both sides, and some top and bottom
□ notes, signed B.R. [Bernard Richards] on 'Morphogenesis of cellular structure'
□ notes and calculations, with computer routine 'OUTERFIR', perhaps related to FIRCONES, and notes for modification of OUTERFIR programme
□ table of flower species used by AMT for work on morphogenesis
□ untitled notes, calculations and diagrams, many on lattices
□ 'Naturally occurring phyllotactic patterns'
□ 'Amplitude with 1 dimensional waves'
□ 'Rate of change of wavelength'
□ 'Measurements taken on some specimens'
□ TS, 'Evidence relating to the diffusion reaction theory of morphogenesis', by C.W. Wardlaw, with MS note at head, 'Rough draft. A.M. Turing'.
Back to AMT/C | Browse the archive | Back to the top | {"url":"http://www.turingarchive.org/browse.php/C/24-27","timestamp":"2014-04-17T12:32:52Z","content_type":null,"content_length":"11219","record_id":"<urn:uuid:df4e4926-cfe7-40fa-a832-50b27def238b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive Forum :: 2011 Reports :: Error in Report 2011/516: Protecting AES with Shamir's Secret Sharing Scheme by Louis Goubin and Ange Martinelli
Discussion forum for
Cryptology ePrint Archive
reports posted in
. Please put the report number in the subject.
Error in Report 2011/516: Protecting AES with Shamir's Secret Sharing Scheme by Louis Goubin and Ange Martinelli
Posted by:
(IP Logged)
Date: 26 September 2011 09:53
At the end of section 3.1 the authors claim that affine component A of the AES S-box (which is the composition of inversion in GF(256) with a GF(2)-affine map that is NOT affine over GF(256)) can be
simply implemented by applying the affine map A on the shares $y_i$ (ignoring the constant term of the affine map making it linear for simplicity's sake).
This is wrong.
As proof the authors claim that A(P) is a polynomial of degree d. A(P) can be interpreted as such a polynomial, but NOT as a polynomial of one variable over GF(256), ONLY as a polynomial in 8
variables over GF(2) when choosing a basis of GF(256) over GF(2). It is not clear at all, how to convert such a polynomial back to the form the authors need.
An easy way to see that replacing $y_i$ by $A(y_i)$ does NOT correspond to applying the affine map A to the secret value is by taking equation (1) of section 2.2:
The secret $a_0$ can be reconstructed given the shares $y_i$ by evaluating the sum $\sum_0^d y_i \cdot \beta_i$. Applying the affine map A on both sides (for simplicity, we assume again A to be
linear over GF(2)) one gets $A(a_0) = A(\sum_0^d y_i \cdot \beta_i) = \sum_0^d A(y_i \cdot \beta_i)$.
As $A$ is NOT affine/linear over GF(256), in general $A(y_i \cdot \beta_i)$ does NOT equal $A(y_i) \cdot \beta_i$ and having $A(a_0)$ equal to $\sum_0^d A(y_i) \cdot \beta_i$ would be pure coincidence.
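The non-commutation is easy to exhibit concretely (a sketch, not from the paper: `gf_mul` multiplies in GF(2^8) with the standard AES reduction polynomial 0x11B, and `aes_linear` is the GF(2)-linear part of the S-box affine map, dropping the 0x63 constant as the post does):

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def rotl8(x: int, n: int) -> int:
    """Rotate a byte left by n bits."""
    return ((x << n) | (x >> (8 - n))) & 0xFF

def aes_linear(x: int) -> int:
    """GF(2)-linear part of the AES S-box affine map (0x63 constant dropped)."""
    return x ^ rotl8(x, 1) ^ rotl8(x, 2) ^ rotl8(x, 3) ^ rotl8(x, 4)

# Sanity checks: 0x53 and 0xCA are inverses in the AES field, and the
# full affine map on the inverse reproduces the S-box value S(0x53) = 0xED.
assert gf_mul(0x53, 0xCA) == 0x01
assert aes_linear(0xCA) ^ 0x63 == 0xED

# A is linear over GF(2): A(x XOR y) = A(x) XOR A(y) for all byte pairs...
assert all(aes_linear(x ^ y) == aes_linear(x) ^ aes_linear(y)
           for x in range(256) for y in range(256))

# ...but it does NOT commute with GF(256) multiplication by a constant:
y, beta = 0x80, 0x02
assert aes_linear(gf_mul(y, beta)) != gf_mul(aes_linear(y), beta)
```

So applying A share-wise produces shares of something other than A(secret), which is exactly the objection raised above.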
Finding supply and demand curves
March 2nd 2012, 02:22 PM #1
Sep 2011
Finding supply and demand curves
Hello. Would someone please tell me a step by step solution on how to find the supply/demand curve, domain of those curves as well as how to find the market equilibrium based on the following
Re: Finding supply and demand curves
can you post the entire question?
typical questions tell you the supply and demand equations, and you solve them simultaneously to find the equilibrium price. If the equations you posted are supply and demand you can do that,
although they are both downwards sloping so i think you may have typoed somewhere (the supply equation normally slopes up).
Re: Finding supply and demand curves
You are given a pair of equations, one representing a supply curve and the other representing a demand curve, where p is the unit price for x items.
40p+x−1264 = 0
a. Identify which is the supply curve and demand curve and the appropriate domain. Put the domains in interval notation . For ∞ type infinity . For more than one interval use a U to represent a
”union”. Domain of the supply curve Domain of the demand curve
b. Determine the market equilibrium. Equilibrium: x =
c. Determine the revenue function. Revenue function R(x)=
d. Determine the revenue at market equilibrium.
Re: Finding supply and demand curves
(a) the supply curve is the upwards sloping one
(b) solve the two equations simultaneously to find equilibrium values of x and p
(c) revenue function: rearrange the demand curve into the form (x=....) , then multiply everything by p. this gives you (xp=p.....). You may recognise x*p as the total revenue.
(d) multiply the values of x and p that you obtained in part (b).
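Since the post only quotes one of the two equations (the demand curve 40p + x - 1264 = 0, i.e. p = (1264 - x)/40, which slopes down), here are the steps sketched end to end with a made-up supply curve p = x/40 + 1. Every number that depends on that supply equation is illustrative only:

```python
from fractions import Fraction

def demand_p(x):
    # From the post: 40p + x - 1264 = 0  =>  p = (1264 - x)/40 (slopes down)
    return Fraction(1264 - x, 40)

def supply_p(x):
    # Hypothetical upward-sloping supply curve, for illustration only.
    return Fraction(x, 40) + 1

# (b) Equilibrium: set demand_p(x) = supply_p(x) and solve:
#     (1264 - x)/40 = x/40 + 1  =>  1264 - x = x + 40  =>  x = 612
x_eq = Fraction(1264 - 40, 2)
p_eq = demand_p(x_eq)
assert supply_p(x_eq) == p_eq          # both curves give p = 16.3

# (c)/(d) Revenue R(x) = x * p(x); at equilibrium R = 612 * 16.3 = 9975.6
revenue = x_eq * p_eq
assert revenue == Fraction(49878, 5)
```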
Students will build new mathematical knowledge through problem solving.
3.PS.1 Explore, examine, and make observations about a social problem or mathematical situation
3.PS.2 Understand that some ways of representing a problem are more helpful than others
3.PS.3 Interpret information correctly, identify the problem, and generate possible solutions
Students will solve problems that arise in mathematics and in other contexts.
3.PS.4 Act out or model with manipulatives activities involving mathematical content from literature
3.PS.5 Formulate problems and solutions from everyday situations
3.PS.6 Translate from a picture/diagram to a numeric expression
3.PS.7 Represent problem situations in oral, written, concrete, pictorial, and graphical forms
3.PS.8 Select an appropriate representation of a problem
Students will apply and adapt a variety of appropriate strategies to solve problems.
3.PS.9 Use trial and error to solve problems
3.PS.10 Use process of elimination to solve problems
3.PS.11 Make pictures/diagrams of problems
3.PS.12 Use physical objects to model problems
3.PS.13 Work in collaboration with others to solve problems
3.PS.14 Make organized lists to solve numerical problems
3.PS.15 Make charts to solve numerical problems
3.PS.16 Analyze problems by identifying relationships
3.PS.17 Analyze problems by identifying relevant versus irrelevant information
3.PS.18 Analyze problems by observing patterns
3.PS.19 State a problem in their own words
Students will monitor and reflect on the process of mathematical problem solving.
3.PS.20 Determine what information is needed to solve a problem
3.PS.21 Discuss with peers to understand a problem situation
3.PS.22 Discuss the efficiency of different representations of a problem
3.PS.23 Verify results of a problem
3.PS.24 Recognize invalid approaches
3.PS.25 Determine whether a solution is reasonable in the context of the original problem
Students will recognize reasoning and proof as fundamental aspects of mathematics.
3.RP.1 Use representations to support mathematical ideas
3.RP.2 Determine whether a mathematical statement is true or false and explain why
Students will make and investigate mathematical conjectures.
3.RP.3 Investigate the use of knowledgeable guessing by generalizing mathematical ideas
3.RP.4 Make conjectures from a variety of representations
Students will develop and evaluate mathematical arguments and proofs.
3.RP.5 Justify general claims or conjectures, using manipulatives, models, and expressions
3.RP.6 Develop and explain an argument using oral, written, concrete, pictorial, and/or graphical forms
3.RP.7 Discuss, listen, and make comments that support or reject claims made by other students
Students will select and use various types of reasoning and methods of proof.
3.RP.8 Support an argument by trying many cases
Students will organize and consolidate their mathematical thinking through communication.
3.CM.1 Understand and explain how to organize their thought process
3.CM.2 Verbally explain their rationale for strategy selection
3.CM.3 Provide reasoning both in written and verbal form
Students will communicate their mathematical thinking coherently and clearly to peers, teachers, and others.
3.CM.4 Organize and accurately label work
3.CM.5 Share organized mathematical ideas through the manipulation of objects, drawings, pictures, charts, graphs, tables, diagrams, models, symbols, and expressions in written and verbal form
3.CM.6 Answer clarifying questions from others
Students will analyze and evaluate the mathematical thinking and strategies of others.
3.CM.7 Listen for understanding of mathematical solutions shared by other students
3.CM.8 Consider strategies used and solutions found in relation to their own work
Students will use the language of mathematics to express mathematical ideas precisely.
3.CM.9 Increase their use of mathematical vocabulary and language when communicating with others
3.CM.10 Describe objects, relationships, solutions and rationale using appropriate vocabulary
3.CM.11 Decode and comprehend mathematical visuals and symbols to construct meaning
Students will recognize and use connections among mathematical ideas.
3.CN.1 Recognize, understand, and make connections in their everyday experiences to mathematical ideas
3.CN.2 Compare and contrast mathematical ideas
3.CN.3 Connect and apply mathematical information to solve problems
Students will understand how mathematical ideas interconnect and build on one another to produce a coherent whole.
3.CN.4 Understand multiple representations and how they are related
3.CN.5 Model situations with objects and representations and be able to make observations
Students will recognize and apply mathematics in contexts outside of mathematics.
3.CN.6 Recognize the presence of mathematics in their daily lives
3.CN.7 Apply mathematics to solve problems that develop outside of mathematics
3.CN.8 Recognize and apply mathematics to other disciplines
Students will create and use representations to organize, record, and communicate mathematical ideas.
3.R.1 Use verbal and written language, physical models, drawing charts, graphs, tables, symbols, and equations as representations
3.R.2 Share mental images of mathematical ideas and understandings
3.R.3 Recognize and use external mathematical representations
3.R.4 Use standard and nonstandard representations with accuracy and detail
Students will select, apply, and translate among mathematical representations to solve problems.
3.R.5 Understand similarities and differences in representations.
3.R.6 Connect mathematical representations with problem solving
3.R.7 Construct effective representations to solve problems
Students will use representations to model and interpret physical, social, and mathematical phenomena.
3.R.8 Use mathematics to show and understand physical phenomena (e.g., estimate and represent the number of apples in a tree)
3.R.9 Use mathematics to show and understand social phenomena (e.g., determine the number of buses required for a field trip)
3.R.10 Use mathematics to show and understand mathematical phenomena (e.g., use a multiplication grid to solve odd and even number problems)
Students will understand numbers, multiple ways of representing numbers, relationships among numbers, and number systems.
Number Systems
3.N.1 Skip count by 25’s, 50’s, 100’s to 1,000
3.N.2 Read and write whole numbers to 1,000
3.N.3 Compare and order numbers to 1,000
3.N.4 Understand the place value structure of the base ten number system:
10 ones = 1 ten
10 tens = 1 hundred
10 hundreds = 1 thousand
3.N.5 Use a variety of strategies to compose and decompose three-digit numbers
3.N.6 Use and explain the commutative property of addition and multiplication
3.N.7 Use 1 as the identity element for multiplication
3.N.8 Use the zero property of multiplication
3.N.9 Understand and use the associative property of addition
3.N.10 Develop an understanding of fractions as part of a whole unit and as parts of a collection
3.N.11 Use manipulatives, visual models, and illustrations to name and represent unit fractions as part of a whole or a set of objects
3.N.12 Understand and recognize the meaning of numerator and denominator in the symbolic form of a fraction
3.N.13 Recognize fractional numbers as equal parts of a whole
3.N.14 Explore equivalent fractions (½, ⅓, ¼)
3.N.15 Compare and order unit fractions (½, ⅓, ¼) and find their approximate locations on a number line
Number Theory
3.N.16 Identify odd and even numbers
3.N.17 Develop an understanding of the properties of odd/even numbers as a result of addition or subtraction
Students will understand meanings of operations and procedures, and how they relate to one another.
3.N.18 Use a variety of strategies to add and subtract 3-digit numbers (with and without regrouping)
3.N.19 Develop fluency with single-digit multiplication facts
3.N.20 Use a variety of strategies to solve multiplication problems with factors up to 12 x 12
3.N.21 Use the area model, tables, patterns, arrays, and doubling to provide meaning for multiplication
3.N.22 Demonstrate fluency and apply single-digit division facts
3.N.23 Use tables, patterns, halving, and manipulatives to provide meaning for division
3.N.24 Develop strategies for selecting the appropriate computational and operational method in problem solving situations
Students will compute accurately and make reasonable estimates.
3.N.25 Estimate numbers up to 500
3.N.26 Recognize real world situations in which an estimate (rounding) is more appropriate
3.N.27 Check reasonableness of an answer by using estimation
Students will perform algebraic procedures accurately.
Equations and Inequalities
3.A.1 Use the symbols <, >, = (with and without the use of a number line) to compare whole numbers and unit fractions (½, ⅓, and ¼)
Students will recognize, use, and represent algebraically patterns, relations, and functions.
Patterns, Relations, and Functions
3.A.2 Describe and extend numeric (+, -) and geometric patterns
Students will use visualization and spatial reasoning to analyze characteristics and properties of geometric shapes.
3.G.1 Define and use correct terminology when referring to shapes (circle, triangle, square, rectangle, rhombus, trapezoid, and hexagon)
3.G.2 Identify congruent and similar figures
3.G.3 Name, describe, compare, and sort three-dimensional shapes: cube, cylinder, sphere, prism, and cone
3.G.4 Identify the faces on a three-dimensional shape as two-dimensional shapes
Students will apply transformations and symmetry to analyze problem solving situations.
Transformational Geometry
3.G.5 Identify and construct lines of symmetry
Students will determine what can be measured and how, using appropriate methods and formulas.
Units of Measurement
3.M.1 Select tools and units (customary) appropriate for the length measured
3.M.2 Use a ruler/yardstick to measure to the nearest standard unit (whole and ½ inches, whole feet, and whole yards)
3.M.3 Measure objects, using ounces and pounds
3.M.4 Recognize capacity as an attribute that can be measured
3.M.5 Compare capacities (e.g., Which contains more? Which contains less?)
3.M.6 Measure capacity, using cups, pints, quarts, and gallons
Students will use units to give meaning to measurements.
3.M.7 Count and represent combined coins and dollars, using currency symbols ($0.00)
3.M.8 Relate unit fractions to the face of the clock:
Whole = 60 minutes
½ = 30 minutes
¼ = 15 minutes
Students will develop strategies for estimating measurements.
3.M.9 Tell time to the minute, using digital and analog clocks
3.M.10 Select and use standard (customary) and non-standard units to estimate measurements
Students will collect, organize, display, and analyze data.
Collection of Data
3.S.1 Formulate questions about themselves and their surroundings
3.S.2 Collect data using observation and surveys, and record appropriately
Organization and Display of Data
3.S.3 Construct a frequency table to represent a collection of data
3.S.4 Identify the parts of pictographs and bar graphs
3.S.5 Display data in pictographs and bar graphs
3.S.6 State the relationships between pictographs and bar graphs
Analysis of Data
3.S.7 Read and interpret data in bar graphs and pictographs
Students will make predictions that are based upon data analysis.
Predictions from Data
3.S.8 Formulate conclusions and make predictions from graphs
Mill Creek, WA Algebra 1 Tutor
Find a Mill Creek, WA Algebra 1 Tutor
...Many students begin to have problems in pre-algebra. In my experience, those who do, have many gaps from previous years. My job is to fill in those gaps, while building their understanding of
the current material.
20 Subjects: including algebra 1, reading, calculus, statistics
...I played the Viola in my high school chamber orchestra and volunteered as a student counselor for a summer music camp to help develop fundamental musical abilities for K-8 students. I offer a
wide range of academic services and I can work with the pupil to form a way to learn that best empha...
34 Subjects: including algebra 1, chemistry, English, reading
Hi, my name is Rita, and I got my degree from George Washington University in Chemistry. Although I started my career in research, I developed a passion for teaching when I started volunteering in my kids' classrooms. I have taught for more than 15 years in school, college, private individual and group settings.
24 Subjects: including algebra 1, English, reading, geometry
...I work as a tutor at my school's Math Learning Center and have tutored students in a one-on-one setting in physics, math, and beginning chemistry. I am currently enrolled in Calculus 4,
Physics: Electricity and Magnetism, and Engineering Statics. I have experience tutoring math students at the 8th grade level all the way up to integral calculus.
14 Subjects: including algebra 1, chemistry, PSAT, mechanical engineering
...I also managed undergraduates in my laboratory and taught them scientific techniques, data analysis, 'laboratory math', and safety training. The bottom line is that I love teaching, and now
that I am graduated and pursuing a career in industry, I find that I miss it. I am confident that with a ...
14 Subjects: including algebra 1, writing, geometry, biology
SparkNotes: SAT Physics: Center of Mass
Center of Mass
When calculating trajectories and collisions, it’s convenient to treat extended bodies, such as boxes and balls, as point masses. That way, we don’t need to worry about the shape of an object, but
can still take into account its mass and trajectory. This is basically what we do with free-body diagrams. We can treat objects, and even systems, as point masses, even if they have very strange
shapes or are rotating in complex ways. We can make this simplification because there is always a point in the object or system that has the same trajectory as the object or system as a whole would
have if all its mass were concentrated in that point. That point is called the object’s or system’s center of mass.
Consider the trajectory of a diver jumping into the water. The diver’s trajectory can be broken down into the translational movement of his center of mass, and the rotation of the rest of his body
about that center of mass.
A human being’s center of mass is located somewhere around the pelvic area. We see here that, though the diver’s head and feet and arms can rotate and move gracefully in space, the center of mass in
his pelvic area follows the inevitable parabolic trajectory of a body moving under the influence of gravity. If we wanted to represent the diver as a point mass, this is the point we would choose.
Our example suggests that Newton’s Second Law can be rewritten in terms of the motion of the center of mass:
Put in this form, the Second Law states that the net force acting on a system, , is equal to the product of the total mass of the system, M, and the acceleration of the center of mass,
Similarly, the equation for linear momentum can be written in terms of the velocity of the center of mass:
You will probably never need to plug numbers into these formulas for SAT II Physics, but it’s important to understand the principle: the rules of dynamics and momentum apply to systems as a whole
just as they do to bodies.
Calculating the Center of Mass
The center of mass of an object of uniform density is the body’s geometric center. Note that the center of mass does not need to be located within the object itself. For example, the center of mass
of a donut is in the center of its hole.
For a System of Two Particles
For a collection of particles, the center of mass can be found as follows. Consider two particles of masses m1 and m2, separated by a distance d:

If you choose a coordinate system such that both particles fall on the x-axis, at positions x1 and x2, the center of mass of this system is:

x_cm = (m1*x1 + m2*x2) / (m1 + m2)
For a System in One Dimension
We can generalize this definition of the center of mass for a system of n particles on a line. Let the positions of these particles be x1, x2, ..., xn, and let M be the total mass of all n particles in the system, meaning M = m1 + m2 + ... + mn. Then the center of mass is:

x_cm = (m1*x1 + m2*x2 + ... + mn*xn) / M
For a System in Two Dimensions
Defining the center of mass for a two-dimensional system is just a matter of reducing each particle in the system to its x- and y-components. Consider a system of n particles in a random arrangement, with x-coordinates x1, x2, ..., xn and y-coordinates y1, y2, ..., yn. The x-coordinate of the center of mass is given by the equation above, while the y-coordinate of the center of mass is:

y_cm = (m1*y1 + m2*y2 + ... + mn*yn) / M
How Systems Will Be Tested on SAT II Physics
The formulas we give here for systems in one and two dimensions are general formulas to help you understand the principle by which the center of mass is determined. Rest assured that for SAT II
Physics, you’ll never have to plug in numbers for mass and position for a system of several particles. However, your understanding of center of mass may be tested in less mathematically rigorous
For instance, you may be shown a system of two or three particles and asked explicitly to determine the center of mass for the system, either mathematically or graphically. Another example, which we
treat below, is that of a system consisting of two parts, where one part moves relative to the other. In this cases, it is important to remember that the center of mass of the system as a whole
doesn’t move.
A fisherman stands at the back of a perfectly symmetrical boat of length L. The boat is at rest in the middle of a perfectly still and peaceful lake, and the fisherman has a mass 1/4 that of the boat. If the fisherman walks to the front of the boat, by how much is the boat displaced?
If you’ve ever tried to walk from one end of a small boat to the other, you may have noticed that the boat moves backward as you move forward. That’s because there are no external forces acting on
the system, so the system as a whole experiences no net force. If we recall the equation
Because the boat is symmetrical, we know that the center of mass of the boat is at its geometrical center, at x = L/2, measuring x from the back of the boat where the fisherman stands. Bearing this in mind, we can calculate the center of mass of the system containing the fisherman and the boat:

x_cm = ((M/4)(0) + M(L/2)) / (M/4 + M) = 2L/5
Now let’s calculate where the center of mass of the fisherman-boat system is relative to the boat after the fisherman has moved to the front. We know that the center of mass of the fisherman-boat
system hasn’t moved relative to the water, so its displacement with respect to the boat represents how much the boat has been displaced with respect to the water.
In the figure below, the center of mass of the boat is marked by a dot, while the center of mass of the fisherman-boat system is marked by an x.
At the front end of the boat, the fisherman is now at position L, so the center of mass of the fisherman-boat system relative to the boat is:

x_cm = ((M/4)(L) + M(L/2)) / (M/4 + M) = 3L/5
The center of mass of the system is now 3L/5 from the back of the boat. But we know the center of mass hasn't moved, which means the boat has moved backward a distance of L/5, so that the point 3L/5 is now located where the point 2L/5 was before the fisherman began to move.
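A quick numerical check of this example (an added sketch, not part of the original page; the values of L and M below are arbitrary) confirms the L/5 displacement:

```python
def com_relative_to_boat(fisherman_pos, L, M):
    """Center of mass of the fisherman-boat system, measured from the
    back of the boat; the fisherman's mass is M/4, and the boat's own
    center of mass sits at L/2."""
    m = M / 4.0
    return (m * fisherman_pos + M * (L / 2.0)) / (m + M)

L, M = 5.0, 80.0                          # arbitrary illustrative values
before = com_relative_to_boat(0.0, L, M)  # fisherman at the back: 2L/5
after = com_relative_to_boat(L, L, M)     # fisherman at the front: 3L/5

# The system's center of mass stays fixed in the water, so the boat
# slides backward by the difference between these two positions:
print(after - before)                     # -> 1.0, i.e. L/5 for L = 5
```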
Graph Partitioning and Graph Clustering

Contemporary Mathematics, Volume 588
2013; 240 pp; softcover
ISBN-10: 0-8218-9038-7
List Price: US$89
Order Code: CONM/

Graph partitioning and graph clustering are ubiquitous subtasks in many applications where graphs play an important role. Generally speaking, both techniques aim at the identification of vertex subsets with many internal and few external edges. To name only a few, problems addressed by graph partitioning and graph clustering algorithms are:

• What are the communities within an (online) social network?
• How do I speed up a numerical simulation by mapping it efficiently onto a parallel computer?
• How must components be organized on a computer chip such that they can communicate efficiently with each other?
• What are the segments of a digital image?
• Which functions are certain genes (most likely) responsible for?

The 10th DIMACS Implementation Challenge Workshop was devoted to determining realistic performance of algorithms where worst case analysis is overly pessimistic and probabilistic models are too unrealistic. Articles in the volume describe and analyze various experimental data with the goal of getting insight into realistic algorithm performance in situations where analysis fails.

This book is published in cooperation with the Center for Discrete Mathematics and Theoretical Computer Science.

Graduate students and research mathematicians interested in graph theory and combinatorial algorithms.
Information about the statistical techniques available in RevMan is addressed in section 9.4 of the Cochrane Handbook for Systematic Reviews of Interventions and you should read it now.
In order to choose the method you are going to use in your meta-analysis, the first concept to understand is the difference between a fixed effect model and a random effects model.
What does 'fixed effect' mean?
To come up with any statistical model, or method for meta-analysis, we first need to make some assumptions. It is these assumptions that form the differences between all the methods listed above.
A fixed effect model of meta-analysis is based on a mathematical assumption that every study is evaluating a common treatment effect. That means the effect of treatment, allowing for the play of
chance, was the same in all studies. Another way of explaining this is to imagine that if all the studies were infinitely large they'd give identical results.
The summary treatment effect estimate resulting from this method of meta-analysis is this one 'true' or 'fixed' treatment effect, and the confidence interval describes how uncertain we are about the
Sometimes this underlying assumption of a fixed effect meta-analysis (i.e. that diverse studies can be estimating a single effect) is too simplistic. Therefore, the alternative approaches to
meta-analysis are (i) to try to explain the variation or (ii) to use a random effects model.
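For concreteness, a fixed effect pooled estimate can be sketched with the generic inverse-variance method (this sketch and its study values are illustrative additions made here, not RevMan's actual implementation):

```python
import math

def fixed_effect_pool(estimates, ses):
    """Inverse-variance fixed effect meta-analysis.

    estimates: per-study effect estimates (e.g. log odds ratios)
    ses:       their standard errors
    Returns (pooled estimate, standard error of the pooled estimate).
    """
    weights = [1.0 / se ** 2 for se in ses]   # weight = 1 / variance
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    return pooled, math.sqrt(1.0 / total)

# Three hypothetical studies reporting log odds ratios:
pooled, se = fixed_effect_pool([-0.5, -0.2, -0.4], [0.2, 0.1, 0.3])
low, high = pooled - 1.96 * se, pooled + 1.96 * se   # 95% confidence interval
```

One sanity check on the weighting: with equal standard errors the pooled estimate reduces to the plain mean of the study estimates.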
Random effects meta-analyses (DerSimonian and Laird)
As we discussed above, fixed effect meta-analysis assumes that there is one identical true treatment effect common to every study. The random effects model of meta-analysis is an alternative approach
to meta-analysis that does not assume that a common ('fixed') treatment effect exists. The random effects model assumes that the true treatment effects in the individual studies may be different from
each other. That means there is no single number to estimate in the meta-analysis, but a distribution of numbers. The most common random effects model also assumes that these different true effects
are normally distributed. The meta-analysis therefore estimates the mean and standard deviation of the different effects.
By selecting 'random effects' in the analysis part of RevMan you can calculate an odds ratio, risk ratio or a risk difference based on this approach.
The Mantel-Haenszel approach
The Mantel-Haenszel approach was developed by Mantel and Haenszel over 40 years ago to analyse odds ratios, and has been extended by others to analyse risk ratios and risk differences. It is
unnecessary to understand all the details, but is sufficient to say that the Mantel-Haenszel method assumes a fixed effect and combines studies using a method similar to inverse variance approaches
to determine the weight given to each study.
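As a minimal illustration of that weighting (an added sketch with made-up 2x2 tables, not RevMan code), the Mantel-Haenszel pooled odds ratio can be computed directly from the study tables:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio (fixed effect).

    tables: list of 2x2 tables (a, b, c, d), where
            a = events, b = non-events in the experimental group,
            c = events, d = non-events in the control group.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two identical hypothetical trials: the pooled OR then equals each
# trial's own odds ratio, (10 * 25) / (15 * 20), about 0.83.
print(mantel_haenszel_or([(10, 15, 20, 25), (10, 15, 20, 25)]))
```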
The Peto method
The Peto method works for odds ratios only. Focus is placed on the observed number of events in the experimental intervention. We call this O for 'observed' number of events, and compare this with E,
the 'expected' number of events. Hence an alternative name for this method is the 'O - E' method. The expected number is calculated using the overall event rate in both the experimental and control
groups. Because of the way the Peto method calculates odds ratios, it is appropriate when trials have roughly equal number of participants in each group and treatment effects are small. Indeed, it
was developed for use in mega-trials in cancer and heart disease where small effects are likely, yet very important.
The Peto method is better than the other approaches at estimating odds ratios when there are lots of trials with no events in one or both arms. It is the best method to use with rare outcomes of this
The Peto method is generally less useful in Cochrane reviews, where trials are often small and some treatment effects may be large.
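The 'O - E' bookkeeping is simple enough to sketch (an illustrative addition using the textbook Peto formula and hypothetical data, not RevMan's source code):

```python
import math

def peto_log_odds_ratio(tables):
    """Peto ('O - E') pooled log odds ratio.

    tables: list of (a, n1, c, n2), where
            a = observed events among n1 experimental-arm patients,
            c = observed events among n2 control-arm patients.
    """
    total_o_minus_e = 0.0
    total_v = 0.0
    for a, n1, c, n2 in tables:
        N = n1 + n2
        m1 = a + c                    # total events across both arms
        m0 = N - m1                   # total non-events
        E = n1 * m1 / N               # expected experimental-arm events if no effect
        V = n1 * n2 * m1 * m0 / (N ** 2 * (N - 1))   # hypergeometric variance
        total_o_minus_e += a - E
        total_v += V
    return total_o_minus_e / total_v

# A hypothetical rare-event trial: 1/100 events vs 4/100 events.
log_or = peto_log_odds_ratio([(1, 100, 4, 100)])
odds_ratio = math.exp(log_or)         # well below 1: fewer events on treatment
```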
Fluid dynamics and Bernoulli's equation
Sections 10.7 - 10.9
Moving fluids
Fluid dynamics is the study of how fluids behave when they're in motion. This can get very complicated, so we'll focus on one simple case, but we should briefly mention the different categories of
fluid flow.
Fluids can flow steadily, or be turbulent. In steady flow, the fluid passing a given point maintains a steady velocity. For turbulent flow, the speed and or the direction of the flow varies. In
steady flow, the motion can be represented with streamlines showing the direction the water flows in different areas. The density of the streamlines increases as the velocity increases.
Fluids can be compressible or incompressible. This is the big difference between liquids and gases, because liquids are generally incompressible, meaning that they don't change volume much in
response to a pressure change; gases are compressible, and will change volume in response to a change in pressure.
Fluid can be viscous (pours slowly) or non-viscous (pours easily).
Fluid flow can be rotational or irrotational. Irrotational means it travels in straight lines; rotational means it swirls.
For most of the rest of the chapter, we'll focus on irrotational, incompressible, steady streamline non-viscous flow.
The equation of continuity
The equation of continuity states that for an incompressible fluid flowing in a tube of varying cross-section, the mass flow rate is the same everywhere in the tube. The mass flow rate is simply the rate at which mass flows past a given point, so it's the total mass flowing past divided by the time interval. The equation of continuity can be reduced to:

rho1 * A1 * v1 = rho2 * A2 * v2

Generally, the density stays constant and then it's simply the flow rate (Av) that is constant.
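In code, the constant-flow-rate form is a one-liner (an added illustration with made-up pipe areas):

```python
def downstream_speed(A1, v1, A2):
    """Solve A1 * v1 = A2 * v2 for v2 (incompressible flow)."""
    return A1 * v1 / A2

# A pipe narrowing from an area of 4.0 to an area of 1.0 (the area
# units cancel in the ratio): water entering at 1 m/s leaves the
# narrow section 4 times faster.
print(downstream_speed(4.0, 1.0, 1.0))   # -> 4.0 m/s
```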
Making fluids flow
There are basically two ways to make fluid flow through a pipe. One way is to tilt the pipe so the flow is downhill, in which case gravitational potential energy is transformed to kinetic energy. The second way is to make the pressure at one end of the pipe larger than the pressure at the other end. A pressure difference is like a net force, producing acceleration of the fluid.
As long as the fluid flow is steady, and the fluid is non-viscous and incompressible, the flow can be looked at from an energy perspective. This is what Bernoulli's equation does, relating the
pressure, velocity, and height of a fluid at one point to the same parameters at a second point. The equation is very useful, and can be used to explain such things as how airplanes fly, and how
baseballs curve.
Bernoulli's equation
The pressure, speed, and height (y) at two points in a steady-flowing, non-viscous, incompressible fluid are related by the equation:

P1 + (1/2) * rho * v1^2 + rho * g * y1 = P2 + (1/2) * rho * v2^2 + rho * g * y2
Some of these terms probably look familiar...the second term on each side looks something like kinetic energy, and the third term looks a lot like gravitational potential energy. If the equation was
multiplied through by the volume, the density could be replaced by mass, and the pressure could be replaced by force x distance, which is work. Looked at in that way, the equation makes sense: the
difference in pressure does work, which can be used to change the kinetic energy and/or the potential energy of the fluid.
Pressure vs. speed
Bernoulli's equation has some surprising implications. For our first look at the equation, consider a fluid flowing through a horizontal pipe. The pipe is narrower at one spot than along the rest of
the pipe. By applying the continuity equation, the velocity of the fluid is greater in the narrow section. Is the pressure higher or lower in the narrow section, where the velocity increases?
Your first inclination might be to say that where the velocity is greatest, the pressure is greatest, because if you stuck your hand in the flow where it's going fastest you'd feel a big force. The
force does not come from the pressure there, however; it comes from your hand taking momentum away from the fluid.
The pipe is horizontal, so both points are at the same height. Bernoulli's equation can be simplified in this case to:

P1 + (1/2) * rho * v1^2 = P2 + (1/2) * rho * v2^2
The kinetic energy term on the right is larger than the kinetic energy term on the left, so for the equation to balance the pressure on the right must be smaller than the pressure on the left. It is
this pressure difference, in fact, that causes the fluid to flow faster at the place where the pipe narrows.
A geyser
Consider a geyser that shoots water 25 m into the air. How fast is the water traveling when it emerges from the ground? If the water originates in a chamber 35 m below the ground, what is the
pressure there?
To figure out how fast the water is moving when it comes out of the ground, we could simply use conservation of energy, and set the potential energy of the water 25 m high equal to the kinetic energy
the water has when it comes out of the ground. Another way to do it is to apply Bernoulli's equation, which amounts to the same thing as conservation of energy. Let's do it that way, just to convince
ourselves that the methods are the same.
Bernoulli's equation says:

P1 + (1/2) * rho * v1^2 + rho * g * y1 = P2 + (1/2) * rho * v2^2 + rho * g * y2

But the pressure at the two points is the same; it's atmospheric pressure at both places. We can measure the potential energy from ground level, so the potential energy term goes away on the left side, and the kinetic energy term is zero on the right hand side. This reduces the equation to:

(1/2) * rho * v1^2 = rho * g * y2

The density cancels out, leaving:

(1/2) * v1^2 = g * y2
This is the same equation we would have found if we'd done it using the chapter 6 conservation of energy method, and canceled out the mass. Solving for velocity gives v = 22.1 m/s.
To determine the pressure 35 m below ground, which forces the water up, apply Bernoulli's equation, with point 1 being 35 m below ground, and point 2 being either at ground level, or 25 m above
ground. Let's take point 2 to be 25 m above ground, which is 60 m above the chamber where the pressurized water is.
We can take the velocity to be zero at both points (the acceleration occurs as the water rises up to ground level, coming from the difference between the chamber pressure and atmospheric pressure). The pressure on the right-hand side is atmospheric pressure, and if we measure heights from the level of the chamber, the height on the left side is zero, and on the right side is 60 m. This gives:

P1 = Patm + rho * g * (60 m) = 1.01 x 10^5 Pa + (1000 kg/m^3)(9.8 m/s^2)(60 m) = 6.9 x 10^5 Pa
Why curveballs curve
Bernoulli's equation can be used to explain why curveballs curve. Let's say the ball is thrown so it spins. As air flows over the ball, the seams of the ball cause the air to slow down a little on
one side and speed up a little on the other. The side where the air speed is higher has lower pressure, so the ball is deflected toward that side. To throw a curveball, the rotation of the ball
should be around a vertical axis.
It's a little more complicated than that, actually. Although the picture here shows nice streamline flow as the air moves left relative to the ball, in reality there is some turbulence. The air does
exert a force down on the ball in the figure above, so the ball must exert an upward force on the air. This causes air that travels below the ball in the picture to move up and fill the space left by
the ball as it moves by, which reduces drag on the ball.
Parallel Sparse Linear Algebra for Multi-core and Many-core Platforms - Parallel Solvers and Preconditioners [Archive] - nV News Forums
Partial differential equations are typically solved by means of finite difference, finite volume or finite element methods resulting in large, highly coupled, ill-conditioned and sparse (non-)linear
systems. In order to minimize the computing time we want to exploit the capabilities of modern parallel architectures. The rapid hardware shifts from single core to multi-core and many-core
processors lead to a gap in the progression of algorithms and programming environments for these platforms: the parallel models for large clusters do not fully utilize the performance capability
of the multi-core CPUs and especially of the GPUs. Software stack needs to run adequately on the next generation of computing devices in order to exploit the potential of these new systems. Moving
numerical software from one platform to another becomes an important task since every parallel device has its own programming model and language. The greatest challenge is to provide new techniques
for solving (non-)linear systems that combine scalability, portability, fine-grained parallelism and flexibility across the assortment of parallel platforms and programming models. The goal of this
thesis is to provide new fine-grained parallel algorithms embedded in advanced sparse linear algebra solvers and preconditioners on the emerging multi-core and many-core technologies.
With respect to the mathematical methods, we focus on efficient iterative linear solvers. Here, we consider two types of solvers: out-of-the-box solvers such as preconditioned Krylov subspace
solvers (e.g. CG, BiCGStab, GMRES), and problem-aware solvers such as geometric matrix-based multi-grid methods. Clearly, the majority of the solvers can be written in terms of sparse matrix-vector
and vector-vector operations which can be performed in parallel. Our aim is to provide parallel, generic and portable preconditioners which are suitable for multi-core and many-core devices. We focus
on additive (e.g. Gauss-Seidel, SOR), multiplicative (ILU factorization with or without fill-ins) and approximate inverse preconditioners. The preconditioners can also be used as smoothing schemes in
the multi-grid methods via a preconditioned defect correction step. We treat the additive splitting schemes by a multi-coloring technique to provide the necessary level of parallelism. For
controlling the fill-in entries for the ILU factorization we propose a novel method which we call the power(q)-pattern method. We prove that this algorithm produces a new matrix structure with
diagonal blocks containing only diagonal entries. This approach provides higher degrees of parallelism in comparison with the level-scheduling/topological sort algorithm. With these techniques we can
perform the forward and backward substitution of the preconditioning step in parallel. By formulating the algorithm in block-matrix form we can execute the sweeps in parallel only by performing
matrix-vector multiplications. Thus, we can express the data-parallelism in the sweeps without any specification of the underlying hardware or programming models.
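The flavor of the multi-coloring idea can be shown in miniature (an illustrative sketch written for this summary, not code from the thesis): with a red-black (two-color) ordering of a 1-D Poisson stencil, every Gauss-Seidel update within one color reads only values of the other color, so each half-sweep is trivially parallel:

```python
def red_black_gauss_seidel(b, x, sweeps=100):
    """Two-color Gauss-Seidel for -x[i-1] + 2*x[i] - x[i+1] = b[i]
    with zero boundary values.

    All updates within one color depend only on the other color's
    entries, so the inner loop over i could run fully in parallel.
    """
    n = len(b)
    for _ in range(sweeps):
        for color in (0, 1):              # red sweep, then black sweep
            for i in range(color, n, 2):
                left = x[i - 1] if i > 0 else 0.0
                right = x[i + 1] if i < n - 1 else 0.0
                x[i] = (b[i] + left + right) / 2.0
    return x

# Tiny system: for b = [1, 0, 1] the exact solution is x = [1, 1, 1].
x = red_black_gauss_seidel([1.0, 0.0, 1.0], [0.0, 0.0, 0.0])
```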
In object-oriented languages, an abstraction separates the object behavior from its implementation. Based on this abstraction, we have developed a linear algebra toolbox which supports several
platforms such as multi-core CPUs, GPUs and accelerators. The various backends (sequential, OpenMP, CUDA, OpenCL) consist of optimized and platform-specific matrix and vector routines. Using unified
interfaces across all platforms, the library allows users to build linear solvers and preconditioners without any information about the underlying hardware. With this technique, we can write our
solvers and preconditioners in a single source code for all platforms. Furthermore, we can extend the library by adding new platforms without modifying the existing solvers and preconditioners.
In our tests we consider two scenarios: preconditioned Krylov subspace methods and matrix-based multi-grid methods. We demonstrate speed ups in two directions: first, the preconditioners/smoothers
reduce the total solution time by decreasing the number of iterations, and second, the preconditioning/smoothing phase is efficiently executed in parallel providing good scalability across several
parallel architectures. We present numerical experiments and performance analysis on several platforms such as multi-core CPU and GPU devices. Furthermore, we show the viability and benefit of the
proposed preconditioning schemes and software approach.
(Dimitar Lukarski: "Parallel Sparse Linear Algebra for Multi-core and Many-core Platforms: Parallel Solvers and Preconditioners", PhD thesis, Fakultät für Mathematik, KIT Karlsruhe, Germany,
2012 [WWW (http://digbib.ubka.uni-karlsruhe.de/volltexte/1000026568)])
More... (http://gpgpu.org/2012/03/02/lukarski-phd)
Richard Hamming: An Outspoken Techno-Realist
November 16, 1998
If there were an Academy of Computation in Athens, according to the author, carved on the lintel would surely be one of Richard Hamming's most well known statements: "The purpose of computation is
insight, not numbers."
Philip J. Davis
Richard Hamming, "Mathematics on a Distant Planet." American Mathematical Monthly, Vol. 105, No. 7, August-September, 1998, pages 640-650. (Based on a luncheon talk given at the Northern California
Section meeting of the Mathematical Association of America, San Francisco, February 22, 1997.)
Having just read and been moved by this posthumous article, which could be described as "Dick Hamming's mathematical credo," I should like to use my space to comment on the article and then to take
off from one point made at its end.
Richard Hamming (1915-1998) got a BS from the University of Chicago and, in 1942, a PhD in mathematics from the University of Illinois. In 1943 he was at the Los Alamos Lab, and in 1946 he joined
Bell Telephone Labs, where he remained until his retirement, thirty years later. (Alta Vista, searching the Web for "Hamming codes," produced 787 citations, and this is surely an inadequate measure
of the importance of Hamming codes to information technology.) Thereafter, and until recently, he was a professor of computer science at the Naval Post-graduate School in Monterey, California.
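As an aside for readers who have never met them: a Hamming code adds parity bits chosen so that any single flipped bit can be located and corrected. A minimal sketch of the classic (7,4) code (this is the textbook bit layout, offered only as an illustration, not as Hamming's own presentation):

```python
# Hamming(7,4): 7-bit words, positions 1..7; positions 1, 2 and 4 hold parity bits.
def encode(data):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    c = [0, 0, data[0], 0, data[1], data[2], data[3]]  # data at positions 3,5,6,7
    c[0] = c[2] ^ c[4] ^ c[6]   # parity bit 1 covers positions 1,3,5,7
    c[1] = c[2] ^ c[5] ^ c[6]   # parity bit 2 covers positions 2,3,6,7
    c[3] = c[4] ^ c[5] ^ c[6]   # parity bit 4 covers positions 4,5,6,7
    return c

def decode(word):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(word)
    # Each failed parity check contributes its position value; the sum is the
    # 1-indexed position of the error (0 means no error detected).
    syndrome = (1 * (c[0] ^ c[2] ^ c[4] ^ c[6])
                + 2 * (c[1] ^ c[2] ^ c[5] ^ c[6])
                + 4 * (c[3] ^ c[4] ^ c[5] ^ c[6]))
    if syndrome:
        c[syndrome - 1] ^= 1    # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

sent = encode([1, 0, 1, 1])
garbled = list(sent)
garbled[4] ^= 1                 # flip one bit in transit
assert decode(garbled) == [1, 0, 1, 1]
```

Each parity bit checks the positions whose binary index contains it, so the three recomputed checks spell out the index of any single error.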
I knew Hamming only slightly, but from the few times I met him, and from various of his writings, I should call him a no-nonsense kind of guy. Sharp-witted, often acerb, direct, forceful, holding
strong opinions that often coincided and occasionally conflicted with my own, he was a technological realist. His books are as free as one can imagine from unnecessary mathematical maneuvering and
jargon, from the vain generalizations (largely in the service of "I'll show you how smart I really am") that can make mathematical writing tremendously obscure.
I found his philosophy of mathematics quite agreeable, but, as it happens, I diverged on many points. Though it may be a disservice to reduce to a few lines what a person has concluded over a
lifetime, I would sum up his view this way: If a piece of mathematics works in the real world, why bother inquiring why (in the philosophical sense)? And if a piece of mathematics isn't useful in the
real world, you have the option of ignoring it completely.
There is no reason to rely on my words, however, when Hamming himself was a great phrase-maker, as in the following well-known instance:
"The purpose of computation is insight, not numbers."
Placed as an epigraph for his book Numerical Methods for Scientists and Engineers (1962), this short sentence has been repeated over and again by hundreds of authors. Discussions of it can be found
on the Web. It is so well known by now that if there were an Academy of Computation in Athens, it would surely be carved on the lintel.
But what does it mean? Like all good slogans (cogito ergo sum, for example), it requires explication. I have the impression that when Hamming himself was asked to explain what he meant, he wiggled
around and came up with changing interpretations and constant modifications.
More Hammingisms:
"For more than forty years I have claimed that if whether an airplane would fly or not would depend on whether some function that arose in its design was Lebesgue but not Riemann integrable, then I
would not fly in it. Would you? Does Nature recognize the difference? I doubt it!"
"Hermite said 'We are not the master of Mathematics, we are the servant.' I have often said the opposite: 'We are the master of Mathematics not the servant; it shall do as we want it to do.' In
truth, I seem to believe in a blend of the two remarks; at times we are driven and at times we are in control of mathematics."
"I know that the great Hilbert said 'We will not be driven out of the paradise Cantor has created for us,' and I reply 'I see no reason for walking in!'"
"Do not dismiss the outsider too abruptly as is generally done by the insiders."
"Mathematics is not merely an idle art form, it is an essential part of our society."
"In science and mathematics we do not appeal to authority, but rather you are responsible for what you believe."
These quotations all call for discussion, and some have indeed been discussed ever since the Greeks invented philosophy. The last two, for example, when properly interpreted, may come as close to
expressing social concern as Hamming was able to bring himself to do publicly.
For the remainder of this article, I would like to comment on yet another quotation:
"With the enormous growth of results at well over 100,000 (new?) theorems every year . . . the chance of a new piece of pure mathematics being spotted by you and also being at hand when you need it,
and not have to be recreated when needed, is increasingly small. . . . Regeneration is increasingly easier than retrieval."
Now here is a thought that diffuses rapidly into the related but separate questions of research strategies, of originality, plagiarism, rights of intellectual property, royalties.
When I was a graduate student I used to hear horror stories of PhD candidates who, in the final throes of their candidacy, were informed that others had anticipated their major results: Your result,
Candidate Doe, can be found in a 1923 paper in the ABC Journal of Mathematics; so go back to the drawing board, and this time come up with something original.
Were these stories based on true occurrences, or were they fairy tales put forward to frighten aspirants into proper standards of accomplishment?
Not so many years after my PhD was awarded, I was working alongside old Professor PDQ, a mathematician of great international reputation. He chanced to read one of my published papers and afterward
pulled me by my ear (so to speak) over to the library, hauled down a certain journal dated 1933, blew off the dust, and pointed out an article of his that contained the same result. "You should have
known about it," he cautioned me.
As a graduate student, I had relied on my thesis adviser to certify the originality of my results. But now I was on my own, and for a few days I trembled in my boots. On study, it turned out that
while my theorem and Professor PDQ's were located in the same specific area of mathematics, my hypotheses as well as the methods I employed were quite different. Saved from professional ignominy!
Much less traumatic was the following experience of several months ago. I gave a talk in which I mentioned theorem T, commonly known as Y's theorem. After my talk, Professor Z came up to me and said,
"You know, T is referred to as Y's theorem, but R got it a generation before Y."
I was inclined to reply to Professor Z, "Are you absolutely sure that there wasn't an M who got it before R?" But I refrained because I didn't care to lose Z's friendship.
Fairly recently I worked up a theorem in a very popular area. I wanted to check its originality. I looked in a few appropriate books: I couldn't find it. I e-mailed a few authorities: They'd never
heard of it. I went on the computerized mathematical databases and inquired. I was confronted with the "zero or infinity" phenomenon. When I searched in the specific area, the computer came up with
more papers than I could deal with in ten lifetimes, half of which may have contained errors. When I added a few qualifiers, I came up with zero hits. Maybe I should have posted a notice on the Web:
"I am claiming priority on such and such a theorem. If you have any reason to dispute this, come forward or forever hold your peace."
Back to Hamming's quote. We seem to be in a period of transition, in which claims of priority and originality, on the one hand, make a difference and, on the other hand, being in some cases
impossible to validate but easy to invalidate, ought not to make a difference.
One final point. A prospective reader, seeing the title "Mathematics on a Distant Planet," may be led to think that Hamming has written a piece of science fiction. Not at all. He has set up
hypothetical mathematicians on a distant planet in order to speculate on which portions of the assumptions, definitions, modes of reasoning, and assignment of values that characterize our
mathematical world are universal and which might not be.
In this spirit, I wonder how the scientists of the distant planet are handling the dilemmas of information overload and intellectual property.
Philip J. Davis, professor emeritus of applied mathematics at Brown University, is an independent writer, scholar, and lecturer. He lives in Providence, Rhode Island.
Colton, Simon - Department of Computing, Imperial College, London
• On The Notion Of Interestingness In Automated Mathematical Discovery
• The Use of Classification in Automated Mathematical Concept Formation Simon Colton, Stephen Cresswell and Alan Bundy
• Artificial Intelligence and Scientific Creativity Simon Colton and Graham Steel
• A Gridbased Application of Machine Learning to Model Generation
• Automated Reformulation of Constraint Satisfaction Problems John Charnley
• Evaluating Machine Creativity Alison Pease, Daniel Winterstein, Simon Colton
• Lakatos-style Reasoning Alison Pease, Simon Colton, Alan Smaill, John Lee
• Automatic Invention of Integer Sequences Simon Colton, Alan Bundy
• Employing Theory Formation to Guide Proof Planning
• Automatic Identification of Mathematical Concepts Simon Colton simonco@dai.ed.ac.uk
• The NumbersWithNames Program Simon Colton
• Automated Discovery in Pure Mathematics Simon Colton, Alan Bundy and Toby Walsh,
• HR A System for Machine Discovery in Finite Algebras Alan Bundy, Simon Colton and Toby Walsh
• Agent Based Cooperative Theory Formation in Pure Mathematics Simon Colton, Alan Bundy and Toby Walsh
• Lakatos-style Automated Theorem Modification Simon Colton
• Automatic Generation of Implied Constraints: Initial Progress Simon Colton, Lyndon Drake, Alan M. Frisch, Ian Miguel and Toby Walsh
• Automated Puzzle Generation Simon Colton
• Mathematics -a new Domain for Datamining? Simon Colton
• The Use of Classification in Automated Mathematical Concept Formation Simon Colton, Stephen Cresswell and Alan Bundy
• Assessing Exploratory Theory Formation Programs Simon Colton
• HR Automatic Concept Formation in Finite Algebras Simon Colton
• Automated Theory Formation for Tutoring Tasks in Pure Mathematics
• Semi-Automated Discovery in Zariski Spaces (A Proposal)
• Automatic Invention of Integer Sequences Simon Colton, Alan Bundy
• Article Submitted to Journal of Symbolic Computation Automated Conjecture Making in Number
• A Grand Challenge of Theorem Discovery Geoff Sutcliffe, Yi Gao, and Simon Colton
• Lakatos-style Automated Theorem Modification Simon Colton and Alison Pease
• The HR Program for Theorem Generation Simon Colton ?
• Automatic Construction and Verification of Isotopy Invariants
• Experiments in Metatheory Formation Simon Colton
• Automatic Generation of Classification Theorems for Finite Algebras
• Making Conjectures about Maple Functions Simon Colton
• Integrating HR and tptp2X into MathWeb to Compare Automated Theorem Provers
• Managing Automatically Formed Mathematical Theories
• Cross-domain Mathematical Concept Formation Graham Steel, Simon Colton, Alan Bundy and Toby Walsh
• Automated Theory Formation in Bioinformatics Simon Colton
• Agent Based Cooperative Theory Formation in Pure Mathematics Simon Colton, Alan Bundy and Toby Walsh
• On The Notion Of Interestingness In Automated Mathematical Discovery
• Journal of Integer Sequences, Vol. 2 (1999), Article 99.1.2
• ILP for Mathematical Discovery Simon Colton and Stephen Muggleton
• Automated Puzzle Generation Simon Colton
• On The Notion Of Interestingness In Automated Mathematical Discovery
• A Grid-based Application of Machine Learning to Model Generation
• Creative Logic Programming Simon Colton
• Automated Theory Formation in Bioinformatics Simon Colton
• HR -A System for Machine Discovery in Finite Algebras Alan Bundy, Simon Colton
• The Effect of Input Knowledge on Creativity Simon Colton, Alison Pease, Graeme Ritchie
• Semantic Negotiation: Modelling Ambiguity in Dialogue Alison Pease Simon Colton
• Creative Logic Programming Simon Colton
• Theory Formation Applied to Discovery, Learning and Problem Solving
• Cross-domain Mathematical Concept Formation Graham Steel, Simon Colton, Alan Bundy and Toby Walsh
• Assessing Exploratory Theory Formation Programs Simon Colton
• Towards a General Framework for Program Generation in Creative Domains Department of Computing
• The Homer System Simon Colton 1 and Sophie Huczynska 2
• Lakatos-style Methods in Automated Reasoning Simon Colton and Alison Pease
• Automated `Plugging and Chugging' Simon Colton
• The Effect of Input Knowledge on Creativity Simon Colton, Alison Pease, Graeme Ritchie
• Automated Theorem Discovery: A Future Direction for Theorem Provers
• Automatic Concept Formation in Pure Mathematics Simon Colton and Alan Bundy
• Automatic Generation of Implied Constraints: Initial Progress Simon Colton, Lyndon Drake, Alan M. Frisch, Ian Miguel and Toby Walsh
• DOI 10.1007/s10994-006-8259-x Mathematical applications of inductive logic
• Constraint Generation via Automated Theory Formation
• Automatic Generation of Benchmark Problems for Automated Theorem Proving Systems
• New Directions in Automated Conjecture Making Alan Bundy , Simon Colton , Sophie Huczynska and Roy McCasland
• Using Model Generation in Automated Concept Formation Pedro Torres
• Machine Learning Case Splits for Theorem Proving Simon Colton, Ferdinand Hoermann, Geoff Sutcliffe and Alison Pease
• The Homer System Simon Colton1 and Sophie Huczynska2
• Evaluating Machine Creativity Alison Pease, Daniel Winterstein, Simon Colton
• Automatic Generation of Implied Constraints John Charnley, Simon Colton1 and Ian Miguel2
• A Multiagent Approach to Modelling Interaction in Human Mathematical Reasoning
• Experiments in Meta-theory Formation Simon Colton
• Linköping Electronic Articles in Computer and Information Science
Generate a random matrix given a condition number
January 1st 2010, 12:56 AM #1
If I am given the condition number, how can I generate a random matrix with that condition number? The matrix can have any number of rows and columns. Please help...
The "condition number" of a matrix, A, is defined as [tex]||A||||A^{-1}|| For symmetric matrices, this is equal to $\frac{\lambda_{max}}{\lambda_{min}}$ where $\lambda_{max}$ is the largest
eigenvalue and $\lambda_{min}$ is the smallest.
So the simplest thing to do is to take a diagonal matrix with appropriate numbers on the diagonal. For example, if the condition number is to be 3/2, a matrix with that condition number is
$\begin{bmatrix}3 & 0 \\ 0 & 2\end{bmatrix}$.
If you wanted 3 by 3 or 4 by 4 matrices just put numbers between those on the diagonal. To get a non-diagonal matrix (so it looks like you've done more work!) conjugate your diagonal matrix D by some invertible matrix A, i.e. form $A^{-1}DA$. (Note: an orthogonal A preserves the condition number exactly; a general invertible A preserves the eigenvalues but can change the condition number.)
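The recipe above can be automated. A sketch in Python/NumPy (my own, not from the thread): put the desired singular values on a diagonal and multiply by random orthogonal matrices from QR decompositions; orthogonal factors leave singular values untouched, so the target condition number is hit exactly in the 2-norm.

```python
import numpy as np

def random_matrix_with_cond(n, kappa, seed=None):
    """Random n x n matrix whose 2-norm condition number is exactly kappa (>= 1)."""
    rng = np.random.default_rng(seed)
    # Orthogonal factors from QR of Gaussian matrices; they preserve
    # singular values, and hence the condition number.
    u, _ = np.linalg.qr(rng.standard_normal((n, n)))
    v, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.linspace(kappa, 1.0, n)   # singular values from kappa down to 1
    return u @ np.diag(s) @ v.T

a = random_matrix_with_cond(5, 100.0, seed=0)
print(np.linalg.cond(a))   # ~100, up to rounding
```

For the symmetric case in the thread, use the same orthogonal matrix on both sides (u @ np.diag(s) @ u.T) with positive eigenvalues in s.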
Jonathan Haskel's Blog
There’s an old joke about two men in a forest trying to get away from a tiger. One puts on an old pair of running shoes. “You can’t go very fast in those” says the other. “I only have to go faster
than you” says the first.
Regrettably, much of life is being faster than the next guy. So here’s a lesson in how to get ahead, drawn from the report into the West Coast Mainline bidding fiasco, which I just saw here.
As you remember, the bidding process collapsed in ignominy and will have to be rerun following the admission of errors in the process by the Department for Transport (DfT) who conducted the bids. The
inquiry into the fiasco has a preliminary report out. I read the following lessons.
1. The bidding works like this. The winner of the bid gets all the revenues from the line they operate. But, they have to pay a per year fee to the DfT to operate the line. The DfT recognizes,
correctly, that revenues might rise or fall depending on general economic activity which cannot be foreseen. Thus there is an adjustment formula that adjusts the fee in line with GDP, in particular
reducing it if GDP falls. So you can win the bid by offering a large fee, knowing that the fee will fall if unexpected bad times come along.
2. At the same time, the DfT does not want the franchisees to go bust. So you can also win the bid by offering to hold a lump sum of money, which is payable to the DfT in bad times.
3. What’s key for the bidders is to know whether they can win by offering:
a. to hold a large lump sum, but bid a low fee
b. bid a large fee, but hold a small lump sum
4. The DfT have an economic model, which tells them the answer, i.e. if a bidder decides to adjust the lump sum, how much they can vary the fee. Here's what went wrong.
a. they were unwilling to show the model to the bidders who were just given a number telling them the trade off between lump sum and fee. As it turned out, that number was not based on the model at
all, but on some other procedure kept secret from the bidders.
b. The model worked out payments in real terms, in 2010 prices. But the DfT thought the model results were in nominal terms. So bidders were given a figure which they were told was an adjustment in nominal terms (see para 5.14.3) when it was actually in real terms. This gave them the wrong price. It matters since the bids last for five years: if you are told to make an adjustment of £X in 2015, and that figure is mistakenly in real terms, it can drastically understate the correct nominal figure (by the compounded price change over 5 years). As the report says (5.15), "Had they been converted into the nominal terms, which they should have been, significantly increased [adjustments] would have been required"
So here’s how to get ahead by knowing more than the next guy.
1. A nominal number is in pounds, e.g. my salary was £400 per week last year and £420 this year, a rise of 20/400=5%
2. A price index tells you how much the average basket of goods costs from year to year e.g. the basket costs £200 last year and £210 this year, a rise of 10/200=5%.
3. Since wages and prices have risen by the same amount, the "real" wage is unchanged. A number in real terms is the nominal number divided by the price index, so an index of the real wage (in last year's prices) is £400 last year and £400 this year.
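The distinction is easy to quantify. A rough sketch (all figures invented except the five-year horizon mentioned above): converting a real-terms adjustment into the nominal amount actually due means compounding the price change over the intervening years.

```python
def nominal_from_real(real_amount, inflation, years):
    """Amount in base-year (real) prices converted to the nominal amount
    due `years` later, assuming a constant annual inflation rate."""
    return real_amount * (1 + inflation) ** years

# Invented illustration: a GBP 100m adjustment stated in 2010 prices but due
# in 2015, with prices rising 3% a year.
real_2010 = 100.0
nominal_2015 = nominal_from_real(real_2010, 0.03, 5)
print(round(nominal_2015, 1))   # 115.9: quoting "100" as nominal understates by ~16%

# The salary example above: GBP 420 this year, deflated by the price index, is GBP 400.
assert abs(420 / (210 / 200) - 400) < 1e-9
```

Which is exactly the DfT's slip: a real-terms model output read as if it were already nominal understates the true payment by the compounded price change.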
1. Distinctive capabilities.
A marvellous example from the FT today using Kay's capability analysis on the failure of the London black cab manufacturers Manganese Bronze. Their two capabilities: regulation and reputation, ran
out. Interesting articles here, pointing out for example:
"2007 was the last year the company turned a profit. While Manganese Bronze had updated its cabs over the years, they were still being built on a basic structure that traced its heritage back to the
first black taxi, dating from 1948. Despite its outdated product, the company enjoyed a protected market because of a rule that London taxis have a 28ft turning circle.
But by 2008, the TX4 faced serious competition from Mercedes-Benz’s Vito, which met the 28ft circle rule but was more fuel-efficient and cheaper to run. In just over four years, the car has
captured 38 per cent of the London market"
2. Adjustment along many margins
In our supply and demand models, price and quantity adjust to restore equilibrium. But what if they, due to regulation for example, cannot adjust and are not at the equilibrium point? The lesson of economics is that in such a market there is an opportunity to trade. So it's likely that there will be some other form of adjustment. Here's a great example from Tim Taylor on rent control.
The Economist thinks so: larger cities would raise learning, communication of ideas etc.
The SERC is not so sure:
"City size and diversity, however, provide an economic payoff: a critical mass of people, resources and ideas help produce agglomeration economies
(Glaeser 2011). Increasing that critical mass helps raise productivity, therefore: the consensus
from recent studies is that doubling employment in a city raises average labour productivity by
around six percent, although these effects are much more important for some types of economic
activity(Melo, Graham et al. 2009). They are much more important in precisely those sectors of
economic activity in which the British economy is specialised and our most prosperous cities – the Londons, Cambridges and Oxfords – are particularly specialised: skill intensive traded services.
Although urban density is strongly correlated with the effective or functional size of a city there is no evidence that density itself is a cause of these observed agglomeration economies. It
seems more likely that density is the outcome of agglomeration economies as both households and firms bid up the price of land to benefit from them thus causing development to be at higher
density. Indeed Cheshire and Magrini (2009) find that once all other factors including city size are controlled for, higher density is associated with slower urban economic growth."
And they have interesting examples:
Two examples illustrate the difficulty of separating out density effects. 1) Building CrossRail, for example, will likely reduce the density of the London region as a whole as people take advantage of quicker travel to move out to cheaper land. But it will still increase the effective size of London since with easier travel the costs of productive interactions between economic agents will fall and their potential number will increase. 2) Take two cities with identical populations and borders: building more houses will increase density. But it is then hard to attribute any subsequent economic changes to higher density, since population size has also gone up.
Nick Crafts gave the RES policy lecture on 18 October. He's right on almost everything: here's what he has to say:
If fiscal consolidation continues and radical changes to monetary policy are ruled out, it is mainly ‘supply-side’ reform that can restart UK growth without doing longer-term damage to the
economy. Among other things, that means repairing infrastructure, improving education, reforming taxation and tackling the restrictive planning system.
And here's the answer:
But one area that could deliver both short-term stimulus and long-term efficiency is private house-building – as happened in the 1930s recovery from recession. Today’s planning restrictions mean
that the stock of houses is three million below and real prices are 35% above what they would be if market forces operated freely.
Slides of Nick Crafts’ RES Policy Lecture.
The fantastic Tim Taylor has two very good posts of relevance to my Economics of Innovation students.
1. Can Agricultural Productivity Keep Growing?
The reference is to the Fuglie et al paper, which I have referred to in the slides. They have the following fascinating graph, showing that agricultural productivity has been more and more dominated by TFP growth in recent years. And that TFP is strongly correlated with country R&D. So sub-Saharan Africa, which needs the TFP, might not get it if it's not doing any R&D.
2. Patents Tipping Too Far: Three Examples
His post explores the following.
The basic economics of patents as taught in every intro econ class is a balancing act: On one side, patents provide an incentive for innovation, by giving innovators a temporary monopoly over the
use of their invention. This temporary monopoly rewards innovation by allowing the inventor to charge higher prices, and thus the tradeoff is that consumers temporarily pay more--although
consumers of course also benefit from the existence of the innovation. Like any balancing act, patents can tip too far in one direction or the other. On one side, patents can fail to provide sufficient incentive (that is, large enough profits) for inventors. But on the other side, patent protection that is too long or too rigid can lock in profits for early innovators for an
extended period, both at the long-term expense of consumers and also in a way that can cut off possibilities for future innovators.
Economists think about things in ways that involve words like “Value”, “incentives” and “utilities”. Such words are often used by other subjects, so here is an attempt to explain WIHIH (what in hell
is happening). Mostly it is, I think, using the same word for different ideas.
1. Value
This is a source of endless confusion. Here’s the Economist’s notion.
a. Think of a demand curve. That curve traces out the willingness to pay for an additional item of the good. If consumers are different, it’s the cumulative willingness to pay (WTP) starting with the
person who values the good most, Nick, Harriet and Ali in the coffee example. If consumers are the same, it's the WTP arising from diminishing marginal utility of the good.
b. Now suppose the price is set, by supply or law or something else: say £2.00. At a price of £2, the Economist says, that gives the willingness to pay of the marginal consumers. It does not give the
WTP of the inframarginal consumers, indeed they earn some consumer surplus precisely because they are not marginal and it’s the marginal consumer’s WTP that the price reveals.
c. So what can we say about value in the marketplace? To economists, all we can say is the value placed on the good by the marginal consumer, here £2. All the other consumers who are buying get
consumer surplus. We might, if we knew consumer surplus, be able to work out the total WTP as well as the marginal WTP. And if we wanted to call it that, we might be able to call total WTP total
value. But in Economics we tend not to do that; we reserve value for “value added” which is an accounting term (revenue less cost of intermediate goods used up in production). Even if we did,
economists don’t tend to talk about value in the market place since we would have to be able to count up consumer surplus to measure it. At best, prices reveal the marginal customer’s WTP, which you
might call how much the marginal customer values the good. We need to know a lot more if we want to talk about every customer's WTP or, if you so define it, their value.
d. The emphasis on prices as revealing marginal values is, I think, helpful, since it avoids the endless debate on what is “value”. Diamonds are very expensive and (in Western countries) water is
very cheap. But diamonds are useless baubles whereas water is vital for life. We can then argue about whether diamonds are more or less "valuable" than water, but the Economics approach cuts through this. All we need realize is that if the price of diamonds is high, that suggests the WTP, or "value" if you like, of an extra diamond to the marginal diamond buyer is very high. Whereas with a lot of water around, the WTP, or "value", of an extra bottle of water is very low. I think we can agree on marginal values and we can just avoid having to agree on total values.
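To make points (a) to (c) concrete, here is a sketch with invented willingness-to-pay numbers for Nick, Harriet and Ali (the text does not give the actual figures):

```python
# Invented willingness-to-pay figures for the coffee example (not in the text).
wtp = {"Nick": 5.0, "Harriet": 3.0, "Ali": 2.0}
price = 2.0

buyers = {name: v for name, v in wtp.items() if v >= price}
marginal_wtp = min(buyers.values())                  # all the price reveals
consumer_surplus = sum(v - price for v in buyers.values())

print(len(buyers), marginal_wtp, consumer_surplus)   # 3 2.0 4.0
```

The £2 price pins down only the marginal valuation; the £4 of surplus (total WTP of £10 against £6 of spending) is invisible in the price, which is the point of (c) and of the diamonds-and-water contrast.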
2. Prices as incentives
Some object to prices being an incentive. There are a number of objections.
a. Consumers/firms don’t care about prices. Economists: fine, that just says elasticities are low.
A related notion is that you need, as a business, to emphasize quality, customer loyalty etc. and this somehow is more profitable. The Economists' answer is that such things don’t come for free.
Getting loyal customers presumably means advertising, R&D etc., all of which cost. It might shift out the demand curve, or make it less elastic, but it's still not a guarantee that it will be
profitable: that depends on the discounted benefits to incurring such costs. Economists are, IMHO, good at looking at correlations between, say advertising spend and future revenue. But the Black Box
of just what neurones in the brain are activated by advertising is not well studied (for more on intrinsic motivation, see below).
b. When Consumers/firms use prices to incentivise others, this might lead to things that are not intended. Example. The firm rewards its workers for working quickly, i.e. sets a good "price", in this case a wage, for fast work. But then workers produce poor quality. Economists' answer: this certainly means that economic actors respond to prices! All that is happening here is that they have a
menu of actions of dimension N, but the menu of prices they face is less than N, say M.
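A toy illustration of the N-versus-M point (all payoffs invented): give a worker a fixed effort budget to split between speed, which is priced, and quality, which is not, and the priced dimension soaks up everything.

```python
def best_split(pay_per_speed, pay_per_quality, budget=10):
    """Pay-maximising split of a fixed effort budget between speed and
    quality, in a toy model with invented linear payoffs."""
    options = [(s, budget - s) for s in range(budget + 1)]
    return max(options, key=lambda sq: pay_per_speed * sq[0] + pay_per_quality * sq[1])

print(best_split(pay_per_speed=2.0, pay_per_quality=0.0))   # (10, 0): quality collapses
print(best_split(pay_per_speed=2.0, pay_per_quality=3.0))   # (0, 10): effort follows prices
```

With only one of the two actions priced, the worker's optimum puts the entire budget into the paid dimension, which is the fast-but-shoddy outcome in the anecdote above.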
c. A more subtle criticism. Consumers/firms/economic actors respond to “intrinsic motivation” e.g. fairness, responsibility. If prices are then used, that will “crowd out” such responsibility.
Example: setting rewards for good performance at work is not a good idea because it offends people’s sense of responsibility and makes them perform worse (a reward for good performance improves
behaviour due to extrinsic motivation, but worsens it since it weakens intrinsic motivation, perhaps because it signals that the employer does not trust the worker or if the worker tries due to a
concern for social status which is undermined when it is paid for). This is a key idea in Michael Sandel’s recent work and is discussed in a very interesting recent article by Gneezy, Meier,
Rey-Biel, Journal of Economic Perspectives, Fall 2011.
The famous example is Titmuss (1970), "who argued that paying people to donate blood broke established social norms about voluntary contribution and could result in a reduction of the fraction of people who wish to donate".
This is very controversial in education, with many programs now starting to pay High School students money to attend school, do exams, hand in work etc.
So what do we know about this area? My take on the piece is that it is a bit horses for courses. In some contexts intrinsic incentives are so weak that financial rewards are very helpful. “The
current evidence on the effects of financial incentives in education indicates moderate short-run positive effects on some subgroups of students, at
subgroups of students, at indicates moderate short-run positive effects on some subgroups of students, at least while the incentives are in place.”
But the article also points out that incentive programmes must be clearly designed to be sure that such crowding out does not occur. The conclusions are very interesting:
"When explicit incentives seek to change behavior in areas like education, contributions to public goods, and forming habits, a potential conflict arises between the direct extrinsic effect of the incentives and how these incentives can crowd out the intrinsic motivations in the short run and the long run. In education, such incentives seem to have moderate success when the incentives are well-specified and well-targeted ("read these books" rather than "read books")…
In encouraging contributions to public goods, one must be very careful when designing the incentives to prevent adverse changes in social norms, image concerns, or trust.
Incentives to modify behavior can in some medical cases be cost effective. The medical and health economics literature intensely investigates whether, and when, prevention is cheaper than treatment (for example, Russell, 1986). The question is economic rather than moral: certain prevention activities can cost more than they save. For example, cholesterol-reducing drugs can
cost hundreds of dollars a month; simple exercising could, in some borderline cases, replace these drugs.
Our message is that when economists discuss incentives, they should broaden their focus. The effects of incentives depend on how they are designed, the form in which they are given (especially monetary or nonmonetary), how they interact with intrinsic motivations, and what happens after they are withdrawn."
The ONS released a new piece today on the UK Productivity Puzzle. Some details:
1. current UK output per hour compared with other recessions is a disaster (output per worker does not look much better)
2. That makes output per worker 10% lower than if it had continued on its pre-recession path:
3. The international comparison for the UK is not good either:
Chart 6: International comparisons of output per hour productivity growth since 2008 Q1, seasonally adjusted
As the graph shows
1. everyone is worse than the US
2. the UK fell initially by less, but then fell back in the 2011 period, so that it is now at the bottom of the pile relative to 2008Q1 at least.
As Peter Pattinson said at the seminar launching this document, the early productivity period is not so much of a puzzle, since barring the US, everyone seemed the same. What is very mysterious is
the post 2011 performance.
Could the GDP data be wrong? Walton and Brown look at this. Two crucial graphs:
1. mean revisions have got smaller since the 1960s and are negative in recent years (chart 3)
2. since 2005 the revision size is smaller than in the 1990s at 0.3% pa to growth. So a revision, if upwards would add on average this. Not enough to restore the 10% gap, but might be enough to
restore the UK relative to other non-US in the chart just above.
I wrote about the recession here for the UK. In the US, Krugman and Taylor are having an argument:
1. Taylor says the US recovery is very weak relative to that from other financial crises, and blames policy.
2. Krugman says this is politics and the US recession is just what you would expect from a financial crisis.
Reinhart and Rogoff have this graph:
There are two questions:
Q1. is the US recovery, starting from the bottom of the cycle, currently slower than the 1930s?
Looking at the slope of the 2007 line starting from the bottom of the V, it grows more slowly than the wide dotted line, but about the same as or faster than the small dotted line. As Jim Hess has
correctly pointed out to me, correcting an earlier mistake, the wide dotted line excludes the 1930s, and the narrow one includes it. So the 1930s must have been a slower recovery, and hence the current
recovery is faster. Score one for Krugman.
Q2. are US recessions, starting from the peak, longer to get back to the peak when they are financial crises? Yes they are, according to Reinhart/Rogoff comparing different recessions according to
type. Score one for them and Krugman.
So it's two different questions being compared.
Ed Balls says let's use prospective 4G licence fees to fund a stamp duty holiday. On this idea, we know what will happen, since we have some evidence from when it was tried before: Chris Giles points
us to the paper from HMRC that evaluates the effect of a temporary relief on stamp duty for first-time buyers, March 2010-March 2012.
As my Imperial MBA students will know, the effect of a tax reduction in a Supply and Demand model depends on the elasticities of demand and supply. Suppose as seems very likely due to planning, the
supply curve is very inelastic. Then tax falls have minimal impact on quantity. If demand curves are quite elastic then any falls in taxes translate into minimal falls in prices for buyers, but rises
in prices for sellers. Finally, falls in taxes are expensive for the state, since with little extra quantity the gain in revenue from the additional marginal quantity is small, but the loss from the
inframarginal is large.
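The elasticity logic above is easy to see in a toy linear supply-and-demand model. The numbers below are made up purely for illustration (they are not the HMRC figures): the demand slope b is large (fairly elastic demand) and the supply slope d is small (very inelastic supply), so a tax cut mostly raises the price sellers receive.

```python
def equilibrium(t, a=200.0, b=10.0, c=50.0, d=0.5):
    """Toy linear market with a per-unit tax t paid by buyers.

    Demand:  Qd = a - b * Pb   (b large  -> fairly elastic demand)
    Supply:  Qs = c + d * Ps   (d small  -> very inelastic supply)
    Buyers pay Pb = Ps + t; setting Qd = Qs gives the seller price.
    """
    ps = (a - c - b * t) / (b + d)       # price sellers receive
    return ps, ps + t, c + d * ps        # (Ps, Pb, quantity)

with_tax, no_tax = equilibrium(1.0), equilibrium(0.0)
print(no_tax[0] - with_tax[0])   # seller price rise from the cut: ~0.95 of the tax
print(with_tax[1] - no_tax[1])   # buyer price fall:               ~0.05 of the tax
print(no_tax[2] - with_tax[2])   # extra quantity traded:          ~0.48 units
```

With these made-up slopes, sellers capture b/(b+d) ≈ 95% of any tax change — the textbook incidence result the paragraph is invoking, and the pattern the HMRC evaluation found in practice.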
So we can use some very simple economics to make some sharp predictions.
And the findings in the real world from the study? They show Economics right on the money. Here's an extract from the summary.
The effect on quantity is tiny:
The number of additional transactions is therefore estimated to be closer to 0‐1 per cent (1,000 additional transactions)
The effect on buyer prices is tiny:
Considering the impact on prices, ... This implies that the majority of the 1 per cent tax relief was capitalised in higher prices. It is equivalent to stating that the post‐tax
outlay for buying property is estimated to have decreased by 0.3‐0.5 percentage points. The relief therefore appears to have had a small impact on reducing the
overall outlay of buying a first home.
The costs to the state are huge:
The cost of the tax relief was around £150 million in 2010/11..[with].. 1,000 additional transactions across the 13‐month period April 2010‐April 2011, this represents an estimated cost to the
Exchequer of approximately £160,000 per additional transaction in tax relief. | {"url":"http://haskelecon.blogspot.co.uk/2012_10_01_archive.html","timestamp":"2014-04-20T13:30:06Z","content_type":null,"content_length":"213276","record_id":"<urn:uuid:ee04637c-db69-4b5b-aac4-2072bb146a84>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
The annual March Madness Half Marathon in Cary took place this morning. This is both one of Chicagoland's 'early races' to start the season as well as the classic Boston preparation due to the hilly
course. I have now run this consecutively for six y...
Converting Siemens MOSAIC
Siemens multi-slice EPI data may be collected as a "mosaic" image; i.e., all slices acquired in a single TR (repetition time) of a functional MRI run are stored in a single DICOM file. The images are
stored in an MxN array of images. The function create3D() will try to guess the number of images embedded within the single DICOM...
R: Add vertical line to a plot
If you have a plot open and want to add a vertical line to it:

abline(v=20)  # Add vertical line at x=20
The distribution of rho…
There was a post here about obtaining non-standard p-values for testing the correlation coefficient. The R library SuppDists deals with this problem efficiently.

library(SuppDists)
plot(function(x) dPearson(x, N=23, rho=0.7), -1, 1, ylim=c(0,10), ylab="density")
plot(function(x) dPearson(x, N=23, rho=0), -1, 1, add=TRUE, col="steelblue")
plot(function(x) dPearson(x, N=23, rho=-.2), -1, 1, add=TRUE, col="green")
plot(function(x) dPearson(x, N=23, rho=.9), -1, 1, add=TRUE, col="red"); grid()
legend("topleft", col=c("black","steelblue","red","green"), lty=1,
       legend=c("rho=0.7","rho=0","rho=-.2","rho=.9"))

This is how it looks. Now, let's construct a table of critical values for some significance levels:

q = c(.025, .05, .075, .1, .15, .2)
xtabs(qPearson(p=q, N=23, rho
My Experience at ACM Data Mining Camp #DMcamp
My parents and I made plans to visit San Jose and Saratoga on my grandmother’s birthday, March 19, since that is where she grew up. I randomly saw someone tweet about the ACM Data Mining Camp
unconference that happened to be the next day, March 20, only a couple of miles from our hotel in Santa Clara. This was...
R: Geometric mean
gm(x)

But this requires package heR.Misc, so you might as well just use:

exp(mean(log(x)))
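The same one-liner translates directly to other languages. A minimal Python sketch of the identity — the geometric mean is the exponential of the mean of the logs:

```python
from math import exp, log

def geometric_mean(xs):
    """exp(mean(log(x))) -- equivalent to (x1 * ... * xn) ** (1/n)."""
    return exp(sum(log(x) for x in xs) / len(xs))

print(geometric_mean([2, 8]))  # ≈ 4.0, since sqrt(2 * 8) = 4
```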
Returns on Easter week and one week after
Inspired by a CXO Group report, I did a rerun of the same strategy on my data. Easter's dates can be found at Wikipedia. Overall, my results are similar to CXO Group's results. In the graph below, I
plotted daily returns on Easter week (Monday to Thursday) from 1982 to 2009. I prefer this way of showing
R annoyances
Readers returning to our blog will know that Win-Vector LLC is fairly “pro-R.” You can take that to mean “in favor of R” or “professionally using R” (both statements are true). Some days we really
don't feel that way. Consider the following snippet of R code where we create a list with a single element.
R: remove all objects from the current workspace
rm(list = ls()) | {"url":"http://www.r-bloggers.com/2010/03/page/8/","timestamp":"2014-04-20T13:35:42Z","content_type":null,"content_length":"35918","record_id":"<urn:uuid:35ca86fa-6bc8-4dd2-800b-f4cfc8b4efe7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
JHR's home plate
John Rickert
Associate Professor of Mathematics
Office: G-215A, Crapo Hall
Phone: (812) 877-8473
Department of Mathematics
Rose-Hulman Institute of Technology
Terre Haute, IN 47803
e-mail: john.rickert@rose-hulman.edu
My schedule is on-line.         I have pages for several of my classes.
Professional Background
I am an associate professor of mathematics at Rose-Hulman. I grew up in Wauwatosa, Wisconsin, graduated from Hawthorne Jr. High, from Wauwatosa West High School in 1980, got my B.A. in Mathematics
and Astronomy-Physics from The University of Wisconsin in 1984 and my Ph.D. from the University of Michigan in 1990.
My research interest is in diophantine equations, most recently dealing with Pell-type equations, and in restricted partition functions.
Coarse Information
During the Fall term I taught MA112 and MA351.
During the Winter term I taught MA113 and MA366.
During the Spring term I am teaching MA375 and MA479.
I spent academic year 1997-8 on sabbatical at Penn State University (where I had an office in McAllister Hall with windows!), did some work on partition functions, and used this as a basis to get some
ideas for the NSF-REU program that I worked in during the summers of 1999 and 2001.
During the summer of 2013 I was at RSI- 2013. My e-mail address there is jrickert@mit.edu.
Other Interesting? Sites
Events Schedule for Rose-Hulman students
Contest 2011-2012 2012-2013
Alfred R. Schmidt September 10, 2013 September, 2014
Freshman Mathematics Competition
Virginia Tech Math Contest October 26, 2013 October, 2014
High School Mathematics Contest Saturday, November 9, 2013 Saturday, November 8, 2014
Putnam Competition Saturday, December 7, 2013 Saturday, December 6, 2014
Mathematical Contest in Modeling February, 2014 February, 2015
Undergraduate Mathematics Conference April 11-12, 2014 March, 2015
IN MAA meeting, Spring Indiana MAA meeting,
Indiana College Math Competition IP - Fort Wayne IPFW
April 4, 2013 March, 2015
Other departments
Physics and Optical Engineering Mechanical Humanities and Social Science Environmental Eng. Management Electrical/computer Computer Science and Software Civil Chemistry Chemical Army ROTC Applied
Biology and Biomedical | {"url":"http://www.rose-hulman.edu/~rickert/","timestamp":"2014-04-18T13:07:06Z","content_type":null,"content_length":"7595","record_id":"<urn:uuid:c42b5b2e-60fc-43b1-9376-ffbbfd06d4b8>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Almost the greatest, but not quite...
Re: Almost the greatest, but not quite...
Hi Fruityloop;
How about the guy who came up with F=.99ma, or y = 16.01t^1.98?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=18888","timestamp":"2014-04-18T03:30:07Z","content_type":null,"content_length":"10773","record_id":"<urn:uuid:cabc2a7a-248c-4978-b9ce-d90e44d74449>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
function ccpdb = ccperdegest(G,bins,nsamples)
%CCPERDEGEST Estimate of mean clustering coefficient per degree bin.
% CCPD = CCPERDEGEST(G,B,N) computes the per-degree-bin clustering
% coefficient, i.e., CCPD(k) is the mean clustering coefficient for nodes
% in degree bin k. The graph G is assumed to be undirected, unweighted,
% and to contain no self edges. This is *not* checked by the code. The
% vector B gives the bin boundaries, see HISTC. The computation is
% approximate, using wedge sampling.
% NOTE: This is an interface to the MEX function provided by
% ccperdegest_mex.c.
% See also CCPERDEG, BINDATA.
% Tamara G. Kolda, Ali Pinar, and others, FEASTPACK v1.1, Sandia National
% Laboratories, SAND2013-4136W, http://www.sandia.gov/~tgkolda/feastpack/,
% January 2014
Copyright (c) 2014, Sandia National Laboratories All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's
National Nuclear Security Administration under contract DE-AC04-94AL85000.
ccpdb = ccperdegest_mex(G,bins,nsamples); | {"url":"http://www.sandia.gov/~tgkolda/feastpack/ccperdegest.html","timestamp":"2014-04-18T18:11:52Z","content_type":null,"content_length":"9950","record_id":"<urn:uuid:6b600b3e-e141-4547-849a-e8524a631430>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
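For readers without MATLAB, the wedge-sampling idea behind this routine can be illustrated in a few lines of Python. This is a simplified sketch of a global (not per-degree-bin) clustering-coefficient estimator — it is not the FEASTPACK algorithm itself: a wedge u-v-w is drawn uniformly at random and checked for closure.

```python
import random

def estimate_clustering(adj, nsamples, seed=0):
    """Wedge-sampling estimate of the *global* clustering coefficient.

    adj maps node -> set of neighbours (undirected, no self loops).
    A wedge is a length-2 path u-v-w centred at v; it is "closed" when
    (u, w) is also an edge.  The global clustering coefficient is
    (# closed wedges) / (# wedges), estimated here by uniform sampling.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    # Number of wedges centred at v is d*(d-1)/2, so sample centres
    # proportionally, then pick two distinct neighbours uniformly.
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    closed = 0
    for _ in range(nsamples):
        v = rng.choices(nodes, weights=weights)[0]
        u, w = rng.sample(sorted(adj[v]), 2)
        closed += w in adj[u]
    return closed / nsamples

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # K3: every wedge is closed
print(estimate_clustering(triangle, 1000))      # -> 1.0
```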
What's the value of the digit 5?
I don't understand the question or how to solve the problem.
Is the 5 somewhere in a number, i.e. 2,564? In this case the value of 5 is actually 500 since it is in the hundreds place. | {"url":"http://www.wyzant.com/resources/answers/15053/whats_the_value_of_the_digit_5","timestamp":"2014-04-20T14:12:34Z","content_type":null,"content_length":"35534","record_id":"<urn:uuid:24292a56-53cb-4f06-86c7-b40c9ccbefcb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
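The tutor's answer can be checked mechanically. A small Python sketch (the function name is my own, not part of the answer) that returns the place value of a digit, scanning from the ones place:

```python
def place_value(n, digit):
    """Place value of the first occurrence of `digit` in `n`,
    scanning from the right; None if the digit is absent."""
    for power, d in enumerate(reversed(str(n))):
        if d == str(digit):
            return digit * 10 ** power
    return None

print(place_value(2564, 5))  # -> 500  (the 5 sits in the hundreds place)
```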
Model and Algorithm for Efficient Verification of High-Assurance Properties of Real-Time Systems
March/April 2003 (vol. 15 no. 2)
pp. 405-422
Jeffrey J.P. Tsai, Eric Y.T. Juan, Avinash Sahay, "Model and Algorithm for Efficient Verification of High-Assurance Properties of Real-Time Systems," IEEE Transactions on Knowledge and Data
Engineering, vol. 15, no. 2, pp. 405-422, March/April, 2003.
Abstract—In this paper, we present a new compositional verification methodology for efficiently verifying high-assurance properties such as reachability and deadlock freedom of real-time systems. In
this methodology, each component of real-time systems is initially specified as a timed automaton and it communicates with other components via synchronous and/or asynchronous communication channels.
Then, each component is analyzed by a generation of its state-space graph which is formalized as a new state-space representation model called Multiset Labeled Transition Systems (MLTSs). Afterward,
the state spaces of the components are hierarchically composed and simplified through a composition algorithm and a set of condensation rules, respectively, to get a condensed state space of the
system. The simplified state spaces preserve equivalence with respect to deadlock and reachable states. Such equivalence is assured by our reduction theories called IOT-failure equivalence and
IOT-state equivalence. To show the performance of our methodology, we developed a verification tool RT-IOTA and carried out experiments on some benchmarks such as CSMA/CD protocol, a rail-road
crossing, an alternating bit-protocol, etc. Specifically, we look at the time taken to generate the state-space, the size of the state space, and the amount of reduction achieved by our condensation
rules. The results demonstrate the strength of our new technique in dealing with the state-explosion problem.
Index Terms:
Composition verification, state space explosion, timed automata, labeled transition systems, IO-traces, IOT-failures, IOT-states, state space condensation.
3.6. Operations
Operations and Operators
Operations are “basic” functions with their own syntax.
They have a special operator (a sign or a word) that takes the place of a function name. Unary operators, which take one argument, are followed by their argument, and binary operators are
surrounded by their two arguments.
Here are some examples:
>>> 'GTnnAC' + 'GAATTC'
'GTnnACGAATTC'
>>> 'GAATTC' * 3
'GAATTCGAATTCGAATTC'
>>> 'n' in 'GTnnAC'
True
This is only a simpler way of writing functions provided by Python, because humans are in general more familiar with this syntax, which is closely related to formal mathematical language.
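The equivalence between operators and functions can be made explicit with the standard-library operator module, which exposes each operator as a plain function:

```python
import operator

# Each operator corresponds to a function in the operator module.
assert 'GTnnAC' + 'GAATTC' == operator.add('GTnnAC', 'GAATTC')
assert 'GAATTC' * 3 == operator.mul('GAATTC', 3)
assert ('n' in 'GTnnAC') == operator.contains('GTnnAC', 'n')
print("all three operator/function pairs agree")
```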
Probability that a Random Integer...
Date: 7/18/96 at 2:42:49
From: don mcmahon&sheri anderson
Subject: Probability that a Random Integer...
I started trying to figure out the probability that an integer chosen randomly would be an integer multiple of a given integer "n"
chosen randomly would be an integer multiple of a given integer "n"
(or perhaps of a prime "p")...and then the probability that it would
be a multiple of, say, the integers 2...7, and so on. I think I got
stuck on trying to figure out how to formulate the "or" probabilities
and add things up properly. I think my idea was to generate a sequence
P(n) where P(n) would be the probability a random integer would be a
multiple of at least one of the integers from 2 up to n. I think I got
tangled up in the lattice of integers and then had to go settle a
civil war in the sandbox in the backyard. Can you head me in the
right direction?
Sheri Anderson
Date: 7/18/96 at 19:58:55
From: Doctor Tom
Subject: Re: Probability that Random Integer...
Well, to be precise (and that's what we mathematicians have to be!),
I have to know exactly what you mean by "an integer chosen randomly".
If you're talking about the entire infinite set of integers, there is
no way to do this without some sort of a distribution function over
the integers, and there is no such function that gives an "equal
probability" for all integers.
To make things precise, here's what I'll assume you mean: If M is
a very large integer, and we randomly (with equal probability)
choose an integer between 0 and M, what's the probability that it
is divisible by n? For any fixed M, you can work this out, and then
if we take the limit of these probabilities as M goes to infinity,
I think that will be what you want.
So if your question just concerns a single number n, the answer is
that the limiting probability is 1/n. For large M, roughly 1/n of
the integers less than M are divisible by n, and the error is smaller
and smaller as M gets larger and larger.
If you ask, "What's the probability that a number is divisible by
n and m?", I think you begin to see the problem - for example, if
n = 2 and m = 4, it's 1/4, since anything divisible by 4 is
automatically divisible by 2.
If you ask the general question, "What's the probability that a number
is divisible by all of the numbers n1, n2, n3, ..., nx?" the answer is
"if z = the least common multiple of n1, n2, ..., nx, then the
probability is 1/z".
The least common multiple of a set of integers is the smallest integer
that all of them divide. For example, the least common multiple
of 4 and 6 is 12. Of 7 and 11 is 77. And so on.
To find the least common multiple of a set of numbers, factor all of
them into prime factors. For each prime that appears anywhere in the
list, find the number in which the highest power of that prime
appears, and include that power of the prime in a product.
The grand product is the LCM (least common multiple).
For example, to find the LCM of 24, 125, 32, 100, and 444, where
"*" indicates multiplication, and "^" exponentiation:
24 = 2^3*3^1
125 = 5^3
32 = 2^5
100 = 2^2*5^2
444 = 2^2*3^1*37^1
LCM = 2^5*3^1*5^3*37^1
I hope it's obvious why this works, and I won't bother to multiply
out the number in my example.
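Multiplying it out anyway, as a sanity check: in code the LCM is easiest to get from the identity lcm(a, b) = a*b / gcd(a, b), folded over the list, which agrees with the prime-factorization recipe above.

```python
from functools import reduce
from math import gcd

def lcm(*nums):
    """Least common multiple, via lcm(a, b) = a * b // gcd(a, b)."""
    return reduce(lambda a, b: a * b // gcd(a, b), nums)

z = lcm(24, 125, 32, 100, 444)
print(z)   # 444000 = 2^5 * 3^1 * 5^3 * 37^1
# The probability that a random integer is divisible by all five is 1/z.
```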
-Doctor Tom, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/56540.html","timestamp":"2014-04-17T19:33:20Z","content_type":null,"content_length":"8273","record_id":"<urn:uuid:beda2905-4d73-465a-aa54-e32feaff2aaa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 4.21. This figure shows the schematic symbols for six types of field effect transistors. Each of the 6 descriptions has been written to stand alone.
Number 1, the N channel junction FET. There is a circle. A line crosses the perimeter from the left outside. It would go through the center if it were that long, but it isn't. It ends at about one
third the way across the circle in an arrowhead. The arrowhead points to the right. There is a vertical line touching the point of the arrowhead. This line is almost as long as it can be without
touching the perimeter. At about two thirds the way across the circle a line begins below and outside and goes up crossing the perimeter. At about one third the way up it turns to the left and ends
when it touches the vertical line below its center. It does not touch at the lower end of the line but a little way from the bottom. Another line starts outside and above the circle and directly
above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the height and ends when it touches the vertical line above its center. This
line does not touch at the end of the vertical line but a little way from the top end. The connection on the left is the gate, the one below is the source and the one above is the drain. End of
number 1.
Number 2, the P channel junction FET. There is a circle. A line crosses the perimeter from the left outside. It would go through the center if it were that long, but it isn't. It ends at about one
third the way across the circle. There is an arrowhead on this line with its point just inside the perimeter. The arrowhead points to the left. There is a vertical line touching the line that comes
in from the left. This line is almost as long as it can be without touching the perimeter. At about two thirds the way across the circle a line begins below and outside and goes up crossing the
perimeter. At about one third the way up it turns to the left and ends when it touches the vertical line below its center. It does not touch at the lower end of the line but a little way from the
bottom. Another line starts outside and above the circle and directly above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the
height and ends when it touches the vertical line above its center. This line does not touch at the end of the vertical line but a little way from the top end. The connection on the left is the gate,
the one below is the source and the one above is the drain. End of number 2.
Number 3, the N channel depletion mode MOSFET. There is a circle. Imagine a capital letter T turned on its side with the foot of the T to the left. The staff of the T crosses the perimeter of the
circle with the head inside the circle. There is a vertical line to the right of the T but not touching it. This line is slightly to the left of the center of the circle and almost as long as it can
be without touching the inside of the circle. At about two thirds the way across the circle a line begins below and outside and goes up crossing the perimeter. At about one third the way up it turns
to the left and ends when it touches the vertical line below its center. It does not touch at the lower end of the line but a little way from the bottom. Another line starts outside and above the
circle and directly above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the height and ends when it touches the vertical line
above its center. This line does not touch at the end of the vertical line but a little way from the top end. Another line starts at the center of the vertical line and goes to the right until it is
even with the lines coming up from below and down from above. It turns down and joins the line coming up from below. The horizontal part of this line has an arrowhead pointing to the left. The
connection on the left is the gate, the one below is the source and the one above is the drain. End of number 3.
Number 4, the P channel depletion mode MOSFET. There is a circle. Imagine a capital letter T turned on its side with the foot of the T to the left. The staff of the T crosses the perimeter of the
circle with the head inside the circle. There is a vertical line to the right of the T but not touching it. This line is slightly to the left of the center of the circle and almost as long as it can
be without touching the inside of the circle. At about two thirds the way across the circle a line begins below and outside and goes up crossing the perimeter. At about one third the way up it turns
to the left and ends when it touches the vertical line below its center. It does not touch at the lower end of the line but a little way from the bottom. Another line starts outside and above the
circle and directly above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the height and ends when it touches the vertical line
above its center. This line does not touch at the end of the vertical line but a little way from the top end. Another line starts at the center of the vertical line and goes to the right until it is
even with the lines coming up from below and down from above. It turns down and joins the line coming up from below. The horizontal part of this line has an arrowhead pointing to the right. The
connection on the left is the gate, the one below is the source and the one above is the drain. End of number 4.
Number 5, the N channel enhancement mode MOSFET. There is a circle. Imagine a capital letter T turned on its side with the foot of the T to the left. The staff of the T crosses the perimeter of the
circle with the head inside the circle. There is a dashed vertical line to the right of the T but not touching it. This line consists of 3 dashes, one at the top, one in the center, and one at the
bottom. This line is slightly to the left of the center of the circle and almost as long as it can be without touching the inside of the circle. At about two thirds the way across the circle a line
begins below and outside and goes up crossing the perimeter. At about one third the way up it turns to the left and ends when it touches the lower dash of the vertical line. Another line starts
outside and above the circle and directly above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the height and ends when it
touches the upper dash of the vertical line. Another line starts at the center of the center dash of the vertical line and goes to the right until it is even with the lines coming up from below and
down from above. It turns down and joins the line coming up from below. The horizontal part of this line has an arrowhead pointing to the left. The connection on the left is the gate, the one below
is the source and the one above is the drain. End of number 5.
Number 6, the P channel enhancement mode MOSFET. There is a circle. Imagine a capital letter T turned on its side with the foot of the T to the left. The staff of the T crosses the perimeter of the
circle with the head inside the circle. There is a dashed vertical line to the right of the T but not touching it. This line consists of 3 dashes, one at the top, one in the center, and one at the
bottom. This line is slightly to the left of the center of the circle and almost as long as it can be without touching the inside of the circle. At about two thirds the way across the circle a line
begins below and outside and goes up crossing the perimeter. At about one third the way up it turns to the left and ends when it touches the lower dash of the vertical line. Another line starts
outside and above the circle and directly above the one that started from below. It goes down and crosses the perimeter. It turns to the left at about two thirds of the height and ends when it
touches the upper dash of the vertical line. Another line starts at the center of the center dash of the vertical line and goes to the right until it is even with the lines coming up from below and
down from above. It turns down and joins the line coming up from below. The horizontal part of this line has an arrowhead pointing to the right. The connection on the left is the gate, the one below
is the source and the one above is the drain. End of number 6.
End verbal description.
On emergence in gauge theories at the 't Hooft limit
Bouatta, Nazim and Butterfield, Jeremy (2012) On emergence in gauge theories at the 't Hooft limit. [Preprint]
Quantum field theories are notoriously difficult to understand, physically as well as philosophically. The aim of this paper is to contribute to a better conceptual understanding of gauge quantum
field theories, such as quantum chromodynamics, by discussing a famous physical limit, the 't Hooft limit, in which the theory concerned often simplifies. The idea of the limit is that the number N
of colours (or charges) goes to infinity. The simplifications that can happen in this limit, and that we will consider, are: (i) the theory's Feynman diagrams can be drawn on a plane without lines
intersecting (called `planarity'); and (ii) the theory, or a sector of it, becomes integrable, and indeed corresponds to a well-studied system, viz. a spin chain. Planarity is important because it
shows how a quantum field theory can exhibit extended, in particular string-like, structures; in some cases, this gives a connection with string theory, and thus with gravity. Previous philosophical
literature about how one theory (or a sector, or regime, of a theory) might be emergent from, and-or reduced to, another one has tended to emphasize cases, such as occur in statistical mechanics,
where the system before the limit has finitely many degrees of freedom. But here, our quantum field theories, including those on the way to the 't Hooft limit, will have infinitely many degrees of
freedom. Nevertheless, we will show how a recent schema by Butterfield and taxonomy by Norton apply to the quantum field theories we consider; and we will classify three physical properties of our
theories in these terms. These properties are planarity and integrability, as in (i) and (ii) above; and the behaviour of the beta-function reflecting, for example, asymptotic freedom. Our discussion
of these properties, especially the beta-function, will also relate to recent philosophical debate about the propriety of assessing quantum field theories, whose rigorous existence is not yet proven.
Palo Alto ACT Tutor
Find a Palo Alto ACT Tutor
...I love teaching students at this age, where their critical thinking and reasoning skills have matured to the point where problem-solving can be exciting rather than intimidating. I have a
large library of enrichment material for those students who need/want additional challenge, and I also have ...
10 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have also taught and tutored many students in this area of Discrete Mathematics. I am able to understand their difficulties and find a way to help them grasp the concepts. I have an MS degree
in Computer Engineering from Case Western Reserve University.
23 Subjects: including ACT Math, calculus, geometry, physics
...I have Master's degrees in Mathematics (Stanford) and Economics (University of Santa Clara). I enjoy working with students and their teachers to ensure maximum benefit from our mutual
investment in each student's success. I have a Bachelor's degree in mathematics from the University of Santa Cl...
22 Subjects: including ACT Math, calculus, statistics, geometry
...I have taught test preparation for the CBEST for two years. I am CBEST certified, having passed on the first try with no problems on any of the three sections. The CBEST was designed to test
basic proficiency in math, reading, and writing.
32 Subjects: including ACT Math, reading, English, ADD/ADHD
...Because I have owned and operated college preparatory schools for several years, I have been able to accept and help students with ADD and ADHD. I have found that, with one-on-one instruction
and clear explanations, students with ADD and ADHD are able to achieve in line with their innate intelli...
35 Subjects: including ACT Math, chemistry, English, reading
Nearby Cities With ACT Tutor
Atherton ACT Tutors
East Palo Alto, CA ACT Tutors
Fremont, CA ACT Tutors
Los Altos ACT Tutors
Los Altos Hills, CA ACT Tutors
Menlo Park ACT Tutors
Mountain View, CA ACT Tutors
Redwood City ACT Tutors
San Jose, CA ACT Tutors
San Mateo, CA ACT Tutors
Santa Clara, CA ACT Tutors
Stanford, CA ACT Tutors
Sunnyvale, CA ACT Tutors
Union City, CA ACT Tutors
Woodside, CA ACT Tutors
2. Set Operations
Theorem 2.1: (Commutativity of Unions) Let A and B be two sets. Then A ∪ B = B ∪ A.
Theorem 2.2: (Commutativity of Intersections) Let A and B be two sets. Then A ∩ B = B ∩ A.
Theorem 2.3: (Associativity of Unions) Let A, B, and C be three sets. Then (A ∪ B) ∪ C = A ∪ (B ∪ C).
Theorem 2.4: (Associativity of Intersections) Let A, B, and C be three sets. Then (A ∩ B) ∩ C = A ∩ (B ∩ C).
Theorem 2.5: (Distributivity of Intersections across Unions) Let A, B, and C be three sets. Then A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
Theorem 2.6: (Distributivity of Unions across Intersections) Let A, B, and C be three sets. Then A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
Theorem 2.7: (Transitivity of Inclusion) Let A, B, and C be three sets. If A ⊆ B and B ⊆ C, then A ⊆ C.
Theorem 2.8: (The Distributivity of Cartesian Products Across Unions). Let A, B, and C be three sets. Then A × (B ∪ C) = (A × B) ∪ (A × C).
Theorem 2.9: (The Distributivity of Cartesian Products Across Intersections). Let A, B, and C be three sets. Then A × (B ∩ C) = (A × B) ∩ (A × C).
Theorem 2.10: Let A and B be two sets. Then
Theorem 2.11: Let A and B be two sets. Then
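The identities named in Theorems 2.1–2.9 are easy to spot-check mechanically. A short Python sketch using built-in set operations (the sample sets are arbitrary, chosen only for illustration):

```python
# Spot-check of Theorems 2.1-2.9 on small sample sets.
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

assert A | B == B | A                        # 2.1 commutativity of unions
assert A & B == B & A                        # 2.2 commutativity of intersections
assert (A | B) | C == A | (B | C)            # 2.3 associativity of unions
assert (A & B) & C == A & (B & C)            # 2.4 associativity of intersections
assert A & (B | C) == (A & B) | (A & C)      # 2.5 intersections across unions
assert A | (B & C) == (A | B) & (A | C)      # 2.6 unions across intersections
assert ({1} <= {1, 2}) and ({1, 2} <= A) and ({1} <= A)  # 2.7 transitivity

# 2.8 / 2.9: Cartesian products distribute across unions and intersections.
def product(X, Y):
    return {(x, y) for x in X for y in Y}

assert product(A, B | C) == product(A, B) | product(A, C)
assert product(A, B & C) == product(A, B) & product(A, C)
print("all identities hold on these samples")
```

Of course, passing on sample sets is not a proof; it only illustrates the statements.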
Area of a parallelogram
From Latin: area - "level ground, an open space,"
The number of square units it takes to completely fill a parallelogram.
Formula: Base × Altitude
Area formula
The area of a parallelogram is given by the formula
Area = b × a
where
b is the length of any base
a is the corresponding altitude
See Derivation of the formula.
Recall that any of the four sides can be chosen as the base. You must use the altitude that goes with the base you choose. The altitude (or height) of a parallelogram is the perpendicular distance
from the base to the opposite side (which may have to be extended). In the figure above, the altitude corresponding to the base CD is shown.
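As a cross-check of the base × altitude rule, the area can also be computed directly from vertex coordinates. A short Python sketch (the sample vertices are made up, chosen so the base is 5 and the altitude is 3):

```python
def parallelogram_area(ax, ay, bx, by, dx, dy):
    """Area of the parallelogram with vertex A = (ax, ay) and the two
    vertices B and D adjacent to A, computed as the magnitude of the
    cross product of the side vectors AB and AD."""
    return abs((bx - ax) * (dy - ay) - (dx - ax) * (by - ay))

# Base AB of length 5 along the x-axis, altitude 3:
# A(0, 0), B(5, 0), D(2, 3)  ->  area = base * altitude = 5 * 3 = 15
area = parallelogram_area(0, 0, 5, 0, 2, 3)
print(area)  # 15
```

Note that the cross-product form works even when no side is axis-aligned, which is exactly why the base × altitude formula holds for slanted parallelograms.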
(C) 2009 Copyright Math Open Reference. All rights reserved
Mplus Discussion >> Modeling interactions
Dana Weiser posted on Monday, February 06, 2012 - 1:57 pm
I am trying to model interactions between an observed continuous variable (age) and 3 latent variables. I have read your Modeling Interactions Webnotes from March 2003. In the article, it states that
such an interaction is not supported by conventional SEM and you refer to additional readings focusing on the Joreskog-Yang approach, 2SLS, and the full-information maximum-likelihood approach. My
question is, does MPlus allow me to utilize such models, and if so, how? Thank you so much!
Linda K. Muthen posted on Monday, February 06, 2012 - 2:00 pm
Mplus takes the full-information maximum likelihood approach using the XWITH option.
Dana Weiser posted on Monday, February 06, 2012 - 3:19 pm
Thank you! So would I just model the interaction normally?
For a simplified version, say Z is the observed continuous variable:
MODEL: X BY x1-x3;
Y BY y1-y3;
INT | X XWITH Z;
X WITH Z;
Y ON X Z INT;
Linda K. Muthen posted on Monday, February 06, 2012 - 4:01 pm
Yes. I would not include the interaction between the covariates. The model is estimated conditioned on them and when you mention their means, variances, or covariances they are treated as dependent
variables and distributional assumptions are made about them. Model estimation does not fix them at zero.
Dana Weiser posted on Monday, February 06, 2012 - 4:27 pm
Great. Thank you!
Student 09 posted on Friday, February 10, 2012 - 1:20 pm
when testing interaction effects, variables are often centred around their mean.
When the variables are factors (= latent variables, e.g. f1 by x1 x2 x3), is centering still necessary?
Dana posted on Friday, February 10, 2012 - 4:38 pm
When testing an interaction between two latent variables you do not need to center.
Bengt O. Muthen posted on Friday, February 10, 2012 - 4:45 pm
Typically, latent variables have mean zero so centering is already implicit. Although not necessarily in growth and multi-group models.
FOM: chess contest
Harvey Friedman friedman at math.ohio-state.edu
Thu Mar 19 22:00:04 EST 1998
Shipman 11:43PM 3/19/98 writes:
>Harvey, I would not regard the intuition that the initial position is a
>draw as
>anything close to either a chess "theorem" or a chess "axiom". However, the
>intuition of a Grandmaster about White's winning when Black's Queen is removed
>is strongly justified in a technical sense--any GM (indeed any master) could
>win this every time within 200 moves against all of the world's computers and
>GMs collectively ... Further, the winning
>technique can be explained in terms of chess strategy. If you are willing to
>accept as "axioms" statements that certain chess advantages suffice to win,
>then everything from there on is just like math.
Firstly, what do you think of the statement that "the original chess position is
a win when black removes his queen" is not decidable
in ZFC using at most 2^100 symbols, even if ZFC is augmented in standard
ways to support abbreviations (which is required in order to make ZFC
capable of actual formalization)? I think it would be interesting to try to
give an upper bound on the number of symbols needed in ZFC with
abbreviations, to prove the original chess position is a win when black
removes his queen. In fact, it would be interesting to give an upper bound
on the number of symbols needed in ZFC with abbreviations to prove the
original position is a draw assuming that it is a draw; or needed to decide
the game value of any given chess position.
Naturally, the best one can expect to do with this now is to relate such
bounds to a bound on the size of the game tree from these initial
positions, or any given position, where the size of the game tree is
measured in some appropriate way for this kind of problem.
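To make "deciding the game value of a given position" concrete, here is a minimal minimax sketch over a toy game tree (the tree and the payoffs are invented purely for illustration; real chess game trees are astronomically larger, which is the whole point of the size bounds discussed above):

```python
def game_value(node, maximizing=True):
    """Exact game value of a position by exhaustive minimax search.
    A node is either a number (terminal payoff for the maximizer)
    or a list of child nodes (positions reachable in one move)."""
    if isinstance(node, (int, float)):
        return node
    values = [game_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy tree: the maximizer has two moves; the opponent minimizes below.
tree = [[3, 5], [2, 9]]
print(game_value(tree))  # branch minima are 3 and 2, so the value is 3
```

The brute-force search visits the whole tree, so its cost grows with the game-tree size; relating that size to the length of a formal proof of the game value is the question raised in the text.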
Do the upper bounds that one can get differ by much between the full
initial chess position and the initial chess position without black's
queen? Or the sizes of the game trees? Even here one might have the
frustrating situation that one cannot even get a substantial difference in
this kind of theory between the two cases!
Secondly, how are you going to give some interesting "axioms" of this kind
in order to prove that the initial position is a win with black's queen
removed? Of course, if you assume that under certain conditions if you are
a queen ahead, then it is immediate. Can you give an interesting sufficient
condition for winning if you are a queen ahead that you believe in?
Thirdly, let me state a contest. The winner is one who has a proof,
possibly computer-aided, that white has a win in the initial position where
black removes a number of pawns and/or pieces (not the king!), where the
total value of the removed items is smallest. Here pawns = 1, knights =
bishops = 3, rooks = 5, queen = 9. This should lead to some interesting
>On Thu, 19 Mar 1998 22:24:20 +0100, Harvey Friedman wrote:
>| The chess professionals are not turning these conjectures into chess
>| theorems merely by playing (even absolutely) correct moves against
>| particular defenses, unless you mean something unusual by "chess theorems."
Cabillon 1:45AM 3/20/98 writes:
>"chess theorems" = "winning positions".
Earlier, Cabillon wrote:
>These professionals are capable of showing
>how these conjectures can be turned into chess theorems just
>actually playing the proper moves against any defense chosen.
Under your definition of chess theorems, you are asserting that "these
chess professionals are capable of showing how these chess conjectures can
be turned into winning positions by playing the proper moves against any
defense chosen." But this doesn't make any sense to me.
I know that I may be wasting your time with word games here. But there may
be a point in talking this through. I don't quite know how you want to
describe what the chess professionals are doing when they play chess.
>Incidentally, "the original chess position is a draw" is not a "plausible
Not **plausible**? Why not? I do understand why it is not "obvious."
More information about the FOM mailing list
Need help with 2 questions
December 13th 2009, 04:14 PM
Hi guys for some reason I'm having problems with these 2 questions, Any help would be great.
1. For the equation $60x^3-3x^5 = y$ where does the largest slope occur?
Ok here is how I thought I would find the answer for this problem. If I take the second derivative and set it equal to 0 and find my critical points, then one of my points will be the largest
slope. Is this correct?
$f(x) = 60x^3-3x^5$
$f'(x) = 180x^2 -15x^4$
$f''(x) = 360x -60x^3$
$0 = 60x(6-x^2)$
$x = 0, \sqrt{6}, -\sqrt{6}$
so I think my largest slope occurs at
$\sqrt{6}, -\sqrt{6}$
Is this correct?
2. A rectangular storage container with a top is to have a volume of $6 \, m^3$. The length of the base is 1.5 times the width. Material for the base costs $3 per square meter, material for the sides
cost $5 per square meter and the material for the top costs $8 per square meter. Find the dimensions of the container with minimal total cost.
I'm not sure how to do the second one.
I know
but what do I use for height?
Also I know I have to use surface area but can't come up with an equation without the height.
December 13th 2009, 04:41 PM
Ok so I tried this approach but not sure if it right
l = 1.5x
so for volume
$6 = (x)(1.5x)(y)$
Then I solve for y
$y = 4x^2$
$SA = 2(1.5x)(x) +2(1.5x)(y) + 2(x)(y)$
Then sub y in
$SA= 2(1.5x^2) +2(1.5x)(4x^2) +2(x)(4x^2)$
$3x^2 +20x^3$
Then I'm lost after that lol
December 13th 2009, 04:56 PM
Ok so I tried this approach but not sure if it right
l = 1.5x
so for volume
$6 = (x)(1.5x)(y)$
Then I solve for y
$y = 4x^2$
$SA = 2(1.5x)(x) +2(1.5x)(y) + 2(x)(y)$
Then sub y in
$SA= 2(1.5x^2) +2(1.5x)(4x^2) +2(x)(4x^2)$
$3x^2 +20x^3$
Then I'm lost after that lol
should it not be $y = \frac 4{x^2}$?
and remember, you want the cost equation, not the surface area equation to be your objective
after you get that, just find the minimum points of that function
December 13th 2009, 05:10 PM
lol yep it should be
$y =\frac{4}{x^2}$
So the derivative of the SA is the Cost function? If not how do I set up the cost function?
December 13th 2009, 05:12 PM
the cost for each "side" (can mean top or bottom) of the box is (cost of the side)*(area of the side)
think you can find the equation now?
December 13th 2009, 05:19 PM
Hi guys for some reason I'm having problems with these 2 questions, Any help would be great.
1. For the equation $60x^3-3x^5 = y$ where does the largest slope occur?
Ok here is how I thought I would find the answer for this problem. If I take the second derivative and set it equal to 0 and find my critical points, then one of my points will be the largest
slope. Is this correct?
$f(x) = 60x^3-3x^5$
$f'(x) = 180x^2 -15x^4$
$f''(x) = 360x -60x^3$
$0 = 60x(6-x^2)$
$x = 0, \sqrt{6}, -\sqrt{6}$
so I think my largest slope occurs at
$\sqrt{6}, -\sqrt{6}$
Is this correct?
by the way, this was correct. these are the x-values where you have the maximum slope.
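As a quick numerical cross-check of this conclusion, one can scan the derivative on a grid; a Python sketch (the grid range and step are arbitrary):

```python
import math

def fprime(x):
    # derivative of f(x) = 60x^3 - 3x^5
    return 180 * x**2 - 15 * x**4

# Scan a grid of x values and find where the slope is largest.
xs = [i / 1000 for i in range(-4000, 4001)]
best_x = max(xs, key=fprime)

# The largest slope occurs near x = +/- sqrt(6) ~ +/- 2.449,
# where f'(x) = 180*6 - 15*36 = 540.
assert abs(abs(best_x) - math.sqrt(6)) < 1e-2
assert abs(fprime(best_x) - 540) < 1e-3
```

The grid search only confirms the calculus; the exact answer comes from solving f''(x) = 0 as done above.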
December 13th 2009, 05:31 PM
Thanks so the x values of $\sqrt{6}, - \sqrt{6},$ but not the x value of 0 right?
The cost function would be
$C = 4.5x^2 +12x^2 +\frac{60}{x} + \frac{40}{x}$
Thanks for the help
December 13th 2009, 05:43 PM
that's right. x = 0 is a local minimum for the derivative function.
The cost function would be
$C = 4.5x^2 +12x^2 +\frac{60}{x} + \frac{40}{x}$
Thanks for the help
yes, that's correct. you can write it as $C = 16.5x^2 + \frac {100}x$ though
now find the minimum points of that function. you will be able to use that to find the dimensions
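For reference, the minimization described here can be sketched in Python: setting C′(x) = 0 gives 33x = 100/x², i.e. x³ = 100/33 (the variable names follow the thread: x is the width, 1.5x the length, 4/x² the height):

```python
def C(x):
    # total cost: base at $3 + top at $8 on area 1.5x^2, sides at $5
    return 16.5 * x**2 + 100 / x

def Cprime(x):
    return 33 * x - 100 / x**2

# C'(x) = 0  =>  33x = 100/x^2  =>  x^3 = 100/33
x = (100 / 33) ** (1 / 3)
width, length, height = x, 1.5 * x, 4 / x**2

print(round(x, 4))     # ~ 1.4471 meters wide
print(round(C(x), 2))  # minimal total cost in dollars
```

A second-derivative check (C″(x) = 33 + 200/x³ > 0 for x > 0) confirms this critical point is a minimum, and the dimensions multiply back to the required 6 m³.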
December 13th 2009, 05:45 PM
Thanks that was my next step.
MathGroup Archive: July 2012 [00429]
Re: Mathematica as a New Approach to Teaching Maths
• To: mathgroup at smc.vnet.net
• Subject: [mg127512] Re: Mathematica as a New Approach to Teaching Maths
• From: W Craig Carter <ccarter at MIT.EDU>
• Date: Mon, 30 Jul 2012 22:15:37 -0400 (EDT)
• References: <9573433.50612.1343288228908.JavaMail.root@m06> <jv01n2$hds$1@smc.vnet.net> <20120730074544.4B4916847@smc.vnet.net>
I teach a class on the subject of mathematics applied to materials science and engineering. I use mathematica as a tool to emphasize the application of mathematics, visualization, simulation, and programming. http://pruffle.mit.edu/3.016/
I find that it serves as excellent reinforcement to concepts that they have only learned once in their mathematics curriculum. Furthermore, I believe mathematica is an excellent vehicle for doing a broad survey of many maths topics and encourages students to browse, explore, and experiment with "untaught" mathematics material.
I felt strongly that the "M"-like programs will play a fundamental part in the students' future professions. Of the several choices, I picked mathematica because I felt that the mathematica skills were more natural with regard to mathematics. I believe that the learning curve being steeper was a net benefit as the students pick up other languages.
As a final observation, I note that students who have taken my class do use mathematica as a tool in their subsequent classes, and I do hear from graduated students that they appreciated the skills that they have learned.
W Craig Carter
Professor of Materials Science, MIT
On Mon, Jul 30, 2012 at 3:45 AM, Richard Fateman wrote:
> On 7/27/2012 11:43 PM, Ralph Dratman wrote:
>> David,
>> I agree that Mathematica is quite difficult to learn, but I also think
>> a good teacher could make the learning process significantly easier by
>> going carefully over certain key points which are not emphasized in
>> the documentation.
> I think the point that is easy for enthusiasts to miss (and this
> pertains not only to Mathematica but its competitors) is that students,
> by and large, will (mostly correctly) view the introduction of
> Mathematica into their courses as "extra work", "more homework" and
> irrelevant to testing (will there be a computer on exams?).
> In point of fact, the difficulties of Mathematica are orthogonal to the
> material in courses in numerical analysis, calculus, and probably other
> topics. I have, for a number of years, been the "go-to" person for
> instructors teaching calculus at UC Berkeley, when students using a
> computer algebra system get what appear to be wrong answers from such
> systems. While the number of such instances has decreased, they have
> not disappeared, and some are "features".
> It is also important for enthusiasts to realize that many students do
> not want their classes to be "enriched" by the introduction of a
> peculiar computer program which they are forced to learn. Many students
> simply want to get a passing grade so they never have to take another
> math class.
> There are, of course, some set of students who are interested and
> curious and enthusiastic. For example, students who ask, "How do
> you write a computer program that does integrals?" These are perhaps
> the same students who realize that the calculus textbook fails to
> describe an algorithm for doing integrals, and may even understand
> that there are some integrals whose closed form does not exist.
> Publications which address the question of "how does the introduction
> of computer labs (etc) affect the student learning level" rather
> consistently come up with a conclusion that is roughly, "they learn
> about the same."
>> For example, one of the most fascinating features of the language --
>> its ability to mix symbolic and numeric quantities -- can also be a
>> source of great confusion for the neophyte.
> This is confusing only because they have learned another, more
> restricted, language first. Ab initio, why can't
> the mixture of symbolic and numeric calculations work? To a
> mathematical sophisticate totally unaware of programming, it is
> a big disappointment that BASIC (etc) cannot deal with uninitialized
> variable x by computing x+x resulting in 2*x.
>> I would also go very carefully over what one must do to keep the front
>> end and/or kernel from going out of control, and how to recover when
>> that does happen.
> This is of course totally irrelevant to the subject matter at hand, and
> disappointing in the extreme that one must spend any time on this.
>> I have a number of related ideas to help students get the feel of the
>> Mathematica environment before plunging in to accomplish serious work.
> Unfortunately, serious work encounters, fairly often, serious barriers.
>> I would love an opportunity to teach Mathematica to a small group of
>> smart high schoolers or beginning undergraduates.
> Tell us when you figure out how to market such a course.
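The symbolic-plus-numeric point raised in the quoted text — an unbound variable x for which x + x yields 2*x rather than an error — can be illustrated outside Mathematica. A toy Python sketch (the Sym class is a deliberately minimal stand-in, not a real computer algebra system):

```python
class Sym:
    """Minimal stand-in for an uninitialized symbolic variable:
    adding a symbol to itself accumulates a coefficient instead of
    raising an 'undefined variable' error, as in BASIC-style languages."""
    def __init__(self, name, coeff=1):
        self.name, self.coeff = name, coeff

    def __add__(self, other):
        assert self.name == other.name, "toy class: one symbol only"
        return Sym(self.name, self.coeff + other.coeff)

    def __repr__(self):
        return self.name if self.coeff == 1 else f"{self.coeff}*{self.name}"

x = Sym("x")
print(x + x)  # 2*x
```

Full systems such as Mathematica or sympy generalize this to arbitrary expressions, which is what makes mixing symbolic and numeric quantities both powerful and, as the thread notes, confusing for beginners.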
numpy.sort(a, axis=-1, kind='quicksort', order=None)
Parameters:
a : array_like
Array to be sorted.
axis : int or None, optional
Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.
kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, optional
Sorting algorithm. Default is ‘quicksort’.
order : list, optional
When a is a structured array, this argument specifies which fields to compare first, second, and so on. This list does not need to include all of the fields.
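Putting the parameters together, a short usage sketch (run against a current NumPy; the sample arrays are arbitrary):

```python
import numpy as np

a = np.array([[3, 1], [2, 4]])

print(np.sort(a))              # axis=-1 (default): each row sorted
print(np.sort(a, axis=None))   # flattened first: [1 2 3 4]
print(np.sort(a, axis=0))      # each column sorted independently

# 'order' applies to structured arrays: compare 'age' first, then 'name'.
people = np.array([("bob", 30), ("ann", 30), ("cat", 25)],
                  dtype=[("name", "U10"), ("age", int)])
print(np.sort(people, order=["age", "name"])["name"])  # ['cat' 'ann' 'bob']
```

Note that np.sort returns a sorted copy; the in-place counterpart is the ndarray.sort method.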
First x which satisfies x-3>0
1. The problem statement, all variables and given/known data
What's the first x value which satisfies x-3>0?
2. Relevant equations
3. The attempt at a solution
3 + an infinitesimal number would satisfy this, but I have no idea how to write it down in a proper algebraic form.
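One way to see that no such "first" x exists over the reals: for any candidate x > 3, the midpoint (3 + x)/2 is strictly smaller and still satisfies the inequality, so the solution set (3, ∞) has an infimum of 3 but no minimum. A quick Python illustration (the starting candidate is arbitrary):

```python
# For any candidate x > 3, the midpoint (3 + x) / 2 is a strictly
# smaller number that still satisfies x - 3 > 0, so no smallest
# solution exists -- the candidates approach 3 without reaching it.
x = 3.0001
for _ in range(10):
    assert x - 3 > 0       # current candidate satisfies the inequality
    x = (3 + x) / 2        # ...but a smaller candidate always exists
assert 3 < x < 3.0001
print(x)
```

The "3 + an infinitesimal" intuition only works in nonstandard number systems; in the standard reals the correct answer is that the solution set is the open interval x > 3, with no first element.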
2.3.1 Arguments
point-seq is a sequence of point objects.
coord-seq is a sequence of coordinate pairs, which are real numbers. It is an error if coord-seq does not contain an even number of elements.
The drawing functions take keyword arguments specifying drawing options. For information on the drawing options, see 3.2, Using CLIM Drawing Options. If you prefer to create and use point objects,
see 2.5.2, CLIM Point Objects.
draw-point [Function]
Arguments: sheet point &key ink clipping-region transformation line-style line-thickness line-unit
draw-point* [Function]
Arguments: sheet x y &key ink clipping-region transformation line-style line-thickness line-unit
Summary: These functions (structured and spread arguments, respectively) draw a single point on the sheet sheet at the point point (or the position ( x, y )).
The unit and thickness components of the current line style (see 3.2, Using CLIM Drawing Options) affect the drawing of the point by controlling the number of pixels used to render the point on the
display device.
draw-points [Function]
Arguments: sheet point-seq &key ink clipping-region transformation line-style line-thickness line-unit
draw-points* [Function]
Arguments: sheet coord-seq &key ink clipping-region transformation line-style line-thickness line-unit
Summary: These functions (structured and spread arguments, respectively) draw a set of points on the sheet sheet .
For convenience and efficiency, these functions exist as equivalents to
(map nil #'(lambda (point) (draw-point sheet point)) point-seq)
(do ((i 0 (+ i 2)))
    ((= i (length coord-seq)))
  (draw-point* sheet (elt coord-seq i) (elt coord-seq (+ i 1))))
draw-line [Function]
Arguments: sheet point1 point2 &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
draw-line* [Function]
Arguments: sheet x1 y1 x2 y2 &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
Summary: These functions (structured and spread arguments, respectively) draw a line segment on the sheet sheet from the point point1 to point2 (or from the position ( x1 , y1 ) to ( x2 , y2 )).
The current line style (see 3.2, Using CLIM Drawing Options) affects the drawing of the line in the obvious way, except that the joint shape has no effect. Dashed lines start dashing at point1 .
draw-lines [Function]
Arguments: sheet point-seq &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
draw-lines* [Function]
Arguments: sheet coord-seq &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
Summary: These functions (structured and spread arguments, respectively) draw a set of disconnected line segments. These functions are equivalent to
(do ((i 0 (+ i 2)))
    ((= i (length point-seq)))
  (draw-line sheet (elt point-seq i) (elt point-seq (1+ i))))
(do ((i 0 (+ i 4)))
    ((= i (length coord-seq)))
  (draw-line* sheet
              (elt coord-seq i) (elt coord-seq (+ i 1))
              (elt coord-seq (+ i 2))
              (elt coord-seq (+ i 3))))
draw-polygon [Function]
Arguments: sheet point-seq &key (filled t ) (closed t ) ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape line-cap-shape
draw-polygon* [Function]
Arguments: sheet coord-seq &key (filled t ) (closed t ) ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape line-cap-shape
Summary: Draws a polygon or polyline on the sheet sheet . When filled is nil , this draws a set of connected lines; otherwise, it draws a filled polygon. If closed is t (the default) and filled is
nil , it ensures that a segment is drawn that connects the ending point of the last segment to the starting point of the first segment. The current line style (see 3.3, CLIM Line Styles for details)
affects the drawing of unfilled polygons in the obvious way. The cap shape affects only the "open" vertices in the case when closed is nil . Dashed lines start dashing at the starting point of the
first segment, and may or may not continue dashing across vertices, depending on the window system.
If filled is t , a closed polygon is drawn and filled in. In this case, closed is assumed to be t as well.
draw-rectangle [Function]
Arguments: sheet point1 point2 &key (filled t ) ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape
draw-rectangle* [Function]
Arguments: sheet x1 y1 x2 y2 &key (filled t ) ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape
Summary: Draws either a filled or unfilled rectangle on the sheet sheet that has its sides aligned with the coordinate axes of the native coordinate system. One corner of the rectangle is at the
position ( x1 , y1 ) or point1 and the opposite corner is at ( x2 , y2 ) or point2. The arguments x1 , y1 , x2 , and y2 are real numbers that are canonicalized in the same way as for
make-bounding-rectangle . filled is as for draw-polygon* .
The current line style (see 3.2, Using CLIM Drawing Options) affects the drawing of unfilled rectangles in the obvious way, except that the cap shape has no effect.
draw-rectangles [Function]
Arguments: sheet points &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape
draw-rectangles* [Function]
Arguments: sheet position-seq &key ink clipping-region transformation line-style line-thickness line-unit line-dashes line-joint-shape
Summary: These functions (structured and spread arguments, respectively) draw a set of rectangles on the sheet sheet . points is a sequence of point objects; position-seq is a sequence of coordinate
pairs. It is an error if position-seq does not contain an even number of elements.
Ignoring the drawing options, these functions are equivalent to:
(do ((i 0 (+ i 2)))
    ((= i (length points)))
  (draw-rectangle sheet (elt points i) (elt points (1+ i))))
(do ((i 0 (+ i 4)))
    ((= i (length position-seq)))
  (draw-rectangle* sheet
                   (elt position-seq i)
                   (elt position-seq (+ i 1))
                   (elt position-seq (+ i 2))
                   (elt position-seq (+ i 3))))
draw-ellipse [Function]
Arguments: sheet center-pt radius-1-dx radius-1-dy radius-2-dx radius-2-dy &key (filled t ) start-angle end-angle ink clipping-region transformation line-style line-thickness line-unit line-dashes
draw-ellipse* [Function]
Arguments: sheet center-x center-y radius-1-dx radius-1-dy radius-2-dx radius-2-dy &key (filled t ) start-angle end-angle ink clipping-region transformation line-style line-thickness line-unit
line-dashes line-cap-shape
Summary: These functions (structured and spread arguments, respectively) draw an ellipse (when filled is t , the default) or an elliptical arc (when filled is nil ) on the sheet sheet . The center of
the ellipse is the point center-pt (or the position ( center-x , center-y )).
Two vectors, ( radius-1-dx , radius-1-dy ) and ( radius-2-dx , radius-2-dy ) specify the bounding parallelogram of the ellipse as explained in 2.5, General Geometric Objects in CLIM. All of the radii
are real numbers. If the two vectors are collinear, the ellipse is not well-defined and the ellipse-not-well-defined error will be signaled. The special case of an ellipse with its major axes aligned
with the coordinate axes can be obtained by setting both radius-1-dy and radius-2-dx to 0.
start-angle and end-angle are real numbers that specify an arc rather than a complete ellipse. Angles are measured with respect to the positive x axis. The elliptical arc runs positively
(counter-clockwise) from start-angle to end-angle . The default for start-angle is 0; the default for end-angle is 2π.
In the case of a "filled arc" (that is, when filled is t and start-angle or end-angle are supplied and are not 0 and 2π), the figure drawn is the "pie slice" area swept out by a line from the center
of the ellipse to a point on the boundary as the boundary point moves from start-angle to end-angle .
When drawing unfilled ellipses, the current line style (see 3.2, Using CLIM Drawing Options) affects the drawing in the obvious way, except that the joint shape has no effect. Dashed elliptical arcs
start dashing at start-angle .
draw-circle [Function]
Arguments: sheet center-pt radius &key (filled t ) start-angle end-angle ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
draw-circle* [Function]
Arguments: sheet center-x center-y radius &key (filled t ) start-angle end-angle ink clipping-region transformation line-style line-thickness line-unit line-dashes line-cap-shape
Summary: These functions (structured and spread arguments, respectively) draw a circle (when filled is t , the default) or a circular arc (when filled is nil ) on the sheet sheet . The center of the
circle is center-pt or ( center-x , center-y ) and the radius is radius . These are just special cases of draw-ellipse and draw-ellipse* . filled is as for draw-ellipse* .
start-angle and end-angle allow the specification of an arc rather than a complete circle in the same manner as that of the ellipse functions.
The "filled arc" behavior is the same as that of an ellipse.
draw-text [Function]
Arguments: sheet string-or-char point &key text-style (start 0 ) end (align-x :left ) (align-y :baseline ) toward-point transform-glyphs ink clipping-region transformation text-style text-family
text-face text-size
draw-text* [Function]
Arguments: sheet string-or-char x y &key text-style (start 0 ) end (align-x :left ) (align-y :baseline ) toward-x toward-y transform-glyphs ink clipping-region transformation text-style text-family
text-face text-size
Summary: The text specified by string-or-char is drawn on the sheet sheet starting at the position specified by the point point (or the position ( x, y )). The exact definition of "starting at"
depends on align-x and align-y . align-x is one of :left , :center , or :right . align-y is one of :baseline , :top , :center , or :bottom . align-x defaults to :left and align-y defaults to
:baseline ; with these defaults, the first glyph is drawn with its left edge and its baseline at point .
text-style defaults to nil , meaning that the text will be drawn using the current text style of the sheet's medium.
start and end specify the start and end of the string, in the case where string-or-char is a string. If start is supplied, it must be an integer that is less than the length of the string. If end is
supplied, it must be an integer that is less than the length of the string, but greater than or equal to start .
Normally, glyphs are drawn from left to right no matter what transformation is in effect. toward-x or toward-y (derived from toward-point in the case of draw-text ) can be used to change the
direction from one glyph to the next one. For example, if toward-x is less than the x position of point , then the glyphs will be drawn from right to left. If toward-y is greater than the y position
of point , then the glyphs' baselines will be positioned one above another. More precisely, the reference point in each glyph lies on a line from point to toward-point , and the spacing of each glyph
is determined by packing rectangles along that line, where each rectangle is "char-width" wide and "char-height" high.
transform-glyphs is not supported in this version of CLIM.
Common Lisp Interface Manager 2.0 User's Guide - 20 Sep 2011 | {"url":"http://www.lispworks.com/documentation/lw61/CLIM/html/climuser-39.htm","timestamp":"2014-04-20T13:57:23Z","content_type":null,"content_length":"26186","record_id":"<urn:uuid:8bb0f1a2-cc04-4f24-bdee-b601099c11bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
Important Formulae & Rules - Part 2
Median formula. This is also called the length-of-the-median formula. Let AM be a median in triangle ABC, so M is the midpoint of BC, and write a = |BC|, b = |CA|, c = |AB|. Then AM² = (2b² + 2c² − a²) / 4.
Minimal Polynomial. We call a polynomial p(x) with integer coefficients irreducible if p(x) cannot be written as a product of two polynomials with integer coefficients neither of which is a constant. Suppose that the number α is a root of an irreducible polynomial p(x) with integer coefficients; then p(x) is called a minimal polynomial of α.
Orthocenter of a Triangle. The point of intersection of the altitudes.
Periodic Function. A function f(x) is periodic with period T > 0 if T is the smallest positive real number for which f(x + T) = f(x) for all x.
Pigeonhole Principle. If n objects are distributed among k < n boxes, some box contains at least two objects.
Rearrangement Inequality. Let a_1 ≤ a_2 ≤ … ≤ a_n and b_1 ≤ b_2 ≤ … ≤ b_n be real numbers. For any permutation (c_1, …, c_n) of (b_1, …, b_n), a_1 b_n + a_2 b_(n−1) + … + a_n b_1 ≤ a_1 c_1 + a_2 c_2 + … + a_n c_n ≤ a_1 b_1 + a_2 b_2 + … + a_n b_n, with equality throughout whenever all the a_i or all the b_i are equal.
Schur’s Inequality p<>. Let x, y, z be non-negative real numbers. Then for any r > 0,
Equality holds if and only if x = y = z or if two of x, y, z are equal and the third is equal to 0. The proof of the inequality is rather simple. Because the inequality is symmetric in the three
variables, we may assume without loss of generality that
and every term on the left-hand side is clearly nonnegative. The first term is positive if x > y, so equality requires x = y, as well as
Sector. The region enclosed by a circle and two radii of the circle.
Stewart's Theorem. In a triangle ABC with cevian AD, write a = |BC|, b = |CA|, c = |AB|, m = |BD|, n = |DC|, and d = |AD|. Then b²m + c²n = a(d² + mn).
This formula can be used to express the lengths of the altitudes and angle bisectors of a triangle in terms of its side lengths.
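For example, applying Stewart's theorem to a median (m = n = a/2, so D is the midpoint of BC) recovers the length-of-the-median formula:

```latex
b^2\frac{a}{2} + c^2\frac{a}{2} = a\left(d^2 + \frac{a^2}{4}\right)
\;\Longrightarrow\;
d^2 = \frac{2b^2 + 2c^2 - a^2}{4}.
```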
How do you know a triangle is a Right triangle if only the side measurements are given? Would you use the Pythagorean theorem?
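One way to check is indeed the converse of the Pythagorean theorem: for sides a ≤ b ≤ c, the triangle is right-angled exactly when a² + b² = c². A small sketch (the helper name and tolerance are illustrative only):

```python
import math

def is_right_triangle(a, b, c, rel_tol=1e-9):
    """True if the three side lengths form a right triangle."""
    a, b, c = sorted((a, b, c))          # make c the longest side (hypotenuse candidate)
    return math.isclose(a*a + b*b, c*c, rel_tol=rel_tol)

is_right_triangle(3, 4, 5)   # -> True
is_right_triangle(2, 3, 4)   # -> False
```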
the definition of subset
1. a set that is a part of a larger set.
2. Mathematics. a set consisting of elements of a given set that can be the same as the given set or smaller.
World English Dictionary
subset (ˈsʌbˌsɛt)
1. maths
a. a set the members of which are all members of some given class: A is a subset of B is usually written A⊆B
b. proper subset A⊂B one that is strictly contained within a larger class and excludes some of its members
2. a set within a larger set
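For illustration, the mathematical sense maps directly onto Python's built-in set operators (a sketch, not part of the dictionary entry):

```python
a = {1, 2}
b = {1, 2, 3}

a <= b   # True: a is a subset of b           (A ⊆ B)
a < b    # True: a is a proper subset of b    (A ⊂ B)
b <= b   # True: every set is a subset of itself
b < b    # False: a proper subset must exclude some member
```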
Collins English Dictionary - Complete & Unabridged 10th Edition
2009 © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins
Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009
Word Origin & History
"subordinate set," 1902, from
American Heritage
Science Dictionary
subset (sŭb'sět')
A set whose members are all contained in another set. The set of positive integers, for example, is a subset of the set of integers.
The American Heritage® Science Dictionary
Copyright © 2002. Published by Houghton Mifflin. All rights reserved.
Example sentences
Each time a subset migrated onward, genetic diversity narrowed.
Living four-legged creatures rest and sleep in various postures, but only birds
and a subset of mammals rest on folded limbs.
Within each of these broad questions is a subset of secondary questions waiting
to be explored.
As it happens, only a subset of prisoners currently locked away for long
periods of isolation would be considered truly dangerous.
It's a broad appeal to a range of xenophobic fears of which race per se is a mere subset.
Even within that subset there weren't a lot that explained how and why you might want to use some of these things.
In fact, economic behavior is only a subset of human behavior.
The odds were against our even detecting the plagiarism, since each committee member read an alphabetical subset of applications.
However, laying that groundwork will take more than text mining, and certainly more than the subset of textual materials on offer. | {"url":"http://dictionary.reference.com/browse/subset","timestamp":"2014-04-21T02:58:43Z","content_type":null,"content_length":"99030","record_id":"<urn:uuid:5f13532e-0e76-4890-86fb-49482d290d1c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
explain quadratic equations
Author Message
b0wn Posted: Friday 05th of Jan 07:40
Hello friends. I am badly in need of some assistance. My explain quadratic equations homework has started to get on my nerves. The classes proceed so quickly, that I never get a chance
to clarify my confusion. Is there any resource that can help me cope with this homework mania?
ameich Posted: Friday 05th of Jan 16:04
I have a solution for you and it might just prove to be a better one than buying a new textbook. Try Algebra Buster, it covers a pretty comprehensive list of mathematical topics and is
highly recommended. It’ll help you solve various types of problems and it’ll also address all your enquiries as to how it came up with a particular solution. I tried it when I was having
difficulty solving questions based on explain quadratic equations and I really enjoyed using it.
cufBlui Posted: Friday 05th of Jan 21:34
I checked out each one of them myself and that was when I came across Algebra Buster. I found it particularly apt for hypotenuse-leg similarity, graphing parabolas and side-side-side
similarity. It was actually also effortless to operate this. Once you feed in the problem, the program carries you all the way to the answer elucidating every step on its way. That’s
what makes it splendid. By the time you arrive at the answer, you already know how to work out the problems. I enjoyed learning to solve the problems with Algebra 1, Basic Math and Basic
Math in algebra. I am also positive that you too will appreciate this program just as I did. Wouldn’t you want to check this out?
Damnon Posted: Saturday 06th of Jan 17:07
Wow. I did not realize that there could be a solution for me. May be I should take this on. I am already comforted to know that the answer to my problems is at hand. I am willing to try
this out. Can you tell me where I can buy this program?
TihBoasten Posted: Monday 08th of Jan 07:53
A truly great piece of algebra software is Algebra Buster. Even I faced similar difficulties while solving ratios, equivalent fractions and like denominators. Just by typing in a problem from the workbook and clicking on Solve – a step-by-step solution to my math homework would be ready. I have used it through several algebra classes - Pre Algebra, Basic Math and Intermediate algebra. I highly recommend the program.
Bet Posted: Tuesday 09th of Jan 07:30
You can order it online through this link – http://www.algebra-online.com/order-algebra-online.htm. I personally think it’s quite good for a math software and the fact that they even
offer an unconditional money back guarantee makes it a deal, you can’t miss.
Massive row HELP - Determining arithmeti
Bump — I have some time for this now but am still struggling. Anyone?
How can i get the arithmetic mean from all the positive numbers in each row.
How would you do it on paper? If I gave you a list of numbers and asked you to tell me the average value of all the positive numbers in the list.
I'd just get all the positive numbers out of each row and then get them in the correct order, but I can't quite do that here since I can't pick each one out. Or at least I think so.
Break it down further. How are you getting all the positive numbers? It sounds silly but this is programming. All that stuff about syntax is just learning the notation. Programming is the art and
craft of thinking about problems in a way that the solution lends itself to a programmatical implementation. This is a very simple problem and just telling you what to do won't help you at all.
Why do you want to put them into an order. I didn't ask you to put them in order. I asked you to give me the average value of them.
Here are some numbers:
2 -4 5 -6 -3 3
What's the average value of the positive numbers? Watch yourself answering the question. How did you do it?
Obviously, but how do I get the sum of all the positive numbers in a row and then divide by the amount of numbers there were?
So you LOOKED at each number.
You DECIDED if the number was positive or not.
IF it was positive, you ADDED it to the total.
When you had read all the numbers in the row, you then divided the total by the number of positive values you found.
Now turn each step into code.
Oh, I think I got it.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/84667/","timestamp":"2014-04-16T22:00:12Z","content_type":null,"content_length":"13883","record_id":"<urn:uuid:09d7f96a-f67f-41a3-b566-f9f96ae3ed79>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using imaginary number " i ". Dev C++ vs. VS2008!
August 13th, 2013, 03:47 PM #1
Join Date
Jun 2012
Hi There!
I created an algorithm that uses imaginary numbers. It is fine on Dev C++, and now I am trying to port to VS2008. I figured out most things, including how to declare complex numbers. However,
I've been having an incredibly hard time trying to figure out how to use the " i " number! For example:
In Dev C++:
z_cmplx = cexp(I * f1/Fs * 2 * PI);
Where "I" is a macro from the library!
In VS2008:
z_cmplx = std::exp(I * f1/Fs * 2 * PI);
Although I DID include <complex> library just like I did before, the compiler gives me: error C2065: 'I' : undeclared identifier.
This is a pretty time-sensitive matter; if anyone could help I'd REALLY appreciate it!
Thank you in advance.
Best wishes,
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
I'm not an expert on using the complex class, but I believe that for complex numbers, exp works on a complex variable where a complex variable is defined to have a real and imaginary part.
So assuming fl/Fs and PI are real then you would define a complex number
complex<double> com1(0, fl/Fs * 2 * PI); //This gives an imaginary number ie 0 + i*fl/Fs * 2 * PI
complex<double> com2 = exp(com1);
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
Where "I" is a macro from the library!
You have to be joking.
Do you know the number of naming conflicts a 1 letter name would have in a large program? Or how about searching your source files for the usage of the I macro?
Macro names should be named in a descriptive manner, so that conflicts and confusion doesn't occur.
Paul McKenzie
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
Well the problem comes from an even more basic level...
I am not finding a way to set values to real and imaginary parts of the number separately.
I found that the function real(complex_number) returns the real part of the number, and so does the function imag(complex_number) returns the imaginary part. However I can't set them.
In Dev C++ I do that by using:
__real__ complex_number = desired_real_value
__imag__ complex_number = desired_imag_value
However in VC++ 2008 I've been trying for hours to find a way and I still havent found! MSDN documentation only shows how to RETURN the values, not to set them. Been having a hard time here...
Anyone knows it??
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
I found no problem setting the values when declaring the complex numbers, btw. But after one has been declared, I didn't find a way to change the imaginary part of it. In terms of computational efficiency I don't think that it's a good idea to declare it all the time to be able to set values...
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
I found no problem setting the values when declaring the complex numbers, btw. But after one has been declared, I didn't find a way to change the imaginary part of it. In terms of computational efficiency I don't think that it's a good idea to declare it all the time to be able to set values...
Do you mean setting real and imaginary parts separately like this?
#include <iostream>
#include <complex>
using namespace std;
int main()
{
    const double fl = 2, Fs = 3;
    const double PI = 3.1416;
    typedef complex<double> Compnum;
    Compnum com1(0, fl/Fs * 2 * PI); //This gives an imaginary number ie 0 + i*fl/Fs * 2 * PI
    Compnum com2 = exp(com1);
    Compnum com3 (3, 4);
    cout << com3 << endl << com2 << endl;
    com3 = Compnum(real(com3), 6);
    com2 = Compnum(2, imag(com2));
    cout << com3 << endl << com2 << endl;
    return 0;
}
This produces the output
where the imaginary part of com3 has been set directly keeping the real part and the real part of com2 has been set directly keeping the imaginary part.
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
>> Where "I" is a macro from the library!
"I" is part of C99, which VC2008 does not support.
Perhaps you could create a global complex<double> I.
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
>> Where "I" is a macro from the library!
"I" is part of C99, which VC2008 does not support.
Perhaps you could create a global complex<double> I.
I hope that "I" is a keyword and not a macro, as the OP suggested.
Paul McKenzie
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
Yikes. I wonder if the language authors were tired that day?
Paul McKenzie
Re: Using imaginary number " i ". Dev C++ vs. VS2008!
Exactly! It worked perfectly now. Thanks a lot!!!
Commerce, GA ACT Tutor
Find a Commerce, GA ACT Tutor
...If my students do not understand the way I am teaching I will adjust my teachings to be more suitable for my students. I am flexible with my schedule and I am always punctual. I am a strong
believer that practice makes perfect and that homework is necessary for students to achieve greatness.
14 Subjects: including ACT Math, chemistry, geometry, biology
...I have had great results with GED, ACT, and SAT students, and I have experience grading the Georgia State Writing Test for public high school students. I am very patient and soft spoken. I
help students understand their material by whatever means works for them.I learned to read through phonics.
33 Subjects: including ACT Math, reading, English, writing
...I have degrees in Mathematics and Mathematics education and several-year math teaching experiences at colleges. I can help high school and college students who need help with algebra,
geometry, pre-calculus and calculus. I can make mathematics easier than you think and help you make it sense.
8 Subjects: including ACT Math, calculus, geometry, algebra 1
...I have a Bachelor degree in Economics (Business Administration) and am specialized in Accounting and foreign languages, especially in grammar. I speak fluent Spanish, too. However, most pupils
have problems in maths, so this is what I tutored most.
29 Subjects: including ACT Math, reading, Spanish, English
...Played second base and shortstop in college. Helped set the NCAA season record for double plays by a team. Career .342 batting average in 4 years of college. 12 years of coaching baseball
teams and 4 years working as a tutor & head coach for the Las Vegas Baseball Academy.
29 Subjects: including ACT Math, reading, GED, English
Diagonalisation of Matrices..
March 22nd 2013, 11:53 PM
Diagonalisation of Matrices..
So I understand that any diagonalizable matrix M can be rewritten as M = UDU^-1, where U is the matrix whose columns are the eigenvectors and D is the diagonal matrix of the eigenvalues.
Is there another way to think about this idea through transformations?
1. How does the matrix U relate to the matrix M in terms of transformations? Are they related in any way?
2. The matrix D is clearly the enlargement with the scale factors of the eigenvalues in the x & y directions? So is it possible to think of the matrix M as a series of three transformations, and can
someone give me an example where all this makes sense?
Fundamentally, i want to understand why a matrix can be broken up in this way?
Also, other than the idea of powers, is there another application of this result?
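A quick numeric check makes the "series of three transformations" picture concrete: U^-1 changes coordinates into the eigenbasis, D scales along each eigen-direction, and U changes back. The 2x2 matrix and its eigenvectors below are a hand-worked example of my own choosing, not from the thread (plain Python, no libraries):

```python
# Verify M = U D U^-1 and the powers application M^n = U D^n U^-1
# for a hand-worked 2x2 example.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[3, 1],
     [0, 2]]            # eigenvalues 3 and 2
U = [[1, -1],
     [0,  1]]           # columns are the eigenvectors [1,0] and [-1,1]
Uinv = [[1, 1],
        [0, 1]]         # inverse of U (det U = 1)
D = [[3, 0],
     [0, 2]]            # eigenvalues on the diagonal

# 1) M really factors as U D U^-1 (change basis, scale, change back):
assert matmul(matmul(U, D), Uinv) == M

# 2) Powers become cheap: M^5 = U D^5 U^-1 with D^5 = diag(3^5, 2^5).
M5 = M
for _ in range(4):
    M5 = matmul(M5, M)
D5 = [[3**5, 0], [0, 2**5]]
assert matmul(matmul(U, D5), Uinv) == M5
print(M5)               # [[243, 211], [0, 32]]
```

Besides powers, the same factorization underlies matrix exponentials (e.g. solving linear ODE systems) and principal component analysis of covariance matrices.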
March 22nd 2013, 11:59 PM
Re: Diagonalisation of Matrices..
Hey rodders.
You want to look at rotation groups and algebras that deal with this specific AXA^(-1)
March 23rd 2013, 12:54 PM
Re: Diagonalisation of Matrices..
Thanks! More reading required then! | {"url":"http://mathhelpforum.com/advanced-algebra/215354-diagonalisation-matrices-print.html","timestamp":"2014-04-24T16:54:56Z","content_type":null,"content_length":"4601","record_id":"<urn:uuid:611a9c2e-31ca-456d-88d9-348bc7ca8194>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monte-Carlo integration
Problem 179. Monte-Carlo integration
Write a function that estimates a d-dimensional integral to at least 1% relative precision.
• d: positive integer. The dimension of the integral.
• fun: function handle. The function accepts a row-vector of length d as an argument and returns a real scalar as a result.
• I: is the integral over fun from 0 to 1 in each direction.
I = ∫_0^1 dx_1 ∫_0^1 dx_2 ... ∫_0^1 dx_d fun([x_1, x_2, ..., x_d])
fun = @(x) x(1)*x(2)
d = 2
The result should be 0.25. An output I=0.2501 would be acceptable, because the relative deviation would be abs(0.25-0.2501)/0.25 which is smaller than 1%.
The functions in the test suite are all positive and generally 'well behaved', i.e. not fluctuating too much. Some of the tests have a relatively large d.
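The Cody problem expects MATLAB, but the estimator itself is language-agnostic: average the integrand at uniform random points in the unit hypercube, which converges at rate O(1/sqrt(n)) regardless of d. A sketch in Python (the function name `mc_integrate` and sample count `n` are my own choices, not part of the problem):

```python
import random

def mc_integrate(fun, d, n=200_000, seed=1):
    """Crude Monte-Carlo estimate of the integral of `fun` over the
    unit hypercube [0,1]^d: the sample mean of fun at n uniform points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        total += fun(x)
    return total / n

# The worked example from the problem: fun = @(x) x(1)*x(2), d = 2, exact 0.25.
I = mc_integrate(lambda x: x[0] * x[1], d=2)
print(I)
assert abs(I - 0.25) / 0.25 < 0.01   # the 1% relative-precision criterion
```

To *guarantee* the 1% tolerance (as the Cody test suite requires), one would keep sampling until the estimated standard error of the sample mean, std(samples)/sqrt(n), drops below 1% of the running estimate, rather than fixing n in advance.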
Problem Comments
4 Comments
Show 1 older comment
I'm confused by the 3rd test case. Can the integral inside an n-dimensional hypercube really be greater than 1?
In my comment, I mean an n-dimensional UNIT hypercube, which is what you integration limits impose.
on 1 Feb 2012
Of course. It depends on the integrand. Even in 1d, if the integrand is e.g. 10x, the result will be 5.
I was confused. Thanks for clarifying.
Solution Comments
2 Comments
on 30 Jan 2012
just found this "cheat". don't hate! I've reported it to TMW
on 30 Jan 2012
I adjusted the test suite, thanks for the hint... | {"url":"http://www.mathworks.com/matlabcentral/cody/problems/179-monte-carlo-integration","timestamp":"2014-04-17T10:03:16Z","content_type":null,"content_length":"33820","record_id":"<urn:uuid:018dd98d-61f9-497e-997b-ff4aec27dfe8>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shrink numbers within a range preserving distance ratio
April 16th 2010, 02:42 AM #1
Apr 2010
Shrink numbers within a range preserving distance ratio
I am not sure whether this problem belongs to Algebra or Set Theory; hopefully it belongs here... I am not a mathematician, so it's hard for me to come to a concrete category. I am trying to
solve a problem at hand, and I need to get this necessary step done in order to find a complete solution.
Problem Description:
Suppose I have some points spread (unequally) on a line within some range. I want to shrink the points within a smaller range, so that the old distance ratio between points remain preserved.
I have points a,b,c,d,e,f,g spread between Range X & Y
X = 0.25 Y = 0.90
a = 0.25, b = 0.32, c = 0.45, d = 0.50, e = 0.60, f = 0.80, g = 0.90
As we can see above, the distances between the spread points are unequal. Now, I want to shrink these points to a smaller range X' & Y',
X' = 0.35 Y' = 0.65
How would I calculate the values of a',b',c',d',e',f',g', such that the distance ratio between all these points remains the same for the smaller range?
a' = ?, b' = ?, c' = ?, d' = ?, e' = ?, f' = ?, g' = ?
I have been trying to come up with a formula that would help me solve this problem, but am unable to formulate one.
I hope someone here can help me.
A simple way to do that is to regard the points as fractions of your original interval.
Call $L=Y-X$ and $L'=Y'-X'$, and let the 'ratio to the interval' be $F_i = \frac{a_i-X}{L}$; your new coordinates will then be $a_i' = X' + F_iL'$
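In code, this formula is a one-liner per point. A Python sketch applied to the numbers from the question (the function name `rescale` is mine):

```python
def rescale(points, old_lo, old_hi, new_lo, new_hi):
    """Affinely map each point from [old_lo, old_hi] to [new_lo, new_hi].
    An affine map preserves the ratios of distances between points."""
    L, Lp = old_hi - old_lo, new_hi - new_lo
    return [new_lo + (p - old_lo) / L * Lp for p in points]

pts = [0.25, 0.32, 0.45, 0.50, 0.60, 0.80, 0.90]
new = rescale(pts, 0.25, 0.90, 0.35, 0.65)
print([round(p, 4) for p in new])
# [0.35, 0.3823, 0.4423, 0.4654, 0.5115, 0.6038, 0.65]
```

The endpoints map to the new endpoints, and for any three points the ratio of their separations is unchanged, which is exactly the requirement in the question.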
April 16th 2010, 02:49 AM #2
Junior Member
Apr 2010
April 16th 2010, 03:28 AM #3
Apr 2010 | {"url":"http://mathhelpforum.com/algebra/139477-shrink-numbers-within-range-preserving-distance-ratio.html","timestamp":"2014-04-21T15:20:28Z","content_type":null,"content_length":"36977","record_id":"<urn:uuid:d1a4636d-8788-492d-8926-2c910d1422b1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cooper City, FL Prealgebra Tutor
Find a Cooper City, FL Prealgebra Tutor
...Mathematical logic is the study of logic applied to different mathematical subjects often divided into the fields of set theory, model theory, recursion theory, and proof theory. My
fascination for films is an understatement. I have taken several University courses on films in California.
48 Subjects: including prealgebra, reading, statistics, chemistry
...I also show them examples of how to use linear equations to solve real-life problems. Most times they are amazed. I am a certified teacher with Broward County Public Schools and have been
teaching Algebra I for 9 years.
6 Subjects: including prealgebra, French, geometry, algebra 1
...I have worked directly with children for 8 years. I have experience tutoring students in math, English, and science, as well as social skills, for 2 going on 3 years. All math from calculus and
lower would be ideal areas for me to assist students with.
13 Subjects: including prealgebra, geometry, algebra 1, algebra 2
I have been tutoring in Boca Raton for the last 10 years and references would be available on request. I basically tutor Math, mostly junior high and high school subjects and also tutor college
prep, both ACT and SAT. I have also tutored SSAT.
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...It was one of my duties to help these students pass the GED. I worked at a technical institute for adult students and assisted some of adults with the General Science, Arithmetic Reasoning,
Word Knowledge, Paragraph Comprehension, and Mathematics Knowledge sections. They were skilled in the technical parts of the test but needed help with the general sections.
55 Subjects: including prealgebra, reading, Spanish, English
Related Cooper City, FL Tutors
Cooper City, FL Accounting Tutors
Cooper City, FL ACT Tutors
Cooper City, FL Algebra Tutors
Cooper City, FL Algebra 2 Tutors
Cooper City, FL Calculus Tutors
Cooper City, FL Geometry Tutors
Cooper City, FL Math Tutors
Cooper City, FL Prealgebra Tutors
Cooper City, FL Precalculus Tutors
Cooper City, FL SAT Tutors
Cooper City, FL SAT Math Tutors
Cooper City, FL Science Tutors
Cooper City, FL Statistics Tutors
Cooper City, FL Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Dania prealgebra Tutors
Dania Beach, FL prealgebra Tutors
Davie, FL prealgebra Tutors
Fort Lauderdale prealgebra Tutors
Hollywood, FL prealgebra Tutors
Lauderdale Lakes, FL prealgebra Tutors
Lauderhill, FL prealgebra Tutors
Miramar, FL prealgebra Tutors
Oakland Park, FL prealgebra Tutors
Pembroke Park, FL prealgebra Tutors
Pembroke Pines prealgebra Tutors
Plantation, FL prealgebra Tutors
Southwest Ranches, FL prealgebra Tutors
Sunrise, FL prealgebra Tutors
Weston, FL prealgebra Tutors | {"url":"http://www.purplemath.com/Cooper_City_FL_prealgebra_tutors.php","timestamp":"2014-04-21T14:58:00Z","content_type":null,"content_length":"24356","record_id":"<urn:uuid:9f75ebe1-b410-4724-8a4d-60323b21afca>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
11.3 Physical Optimization
Next: An Improved Method Up: 11 Load Balancing and Previous: Applications and Extensions
There were several reasons for this, one of which was, of course, natural curiosity. Another was the importance of load balancing and data decomposition which is, as discussed previously in this
chapter, ``just'' an optimization problem. Again, we already mentioned in Section 6.1 our interest in neural networks as a naturally parallel approach to artificial intelligence. Section 9.9 and
Section 11.1 have shown how neural networks can be used in a range of optimization problems. Load balancing has the important (optimization) characteristic of NP completeness, which implies that it
would take an exponential time to solve completely. Thus, we studied the travelling salesman problem (TSP) which is well known to be NP-complete and formally equivalent to other problems with this
property. One important contribution of CSimic:91a]
Simic derived the relationship between the neural network [Hopfield:86a] and elastic net [Durbin:87a;89a], [Rose:90f;91a;93a], [Yuille:90a] approaches to the TSP. This work has been extensively
reviewed [Fox:91j;92c;92h;92i] and we will not go into the details here. A key concept is that of physical optimization which implies the use of a physics approach of minimizing the energy, that is,
finding the ground state of a complex system set up as a physical analogy to the optimization problem. This idea is illustrated clearly by the discussion in Section 11.1.3 and Section 11.2. One can
understand some reasons why a physics analogy could be useful from two possible plots of the objective function to be minimized, against the possible configurations, that is, against the values of
parameters to be determined. Physical systems tend to look like Figure 11.1(a), where correlated (i.e., local) minima are ``near'' global minima. We usually do not get the very irregular landscape
shown in Figure 11.1(b). In fact, we do find the latter case with the so-called random field Ising model, and here conventional physics methods perform poorly [Marinari:92a], [Guagnelli:92a]. Ken
Rose showed how these ideas could be generalized to a wide class of optimization problems as a concept called deterministic annealing [Rose:90f], [Stolorz:92a]. Annealing is illustrated in Figure
11.23 (Color Plate). One uses temperature to smooth out the objective function (energy function) so that at high temperature one can find the (smeared) global minimum without getting trapped in
spurious local minima. Temperature is decreased skillfully, with the search at each temperature initialized by the minima found at the previous temperature. Annealing can be implemented either stochastically [Kirkpatrick:83a], as in Sections 11.1 and 11.3, or with a deterministic iteration. Neural and elastic
networks can be viewed as examples of deterministic annealing. Rose generalized these ideas to clustering [Rose:90a;90c;91a;93a];
vector quantization used in coding [Miller:92b], [Rose:92a]; tracking [Rose:89b;90b]; and electronic packing [Rose:92b]. Deterministic annealing has also been used for robot path planning with many
degrees of freedom [Fox:90k], [Gandhi:90b] (see also Figure 11.22 (Color Plate)), character recognition [Hinton:92a], scheduling problems [Gislen:89a;91a], [Hertz:92a], [Johnston:92a], and quadratic
assignment [Simic:91a].
Figure 11.23: Annealing tracks global minima by initializing search at one temperature by minima found at other temperatures.
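To make the annealing idea concrete, here is a minimal *stochastic* annealing loop in Python on a toy one-dimensional energy with competing minima. The double-well energy, schedule constants, and names below are illustrative choices, not from the text; a deterministic-annealing variant would instead iterate analytic thermal averages at each temperature.

```python
import math, random

def anneal(energy, x0, t0=2.0, t_min=1e-3, cooling=0.999, step=0.5, seed=3):
    """Minimal simulated annealing: propose a random move, always accept
    downhill, accept uphill with probability exp(-dE/T), cool T geometrically.
    Tracks the best configuration ever visited."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        x_new = x + rng.gauss(0.0, step)
        e_new = energy(x_new)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# A double-well energy: local minimum near x = 1.3, deeper global
# minimum near x = -1.5 -- the "correlated minima" landscape of the text.
f = lambda x: x**4 - 4 * x**2 + x
bx, be = anneal(f, x0=2.0)
print(bx, be)          # should settle into one of the wells
assert be < f(2.0)
```

The high-temperature phase lets the walker hop between wells (the smeared landscape of Figure 11.23); the low-temperature tail is essentially greedy descent into whichever basin it ends up in.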
Neural networks have been shown to perform poorly in practice on the TSP [Wilson:88a], but we found them excellent for the formally equivalent load-balancing problem in Section 11.1. This is now
understood from the fact that the simple neural networks used in the TSP [Hopfield:86a] used many redundant neural variables, and the difficulties reported in [Wilson:88a] can be traced to the role
of the constraints that remove redundant variables. The neural network approach summarized in Section 11.1.6 uses a parameterization that has no redundancy and so it is not surprising that it works
well. The elastic network can be viewed as a neural network with some constraints satisfied exactly [Simic:90a]. This can also be understood by generalizing the conventional binary neurons to
multistate or Potts variables [Peterson:89b;90a;93a].
Moscato developed several novel ways of combining simulated annealing with genetic algorithms [Moscato:89a;89c;89d;89e] and showed the power and flexibility of these methods.
Next: An Improved Method Up: 11 Load Balancing and Previous: Applications and Extensions
Guy Robinson
Wed Mar 1 10:19:35 EST 1995 | {"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node259.html","timestamp":"2014-04-19T06:52:23Z","content_type":null,"content_length":"8607","record_id":"<urn:uuid:ba9254c1-6b63-4975-a237-081c94508c0a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fort Mcdowell Math Tutor
Find a Fort Mcdowell Math Tutor
...I love being a part of that process!I am a certified fifth grade teacher, working currently in the Mesa school district. I have taught students from second through sixth grade in my time with
the district. My degree from NAU included a dual major with both elementary and special education.
16 Subjects: including prealgebra, algebra 1, geometry, reading
...I spent the last year as a teacher at Scottsdale Preparatory Academy, a 5-12 grade public charter school in northern Scottsdale. I have tutored students of all ages. I have a bachelor's degree
in music education from Arizona State University (summa cum laude). Upon graduation I was recognized ...
29 Subjects: including prealgebra, piano, GED, English
...My philosophy on tutoring-teaching is to make sure students are learning to think logically and for themselves.Qualifications for tutoring algebra students include: 1. Master's Degree in
Mathematics from Youngstown State University. 2. Have taught children at all levels from 8th grade through 12th, and also college level courses at YSU. 3.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...Singapore uses visual methods to teach why math works. I feel this helps a student to understand the math behind the problem and helps them to learn. I live in the North Phoenix area by the
Paradise Valley mall.
11 Subjects: including discrete math, algebra 1, algebra 2, prealgebra
...I love tutoring statistics and have had the opportunity of helping many students succeed in their stats courses, and will continue to do so. I am a registered nurse and successfully completed
my Bachelors of Science in Nursing program. I am fully aware of the material and content covered in the BSN undergraduate program.
10 Subjects: including probability, statistics, public speaking, nursing | {"url":"http://www.purplemath.com/Fort_Mcdowell_Math_tutors.php","timestamp":"2014-04-16T10:23:06Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:10baf795-04bb-431b-9cda-64c0370bb456>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
coin - prob distribution
May 22nd 2006, 06:50 PM #1
May 2006
coin - prob distribution
I need help with the following question:
A coin is biased to show heads twice as often as it shows tails. You toss this coin 3 times and win 1 dollar each time it shows tails but lose 50 cents each time it shows heads.
a) What is the probability distribution of X: the amount of money you could win or lose in 3 tosses?
b) On average, how much money would you win or lose each time you toss this coin 3 times?
c) Express the maximum amount of money you could win in 3 tosses of this coin as a Z-score?
a) X{-1.50, 0, 1.50, 3.00}, p(X) {0.2963, 0.4444, 0.2222, 0.037}
b) mu = 0
c) Z=2.45
I need help with the following question:
A coin is biased to show heads twice as often as it shows tails. You toss this coin 3 times and win 1 dollar each time it shows tails but lose 50 cents each time it shows heads.
a) What is the probability distribution of X: the amount of money you could win or lose in 3 tosses?
Let $x$ represent the number of heads. Then,
$\left\{ \begin{array}{cc}x&P(x)\\0&.037\\1&.2222\\2&.4444\\3&.2963\end{array}\right.$
probability of
losing 1.50 is .2963
gaining nothing is .4444
winning 1.50 is .2222
winning 3 is .037
but how are those numbers/probabilities calculated?
but how are those numbers/probabilities calculated?
you deal with binomial distribution. The probability that the result X happens exactly k-times is with your problem:
$P(X=k)={3\choose k} \cdot \left({2\over3}\right)^k \cdot \left(1-{2\over3}\right)^{(3-k)}$
Plug in the values 0, 1, 2, 3 for k and you'll get the probabilities which ThePerfectHacker had already calculated.
For instance: k = 0:
$P(X=0)={3\choose 0} \cdot \left({2\over3}\right)^0 \cdot \left(1-{2\over3}\right)^{(3-0)}$ = $1 \cdot 1\cdot \left({1\over3}\right)^{3}={1 \over 27} \approx .037$
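All three parts of the exercise can be checked numerically from this binomial formula. A short Python sketch (variable names are mine; it reproduces the poster's answers mu = 0 and Z = 2.45):

```python
from math import comb, sqrt

p_head = 2 / 3                        # heads twice as likely as tails
dist = {}                             # winnings -> probability over 3 tosses
for k in range(4):                    # k = number of heads
    win = 1.0 * (3 - k) - 0.5 * k     # +$1 per tail, -$0.50 per head
    dist[win] = comb(3, k) * p_head**k * (1 - p_head)**(3 - k)

mean = sum(w * p for w, p in dist.items())
var = sum((w - mean) ** 2 * p for w, p in dist.items())
z_max = (max(dist) - mean) / sqrt(var)

print(dist)     # part a: {3.0: .037, 1.5: .2222, 0.0: .4444, -1.5: .2963}
print(mean)     # part b: ~0 (each toss has expected gain 1/3 - 0.5*2/3 = 0)
print(z_max)    # part c: ~2.449
```

Note the per-toss expectation is exactly zero, which is why the 3-toss mean is zero, and the standard deviation over 3 tosses is sqrt(3/2) ≈ 1.225, giving Z = 3.00/1.225 ≈ 2.45 for the maximum win.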
May 22nd 2006, 07:38 PM #2
Global Moderator
Nov 2005
New York City
May 22nd 2006, 08:03 PM #3
May 2006
May 23rd 2006, 05:38 AM #4 | {"url":"http://mathhelpforum.com/statistics/3073-coin-prob-distribution.html","timestamp":"2014-04-19T19:44:51Z","content_type":null,"content_length":"41374","record_id":"<urn:uuid:6b92f3d6-6931-4a31-87fb-9c9ed3114467>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Washington Precalculus Tutor
...I believe that there is a way to learn math for everyone and I look forward to finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm
happy to help you remember the fundamentals. I feel very strongly about help students succeed in math be...
22 Subjects: including precalculus, calculus, geometry, GRE
Dear Prospective Tutee, Get ready to learn, have fun, and gain confidence in your ability to do math—and raise your grades too! I offer tutoring sessions for all high school math subjects—from
pre-algebra to AP calculus. I have helped to significantly improve students' scores and grades (as much as from an F to an A) in high school math subjects for three years now.
15 Subjects: including precalculus, chemistry, calculus, geometry
...This gives me the strong background necessary to teach precalculus. I have also been a math tutor through college, teaching up to Calculus-level classes. My tutoring style can adapt to
individual students and will teach along with class material so that students can keep their knowledge grounded.
11 Subjects: including precalculus, chemistry, geometry, algebra 2
...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a
system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including precalculus, calculus, physics, GRE
...Source: www.nsf.gov. This means that the importance of understanding Mathematics can lead to more opportunities! Imagine, hiring a tutor that cares about your success and the money in your
13 Subjects: including precalculus, chemistry, physics, algebra 2 | {"url":"http://www.purplemath.com/washington_navy_yard_precalculus_tutors.php","timestamp":"2014-04-20T07:04:16Z","content_type":null,"content_length":"24202","record_id":"<urn:uuid:fea6a3e9-f2a2-474a-b26c-5925daa48e91>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Motion problem - Shock drop ride
1. The problem statement, all variables and given/known data
A ride at a theme park is a vertically ascending/descending ride. The total mass of the 10-person carriage is 750 kg (people included). The ride rises to the top until stationary, waits
for 2 seconds, and then drops at a rate of 10 m s^-2 for 3.5 m. When near the bottom of the ride, the carriage's brakes are applied for 0.8 m.
a) How long is the carriage in freefall until the brakes are applied?
2. Relevant equations
Not too sure, but I think you'll probably require the constant-acceleration (suvat) equations.
3. The attempt at a solution
To be honest, I have no idea where to begin.
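A place to begin for part (a): a short worked sketch, assuming the drop starts from rest so the constant-acceleration relation s = ut + (1/2)at² reduces to s = (1/2)at²:

```python
from math import sqrt

a = 10.0   # m/s^2, given downward acceleration during the drop
s = 3.5    # m, distance fallen before the brakes are applied

# s = (1/2) a t^2 with initial speed u = 0  =>  t = sqrt(2 s / a)
t = sqrt(2 * s / a)
v = a * t  # speed when the brakes engage (v = u + a t), useful for later parts

print(f"t = {t:.3f} s, v = {v:.2f} m/s")   # t ≈ 0.837 s, v ≈ 8.37 m/s
```

So the carriage is in "freefall" (at the stated 10 m s^-2) for about 0.84 s before the 0.8 m braking section begins.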