Mike McIlveen on 01 Apr 10
Teaching the world mathematics for free.
My blog. Contains intuitive explanations of mathematics topics ranging from elementary to undergraduate. Discusses the integration of computers in teaching math. Lots of tutorials: GeoGebra, WordPress blogging, Geometer's Sketchpad, etc.
Maggie Verster on 13 Oct 09
Great directory of tutorial textbook style lessons to point your learners to.
Darren Kuropatwa on 20 Jan 10
Interactive Mathematics uses LiveMath, Flash and Scientific Notebook to enhance mathematics lessons.
Topics range from grade 8 algebra to college-level Laplace Transformations.
maiteg on 26 Aug 11
Didn't find what you are looking for on this page? Try search: This algebra solver can solve a wide range of math problems. (Please be patient while it loads.) Easy to understand math lessons on
DVD. See samples before you commit. (Well, not really a math game, but each game was made using math...)
Rashmi Kathuria on 28 May 09
Interactive, animated maths dictionary for kids with over 600 common math terms explained in simple language. Math glossary with math definitions, examples, math practice interactives,
mathematics activities and math calculators. © Jenny Eather 2007.
Maggie Verster on 28 Aug 09
Lots of real life interactive examples in between....
Cryptology ePrint Archive: Report 2011/553
Publicly Verifiable Proofs of Sequential Work
Mohammad Mahmoody and Tal Moran and Salil Vadhan

Abstract: We construct a publicly verifiable protocol for proving computational work based on collision-resistant hash functions and a new plausible complexity assumption regarding the existence of ``inherently sequential'' hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled ``puzzle'' P \gets D_n, where n is the security parameter and D_n is the distribution of the puzzles, a corresponding ``solution'' can be generated using N evaluations of the sequential hash function, where N > n is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as \Omega(N) \emph{sequential} evaluations of the hash function after receiving P. Thus, valid solutions constitute a ``proof'' that \Omega(N) parallel time elapsed since P was received. Solutions can be publicly and efficiently verified in time poly(n)·polylog(N). Applications of these ``time-lock puzzles'' include noninteractive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution D_n) and universally verifiable CPU benchmarks.
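For intuition only, here is the naive sequential-evaluation baseline in Python (an illustration, not the paper's construction; the hash choice and interface are assumptions of this sketch). Iterating a hash is inherently sequential, but this naive verifier must redo all N steps, which is exactly the cost the paper's construction reduces to poly(n)·polylog(N):

import hashlib

# Naive hash-chain time-lock: solving takes N dependent hash evaluations.
def solve(puzzle: bytes, N: int) -> bytes:
    x = puzzle
    for _ in range(N):
        x = hashlib.sha256(x).digest()  # each step needs the previous output
    return x

# The naive verifier is as slow as the solver; avoiding this is the point
# of the paper's depth-robust-graph construction.
def verify(puzzle: bytes, N: int, solution: bytes) -> bool:
    return solve(puzzle, N) == solution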
Our construction is secure in the standard model under complexity assumptions (collision-resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying
primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non-interactive in the random
oracle model using the Fiat-Shamir Heuristic.
Our construction makes a novel use of ``depth-robust'' directed acyclic graphs---ones whose depth remains large even after removing a constant fraction of vertices---which were previously studied for
the purpose of complexity lower bounds. The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO `11) for time-lock puzzles in the random oracle model, which showed
that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.
Category / Keywords: cryptographic protocols / combinatorial cryptography, hash functions, random oracles
Date: received 6 Oct 2011, last revised 17 Mar 2013
Contact author: mahmoody at gmail com
Available format(s): PDF | BibTeX Citation
Note: Polished the proofs a bit (by combining the two hash properties).
Version: 20130318:035942 (All versions of this report)
... by: Sherman
Shum and Andrew - the number of cells in the chain does NOT have to be even. See Figure 22.1 on page 62 of "The Logic of Sudoku" for an example with 5 cells: 78, 86, 65, 53, 37 which removes 7 from
any cell that sees both ends of this chain. Also, all Y-Wing chains, which are a subset of XY chains, have an odd number of cells.
... by: Shum
This isn't stated, but it seems to me a necessary condition is the length of the chain must be even (referring to the number of cells, not the number of links). All the examples here have length 4
and the alternate coloring seems to imply this restriction. Agree?
Andrew Stuart writes:
That is a good observation but I fail in my documentation to include longer examples. I think I was hung up on finding short simple ones and now I realise that I should show they can be several
lengths. I also believe they will contain an even number of cells in total. I will have a hunt in my 2014 stock for longer ones and also try and find a crazy super length one, there has to be a
... by: suneet shrotri
I have discovered a new method to eliminate a possibility using 2 chains with one common end, where the other 2 ends are part of a set. If the common end is not X, then in some cases the other ends come to have XY possibilities, and the possibility X can be eliminated from the other remaining members of the set. I am sure that no other method on this website can do this.
... by: gg fuller
I'm unsure about the description as it mentions it is a more encompassing version of "Y-Wing Chains" and also refers to "Y-chains". Since there are no such strategies described on this site, I don't
know if that refers to "Y-wing" or "X-chains" or both. If I just look for XY-chains will that catch all Y-Wings and X-chains, or is X-chains separate? X-chains seems to require a loop to make any
Andrew Stuart writes:
Hi, yes there is a page, click here but I subsumed this into the more general chaining strategies and that opening line needs to be edited. This goes way way back to the early days.
... by: Guy Renauldon
The Exclamation Mark Method
This is to answer François Tremblay's and John Robinson's question.
Yes François, there is a trick to discover an XY chain.
Recently, I settled on a kind of method which I name the “Exclamation Mark Method”.
That is what it is:
At first I select a bi-value cell. Not just any one, but rather one whose two candidates are among the digits missing the most.
Let’s say the cell I select contains “a” and “b” as candidates.
Then I make a double bet.
First bet: “a” true,
Second bet: “b” true.
Regarding the first bet, I draw a small vertical line “|” just under the “a” of the bet cell. Then I consider the other bi-value cells only and I mark all the candidates deduced by this first bet with a “|”, drawn just under the candidates.
Regarding the second bet “b” I do the same but the mark will be a small point “.”under the “b” in the bet cell.
Most of the time, sooner or later, if an XY chain exists, I obtain a cell containing a candidate marked with an exclamation mark “!” (this is the reason for the name of my “method”, although the exclamation mark can be reversed, with the point on top).
So a candidate, let’s say “k”, will be marked with an exclamation mark. It means it is true in both bets. So “k” is a true digit. (Although I can’t say anything about “a” or “b”.) Most of the time, this true digit will solve the Sudoku, when the Sudoku is not too difficult.
But I do not consider the job as finished. I have to find my XY chain.
Let’s see the simplest case at first: the cell where we have the “!” is a bi-value cell too. I consider the other candidate in that cell, the one without the “!”, let’s say “m”. “m” has to be eliminated, so “m” must see two other bi-value cells containing an “m”, which will be the two ends of my XY chain (to understand that, you have to know what an XY chain is exactly). Then I consider each end of my chain and I follow the reverse itinerary from each end. I draw the links between the bi-value cells really, with a pen. Doing so I’ll automatically reach my double bet cell. And that’s it, I’ve drawn my XY chain! Very easily, without the headaches I suffered before I applied that method. And doing so, you can find very long XY chains, containing ten links for example, or more.
Note that you can find a shorter chain which does not go through the double bet cell, but with the same ends. The reason is that more than one XY chain exists on a same grid most of the time.
If the cell which sees the two ends of the chain is not a bi-value cell, it is a little bit more complicated. But the “method” works too. If the cell contains three candidates, you will have two candidates marked with an exclamation mark. Then the third candidate can be removed. If a cell contains four candidates, you’ll find three exclamation marks in that cell. Generally speaking, if the cell contains N candidates, you’ll find N-1 candidates with an exclamation mark. The other one left without the “!” can be removed. This chain does not give a true digit directly.
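In code terms, the double bet amounts to two candidate propagations whose forced placements are intersected. Here is a minimal Python sketch of that idea (the grid representation is an assumption of the sketch, and the propagation follows any cell forced down to a single candidate, a slight generalization of marking bi-value cells only; contradictions are ignored for simplicity):

def peers(cell):
    # all cells sharing a row, column or box with `cell` on a 9x9 grid
    r, c = cell
    row = {(r, j) for j in range(9)}
    col = {(i, c) for i in range(9)}
    box = {(3 * (r // 3) + i, 3 * (c // 3) + j)
           for i in range(3) for j in range(3)}
    return (row | col | box) - {cell}

def propagate(cands, cell, digit):
    # place `digit` in `cell`, then follow every placement that becomes
    # forced when a peer drops to a single remaining candidate
    cands = {k: set(v) for k, v in cands.items()}
    placed, stack = {}, [(cell, digit)]
    while stack:
        cur, d = stack.pop()
        placed[cur] = d
        cands[cur] = {d}
        for p in peers(cur):
            if p not in placed and d in cands[p]:
                cands[p].discard(d)
                if len(cands[p]) == 1:  # e.g. a bi-value cell: other value forced
                    stack.append((p, next(iter(cands[p]))))
    return placed

def exclamation_marks(cands, bet_cell):
    # `cands` maps (row, col) -> set of candidate digits; bet_cell is bi-value
    a, b = sorted(cands[bet_cell])
    bet_a = propagate(cands, bet_cell, a)  # first bet, the "|" marks
    bet_b = propagate(cands, bet_cell, b)  # second bet, the "." marks
    # a digit placed in the same cell by both bets earns the "!"
    return {c: d for c, d in bet_a.items() if bet_b.get(c) == d}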
Sorry, I’ve been a little bit long in my explanation, and please excuse my English, being French (perhaps François Tremblay will understand me better, his name sounding very French…).
I’d like to add some remarks, if Andrew Stuart gives me some more space.
1/ An XY chain is in fact a multi-value X-Cycle in three dimensions, containing bi-value cells only from one end to the other. The strong links are located inside the bi-value cells. The other links can be either strong or weak, except the links between “m”, which must be weak. In fact it is a special AIC, with bi-value cells only.
3/ This “method” is not mine… in fact it is the well-known Forcing Chain Strategy. But the exclamation mark trick is mine, I think.
Many thanks to Andrew Stuart if he accepts to publish my comment, which maybe he will find a bit too long and not very clear (it is clear to me, but maybe not to all the readers, due to my bad English…).
Guy Renauldon
... by: John Robinson
I have the same question as Francois Trembly on 28-Oct-2009
... by: S.Monta
First i would like to thank you very much for your great website witch made me dramatically improve my Sudoku skills.
I have a question : could the chain 56-67-67-78-83-89-96 be an XY-Chain?
... by: Marshal L. Merriam
I think I have another type of xy chain. In a sudoku I'm doing now (not one of these examples), I find the sequence 28-87-73-36-68-82 where the first pair and the last are the same cell! If its value
were 8, then xy chain logic demands that it also be 2, a contradiction. No such contradiction prevents it being 2.
Alternatively we could argue that the xy chain demands that at least one of the endpoints be a 2. Since they are the same, the cell must be a 2.
... by: Marshal L. Merriam
I now understand Anton's confusion. It stems from the explanation of example 1. I would add one more bullet:
If C2 is not 5 then it must be 6. A1 cannot be 6, A5 cannot be 2 and A7 cannot be 9. Ergo, if C2 is not 5, then A7 must be 5. As noted in the first bullet, if A7 is not 5 then C2 must be 5. So either
A7 or C2 must be 5.
When stated this way, the chain does not require locked sets. If C2 is not 5, then no other cell in box 1 can be 6, so even if there were a 6 at B3(say), all of the bullets would still hold.
... by: Matt Lala
Something I read on a different site (if I understand it correctly) is that when you reach the end of the chain, you use the "leftover" value to make your elimination. In the first example, the last green arrow is to a 6, and the leftover in that cell is a 5, which is what gets eliminated. If the last link had been to the 5, then the leftover is 6, and you cannot use that to end the chain and eliminate the 5's. Not sure what conditions are needed for the start. I may be wrong on this.
Francois, I don't think you need a solver, but it does help to have a program that highlights all of them. I glance at the 'busiest' groupings of bivalue cells and then pick one and just start
driving. If a fork presents itself I'll choose whichever option seems to steer me back towards the start of the chain.
Anton, I don't think they have to be locked, they just need to be bivalue. Try plugging in a 2 at the start of that chain in the 2nd example, and follow the links, and you'll see how the green cell
at the end becomes a 6 (despite the unlocked cells used). And plugging in a 6 at the start makes the eliminations pretty obvious.
... by: Francois Tremblay
I understand the logic but my problem is how do you spot this without the help of a solver? Visually, the start and end of such a chain are tough to find. Do you go through all the grid to start the
chain at all possible bi-value cells? In the "simple" cases above, you have respectively 21 & 22 bi-value cells (even your book's example on page 62, figure 22.1 shows 21 of them) which would
represent a lot of permutations that only a computer could run through. As a human solver, is there a trick to find those chains?
... by: Anton Delprado
Maybe I am missing something but it looks like 4E and 8E are not locked for the value 6 because 9E is potentially 6 as well.
An XY-Chain is possible from this although it would have the pivot chain below:
3A - 3C - 3H - 6H - 6F - 5F - 9F
Physics Forums - View Single Post - tutorial on the Christoffel equation used in relativity
It's kind of a matter of definition. It has been traditional to say that the Christoffel symbols are not tensors, because they don't transform like tensors. If you're of the Wald school, however, you
will say that the Christoffel symbols are tensors; it's just that there is a different Christoffel tensor for each coordinate system.
In more detail: the Christoffel symbols are defined as the difference between the Levi-Civita connection and a fiducial connection given by the partial derivatives in some coordinate system, ∇ = ∂ + Γ. The difference of two connections is a tensor. However, the fiducial connection ∂ depends on the coordinate system (since it's the partial derivatives of some coordinates), so the Christoffel symbols do too. For this reason, they don't obey the ordinary transformation law for tensors, because that assumes you are transforming the components of a single tensor field; with Christoffel symbols, if you change the coordinates, you also change which tensor field you're working with.
Re: Bafflers?
Good! This problem was interesting, but it didn't feel like when I solve one of your probability questions.
Re: Bafflers?
Probability questions are much harder. Just imagine all those schleps over at the [deleted by admin] stumbling over problems everyday. Having no way to check their answers.
Re: Bafflers?
Argggh. Didn't look at the post early enough.
Re: Bafflers?
Now why do they have no way to check their work?
Re: Bafflers?
Re: Bafflers?
All those schleps over at the [deleted by admin] stumbling over problems every day.
Re: Bafflers?
You should really tell me what the [deleted by admin] is replacing. Email?
Re: Bafflers?
A phoney forum.
Re: Bafflers?
Re: Bafflers?
Now why the heck would the admin delete it if he wanted everyone to see it?
Re: Bafflers?
Delete what?
Re: Bafflers?
[deleted by admin]
Re: Bafflers?
Well, I really don't know. Maybe the admin can explain.
Re: Bafflers?
I think he meant some other forum. But that was not the point of the question.
Re: Bafflers?
Well, by what you told me earlier, they don't like using CASs there.
Re: Bafflers?
That is one problem. They are unable to check their results because they are unable to program.
Re: Bafflers?
Interesting people. Interesting people, indeed.
Re: Bafflers?
New problem:
An ion cannon can shatter meteors that weigh 2000 kg or more into 4 pieces each of equal weight. It can shatter meteors that are smaller than 2000 kg into 5 pieces of equal weight. Meteors that are
equal to or smaller than 10 kg are vaporized. How many shots are required to vaporize a 20000 kg meteor?
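One reading of the rules (each shot targets a single piece, and a shot at a piece of 10 kg or less vaporizes it outright) gives a quick recursive count in Python; the reading itself is an assumption:

def shots(w):
    if w <= 10:
        return 1                     # vaporized by a single shot
    if w >= 2000:
        return 1 + 4 * shots(w / 4)  # shatters into 4 equal pieces
    return 1 + 5 * shots(w / 5)      # shatters into 5 equal pieces

print(shots(20000))  # 2501 under this reading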
Re: Bafflers?
Hi bobbym
Re: Bafflers?
Re: Bafflers?
Are you sure?
Re: Bafflers?
Re: Bafflers?
New Problem:
C has this problem: he has the curve x^2 - x + 1 and a point A on it. He also has two other points, C at (0,0) and B at (0,6). He would like to maximize the ratio of the distances AC and AB; in other words, he is trying to maximize b/a (see the diagram). Where should A be, and what is the maximum? Of course A must remain on the curve and be in the first quadrant.
A says) 2
B says) How did you get that? I got something entirely different.
C says) Thanks A.
D says) What does it take to do this problem.
E says) I got it.
Re: Bafflers?
Hi bobbym
Last edited by anonimnystefy (2012-09-19 06:48:24)
Re: Bafflers?
Hi anonimnystefy;
Lionville Statistics Tutor
Find a Lionville Statistics Tutor
...I hold a B.S. in Mathematics from Rensselaer Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over 10 years tutoring
experience and over 4 years professional teaching experience. I received 800/800 on the GRE math sect...
58 Subjects: including statistics, reading, geometry, biology
...Most recently, I have added a Masters Degree in Applied Statistics to my repertoire, so I am also qualified to tutor statistics at any level. I have acted as a "study buddy", a homework helper,
and also a provider of mathematical enrichment. I can help you, or your child with the simple, weekly assignments, or I can prepare additional materials that we can work on together.
16 Subjects: including statistics, French, calculus, algebra 2
I have taught middle school and high school mathematics in northern Virginia for 8 years. I have tutored privately most of that time as well. I know that everyone learns in a different way and I
try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling.
28 Subjects: including statistics, calculus, ASVAB, algebra 1
...Solve problems involving basic geometry, rectangles, and triangles. 10. Solve probability and statistics problems. I successfully obtained a B.S. in Business Administration. Related classes include: 1.
27 Subjects: including statistics, calculus, geometry, algebra 1
...I am a good communicator, and can easily convey 1) what the problem is asking 2) the methodology to solve it, and 3) why. I will be happy to work with students that need help, want help, and
are willing to work to gain the understanding of how to proceed.I hold a Ph.D. in physics and well versed...
11 Subjects: including statistics, chemistry, algebra 1, algebra 2
Math::VarRate - deal with linear, variable rates of increase
version 0.100000
Math::VarRate is a very, very poor man's calculus. A Math::VarRate object represents an accumulator that increases at a varying rate over time. The rate may change, it is always a linear, positive
rate of change.
You can imagine the rate as representing "units gained per time." You can then interrogate the Math::VarRate object for the total units accumulated at any given offset in time, or for the time at
which a given number of units will have first been accumulated.
my $varrate = Math::VarRate->new(\%arg);
Valid arguments to new are:
rate_changes - a hashref in which keys are offsets and values are rates
starting_value - the value at offset 0 (defaults to 0)
This method returns the value of the accumulator at offset 0.
my $offset = $varrate->offset_for($value);
This method returns the offset (positive, from 0) at which the given value is reached. If the given value will never be reached, undef will be returned.
my $value = $varrate->value_at($offset);
This returns the value in the accumulator at the given offset.
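Conceptually, the accumulator is piecewise linear between rate changes. Here is a sketch of those semantics in Python (an illustration of the idea only, not the module's implementation):

class VarRate:
    def __init__(self, rate_changes, starting_value=0.0):
        self.changes = sorted(rate_changes.items())  # [(offset, rate), ...]
        self.start = starting_value

    def value_at(self, offset):
        value, pos, rate = self.start, 0.0, 0.0
        for at, new_rate in self.changes:
            if at >= offset:
                break
            value += rate * (at - pos)
            pos, rate = at, new_rate
        return value + rate * (offset - pos)

    def offset_for(self, target):
        # first offset at which `target` is reached, or None if never
        if target <= self.start:
            return 0.0
        value, pos, rate = self.start, 0.0, 0.0
        for at, new_rate in self.changes:
            ahead = value + rate * (at - pos)
            if rate > 0 and ahead >= target:
                return pos + (target - value) / rate
            value, pos, rate = ahead, at, new_rate
        if rate > 0:
            return pos + (target - value) / rate
        return None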
Ricardo SIGNES <rjbs@cpan.org>
This software is copyright (c) 2013 by Ricardo SIGNES.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
Summary: J. LOGIC PROGRAMMING 1994:19, 20:1--679
A characterization of Static Discontinuity Grammars (SDGs), a logic grammar formalism due to Dahl, is given in this paper. A substructural logic sequent calculus proof system is given which is shown to be equivalent to SDGs for parsing problems, in the sense that a string of terminal symbols is accepted by a grammar if and only if the corresponding sequent is derivable in the calculus. One calculus is given for each of the two major interpretations of SDGs; the two calculi differ by only a small restriction in one rule. Since SDGs encompass other major grammar formalisms, including DCGs, the calculi serve to characterize those formalisms as well.
It is the authors' wish that no agency should ever derive military benefit from the publication
of this paper. Authors who cite this work in support of their own are requested to qualify similarly
the availability of these results.
Address correspondence to School of Computing Science, Simon Fraser University, Burnaby,
BC, Canada V5A 1S6.
Calculating VaR on the Jorion GBM Excel Sheet
Hi David,
When computing VaR on the Jorion-GBM Excel sheet, you used an Excel function to find the percentiles. There seem to be some discrepancies as to how this should be done if we do not have access to
Excel. In this case, for instance, how exactly would you use these 40 final prices to find the 90th, 95th and 99th percentile VaR? Should we use some sort of interpolation or just use
If there are 10 paths, I believe the 95th and 99th percentiles would both be the lowest price. Is this correct? Would the 90th percentile also be the lowest price or would it be the 9th out of
the 10 ranked prices?
Any advice on this topic would be greatly appreciated.
Hi Mike,
Again, you've hit on a topic with a history (um, congrats?). As I understand, there is no single superior method when the quantile/percentile draws from a discrete distribution. Dowd says this,
among others. In the case of loss n = 40, then technically valid answers to the question, what is the 90th percentile VaR? (assuming simple unweighted historical simulation), include:
* The 5th worst loss (36th best; Dowd's method), or
* The 4th worst loss (37th best, which I believe Jorion is still using), or
* Interpolation between the 4th and 5th (between 36th and 37th)
I like Dowd for this simple intuition: here the 90% VaR implies a 10% tail such that 40 * 10% = 4 losses are "in the tail." (i.e., worst, 2nd worst, 3rd worst, 4th worst = 4% of the total). Then,
the 5th worst allows us to emphasize the "worse than" aspect of VaR phrasing with "10% of the time we expect the loss to EXCEED the VaR."
So, my preference is to follow Dowd: the [(significance% * n) + 1]th worst loss; e.g.,
if n = 40, the 90% VaR = 10%*40 + 1 = 5th worst (i.e., 36th best)
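To make the three conventions concrete, here is a small illustrative Python sketch (synthetic losses; only the indexing matters):

import numpy as np

# simple unweighted historical simulation on n = 40 losses, sorted worst-first
losses = np.sort(np.random.default_rng(0).normal(size=40))[::-1]

alpha = 0.10                  # 90% VaR -> a 10% tail
k = int(alpha * len(losses))  # 4 losses sit strictly in the tail
var_dowd = losses[k]          # 5th worst (Dowd): tail losses EXCEED the VaR
var_jorion = losses[k - 1]    # 4th worst (Jorion's convention)
var_interp = (var_dowd + var_jorion) / 2  # interpolating between the two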
In the case of n=10 and 95% and 99% VaR, in my opinion, that is not ambiguous: any VaR with confidence ABOVE 90% would return the lowest price (ie., I totally agree with you); e.g., with n = 10,
even 91% VaR returns the lowest! Because unlike the above where the 10% quantile is "falling exactly in the crack between" two loss points, under n = 10, 91% or 95% or 99% all fall "on top of" or
"within" the lowest DISCRETE point.
What about the exam? GARP is very well aware of the inexactitude of 36th or 37th or interpolation between in the case of 90% and n = 40 (you may note they issued a revision to the practice
question per our input on this issue). So if they ask about the first case, they will allow a 36th or 37th answer. (I've requested they "settle" on Dowd's methodology and synchronize Jorion to
Dowd but not yet). Bottom line on the ambiguous case: GARP knows to recognize either 36th or 37th as valid so they won't force your choice.
I hope that helps, David
That makes perfect sense.
Hi David,
Again, this is from Schweser, but apparently it is a question and answer from an old exam and I was hoping you could explain the answer.
They state that there are 300 returns and want the 99% VaR. They chose the 3rd from the bottom of the list instead of the 4th from the bottom of the list. From our discussion above, it sounds like we should have taken the 4th.
I have emailed GARP about this previously and they do not want to answer my questions. They simply say that the question will provide all of the information needed.
For one of the largest topics in the exam this is a pretty crappy answer.
Can you think of any way they they could subtly tell us which one of these answers would be considered correct?
Hi Mike,
Right, I have repeatedly asked GARP to show some leadership on this methodological point (just use Dowd's method). Per Dowd, I prefer 4th from bottom if n=300 and 99% VaR.
GARP is well aware of the discrepancy; as I understand, they are going to craft questions to allow for either answer. So, going forward, under this type of question, you would not see both 3rd
and 4th. Hopefully, you would just see 4th from bottom.
(As I've argued, Dowd is the assigned reading and Dowd would give 4th in this case.)
Thanks, David
Math problems of the week: Circles in 6th grade Everyday Math vs. Singapore Math
In honor of Pi day, I wanted to find some comparison problems involving circumferences and areas of circles. 6th grade Singapore Math has a whole chapter devoted to "Circles," so I turned next to the 6th grade Everyday Math curriculum. To my surprise, there isn't a single problem in the entire curriculum involving circle area and circumference. Only within the sections on "data" does the circle
make a brief cameo--but without its transcendental straight man:
I. The only circle-related problems in the 6th grade Everyday Math workbooks (Student Math Journal, Volume 1, pp. 41-42; p. 190):
II. The final problem set in the Circles chapter in the Singapore Math Primary Mathematics 6B Workbook
(pp. 36-37):
III. Extra Credit
Is it right to prefer "data" to Pi?
Should circles in 6th grade be restricted to straightforward, single-step Percent Circle and protractor problems?
5 comments:
EM does areas of circles in 5th grade (and I think 4th; our kid had it introduced mid-second semester of 4th, and they spiraled back to it in 5th). It's in Lesson 10.9 of EM's 5th grade Student Math Journal. And volumes of circular cylinders and of cones are in 11.3 and 11.4.
But, they don't have anything like the problems in the Singapore curriculum. Just a couple simple area problems, a couple simple cylinder problems and a couple cones and done.
I'm surprised they don't spiral back to it in 6th.
How old is the EM book? Aren't we approaching our seventh billion soon?
California's 6th grade math standards specifically call for students to be able to calculate areas and circumferences of circles, so I'm surprised that EDM won state approval without the topic
being in their 6th grade book.
EM's 5th grade Math Journal starts off pretty well, I think, by having kids measure various circles and then calculating C/d to get pi. I remember suggesting to the 4th grade teachers that this
would be a good thing to do in the gym, since there are three large circles painted on the floor, and it would only take some string for the kids to do the project.
Then, EM has one page with one circle on a grid which kids use to calculate pi (and do data crunching with the circles all the class has measured in the previous task.)
They do something similar for the area, where they trace circles onto graph paper and count squares to estimate area. Then the formula is presented and a problem is offered. Kids then compare the
calculation to their graph-paper estimates. They calculate 3 more circles before being asked whether graph paper or the formula are easier.
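For what it's worth, the two approaches are easy to compare in a few lines of Python; this sketch (radius and grid resolution chosen arbitrarily) counts grid cells whose centers fall inside the circle:

import math

def grid_estimate(r, cells_per_unit=100):
    # estimate the area by counting grid cells inside the circle
    h = 1.0 / cells_per_unit
    n = int(2 * r * cells_per_unit)  # grid covers the bounding square
    inside = sum(1 for i in range(n) for j in range(n)
                 if ((i + 0.5) * h - r) ** 2 + ((j + 0.5) * h - r) ** 2 <= r * r)
    return inside * h * h

r = 3.0
print(grid_estimate(r))  # close to, and slowly converging on, the formula
print(math.pi * r ** 2)  # 28.2743..., one line with the formula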
I suppose if a student says that tracing onto graph paper and estimating the answer by counting squares is better, the teacher will validate that response. I wonder if you can design an airplane
or a pipeline by calculating areas with tracings on graph paper?
The most ironic thing is that the Singapore problems look interesting and fun to work on, while the EM questions are deathly boring.
Graduate Programs
Graduate Programs in Computational Science
Chapman University MS in Computational Science
EPFL, Lausanne, Switzerland Master Program in Computational Science and Engineering
ETH, Zurich, Switzerland Rechnergestützte Wissenschaften (CSE)
George Mason University Computational and Data Sciences
Georgia State University M.S. in Scientific Computation
Helsinki University of Technology Computational Science and Engineering
MSc program for most fields of Engineering
KTH, Stockholm, Sweden International Programme in Scientific Computing MS program
Massachusetts Institute of Technology Computation for Design and Optimization
Mississippi State University Computational Engineering
National Singapore University Computational Science
New York University (NYU) Masters Degree Program in Scientific Computing.
Nice Sophia Antipolis University Master of Science Program in Computational Biology and Biomedicine
Ohio University M.S. in Mathematics - Computational Track
Old Dominion University Certificate in Computational Science & Engineering
Pennsylvania State University High Performance Computing
HPC Graduate Minors in High Performance Computing, at the M.S. and Ph.D. levels.
Princeton University Program in Applied and Computational Mathematics
Purdue University Computational Science and Engineering.
MS and PhD programs.
Aachen Institute for Advanced Study in Computational Engineering Science (AICES)
RWTH Aachen University Master/Bachelor in Computational Engineering Science (CES)
Master Simulation Sciences (MS SiSc)
San Diego State University Computational Science
Seoul National University Computational Science and Technology
Stanford University Institute for Computational and Mathematical Engineering (ICME)
Computational Sciences
State University of New York Brockport This is an interdisciplinary, independent degree-granting program, with participation of several departments. Both undergraduate and graduate degrees are offered.
The program has core faculty plus members from other departments, and offers access to several parallel supercomputers.
Technische Fachhochschule Berlin, Computational Engineering
University of Applied Sciences Master's degree in Computational Engineering
Technische Universität Darmstadt Computational Engineering
Technische Universität München International Masters Program in CSE
Technischen Universität Braunschweig Computational Sciences in Engineering
Master's degree in CSE. PhD in participating departments.
Universität Erlangen-Nürnberg Computational Engineering
Bachelor's and Master's degrees
University of California, San Diego Computational Science, Mathematics, and Engineering
University of California, Santa Barbara Computational Science and Engineering
University of Colorado, Denver PhD Degree in Applied Mathematics with a Computational Mathematics Option
University of Houston Computational Sciences Initiative
Interdisciplinary graduate certificate program
University of Illinois, Chicago Computational Science and Applied Mathematics program.
PhD in Mathematics with Major in Computational Science cluster of Mathematical Computer Science program or any cluster in Applied Math program
University of Iowa Applied Mathematics and Computational Sciences Program
PhD in Interdiciplinary AMCS Program
University of Maryland, College Park Applied Mathematics and Scientific Computation Program (AMSC)
Interdisciplinary MA and PhD programs, with concentrations in Applied Mathematics and in Scientific Computation. Certificate in Scientific Computation also offered.
University of Minnesota Graduate Program in Scientific Computation
University of Texas, Austin Computational Science, Engineering, and Mathematics (CSEM)
University of Utah Computational Engineering & Science Graduate Program (CE&S)
University of Waterloo Master's in Computational Mathematics
Uppsala University Master Programme in Computational Science
Computational Science Undergraduate Programs
Australian National University Bachelor of Computational Science
George Mason University Computational and Data Sciences
National Singapore University Computational Science
Oregon State University Computational Physics
State University of New York Brockport Computational Sciences
SUNY-Brockport is currently the only full-scale undergraduate program.
Technische Universität Darmstadt Computational Engineering
Universität Erlangen-Nürnberg Computational Engineering
University of Waterloo Computational Mathematics
Computational Finance
Cornell University Financial Engineering Option.
Purdue University Computational Finance Program
University of Chicago Master of Science in Financial Mathematics
University of Michigan MS in Financial Engineering
University of Toronto Mathematical Finance
Listed from http://www.siam.org/students/resources/cse_programs.php
Higher-dimensional word problems with applications to equational logic
Results 11 - 20 of 32
"... Abstract – We generalize the notion of identities among relations, well known for presentations of groups, to presentations of n-categories by polygraphs. To each polygraph, we associate a track
n-category, generalizing the notion of crossed module for groups, in order to define the natural system o ..."
Cited by 3 (1 self)
Add to MetaCart
Abstract – We generalize the notion of identities among relations, well known for presentations of groups, to presentations of n-categories by polygraphs. To each polygraph, we associate a track
n-category, generalizing the notion of crossed module for groups, in order to define the natural system of identities among relations. We relate the facts that this natural system is finitely
generated and that the polygraph has finite derivation type. Support – This work has been partially supported by ANR Inval project (ANR-05-BLAN-0267).
"... Game semantics describe the interactive behavior of proofs by interpreting formulas as games on which proofs induce strategies. Such a semantics is introduced here for capturing dependencies
induced by quantifications in firstorder propositional logic. One of the main difficulties that has to be fac ..."
Cited by 3 (2 self)
Add to MetaCart
Game semantics describe the interactive behavior of proofs by interpreting formulas as games on which proofs induce strategies. Such a semantics is introduced here for capturing dependencies induced
by quantifications in firstorder propositional logic. One of the main difficulties that has to be faced during the elaboration of this kind of semantics is to characterize definable strategies, that
is strategies which actually behave like a proof. This is usually done by restricting the model to strategies satisfying subtle combinatorial conditions, whose preservation under composition is often
difficult to show. Here, we present an original methodology to achieve this task, which requires to combine advanced tools from game semantics, rewriting theory and categorical algebra. We introduce
a diagrammatic presentation of the monoidal category of definable strategies of our model, by the means of generators and relations: those strategies can be generated from a finite set of atomic
strategies and the equality between strategies admits a finite axiomatization, this equational structure corresponding to a polarized variation of the notion of bialgebra. This work thus bridges
algebra and denotational semantics in order to reveal the structure of dependencies induced by first-order quantifiers, and lays the foundations for a mechanized analysis of causality in programming
languages. Denotational semantics were introduced to provide useful abstract invariants of proofs and programs modulo cutelimination or reduction. In particular, game semantics, introduced in the
nineties, have been very successful in capturing precisely the interactive behaviour of programs. In these semantics, every type is interpreted as a game (that is as a set of moves that can be played
during the game) together with the rules of the game (formalized by a partial order on the moves of the game indicating the dependencies between them). Every move is to be played by one of the two
players, called Proponent and Opponent, who should be thought respectively as the program and its environment. A program is characterized by the sequences of moves that it can exchange with its
environment during an
"... In a seminal article, Kahn has introduced the notion of process network and given a semantics for those using Scott domains whose elements are (possibly infinite) sequences of values. This model
has since then become a standard tool for studying distributed asynchronous computations. From the beginn ..."
Cited by 2 (0 self)
Add to MetaCart
In a seminal article, Kahn has introduced the notion of process network and given a semantics for those using Scott domains whose elements are (possibly infinite) sequences of values. This model has
since then become a standard tool for studying distributed asynchronous computations. From the beginning, process networks have been drawn as particular graphs, but this syntax is never formalized.
We take the opportunity to clarify it by giving a precise definition of these graphs,
, 2009
Cited by 1 (0 self)
We introduce an explicit diagrammatic syntax for PROs and PROPs, which are used in the theory of operads. By means of diagram rewriting, we obtain presentations of PROs by generators and relations,
and in some cases, we even get convergent rewrite systems. This diagrammatic syntax is useful for practical computations, but also for theoretical results. Moreover, rewriting is strongly related to
homotopy theory. For instance, it can be used to compute homological invariants of algebraic structures, or to prove coherence results.
, 2009
Cited by 1 (0 self)
The primary aim of this work is an intrinsic homotopy theory of strict ω-categories. We establish a model structure on ωCat, the category of strict ω-categories. The constructions leading to the
model structure in question are expressed entirely within the scope of ωCat, building on a set of generating cofibrations and a class of weak equivalences as basic items. All object are fibrant while
free objects are cofibrant. We further exhibit model structures of this type on n-categories for arbitrary n ∈ N, as specialisations of the ω-categorical one along right adjoints. In particular,
known cases for n = 1 and n = 2 nicely fit into the scheme.
- In Workshop on Computer Algebra Methods and Commutativity of Algebraic Diagrams (CAM-CAD), 2009
Cited by 1 (1 self)
Polygraphs generalize to 2-categories the usual notion of equational theory, by describing them as quotients, modulo equations, of freely generated 2-categories on a given set of generators. In order
to work with morphisms modulo the equations, it is often convenient to orient the equations into a confluent rewriting system. In the case of a terminating system, confluence can be checked by
showing that critical pairs are joinable. However, the computation of the critical pairs is more complicated for polygraphs than for term rewriting systems: in particular, two left members of a rule
don’t necessarily have a finite number of unifiers. We advocate here that a more general notion of rewriting system should be considered instead, and introduce an operad of compact contexts in a
2-category, in which two rules have a finite number of unifiers. A concrete representation of contexts is proposed, as well as an unification algorithm for these.
"... Abstract. A wide variety of models for concurrent programs has been proposed during the past decades, each one focusing on various aspects of computations: trace equivalence, causality between
events, conflicts and schedules due to resource accesses, etc. More recently, models with a geometrical fla ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. A wide variety of models for concurrent programs has been proposed during the past decades, each one focusing on various aspects of computations: trace equivalence, causality between
events, conflicts and schedules due to resource accesses, etc. More recently, models with a geometrical flavor have been introduced, based on the notion of cubical set. These models are very rich and
expressive since they can represent commutation between any number of events, thus generalizing the principle of true concurrency. While they seem to be very promising – because they make possible
the use of techniques from algebraic topology in order to study concurrent computations – they have not yet been precisely related to the previous models, and the purpose of this paper is to fill
this gap. In particular, we describe an adjunction between Petri nets and cubical sets which extends the previously known adjunction between Petri nets and asynchronous transition systems by Nielsen
and Winskel. 1 1
"... Abstract – We introduce homotopical methods based on rewriting on higher-dimensional categories to prove coherence results in categories with an algebraic structure. We express the coherence
problem for (symmetric) monoidal categories as an asphericity problem for a track category and use rewriting ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract – We introduce homotopical methods based on rewriting on higher-dimensional categories to prove coherence results in categories with an algebraic structure. We express the coherence problem
for (symmetric) monoidal categories as an asphericity problem for a track category and use rewriting methods on polygraphs to solve it. The setting is generalized to more general coherence problems,
seen as 3-dimensional word problems in a track category. We prove general results that, in the case of braided monoidal categories, yield the coherence theorem for braided monoidal categories.
, 2007
Cited by 1 (1 self)
We prove that for any monoid M, the homology defined by the second author by means of polygraphic resolutions coincides with the homology classically defined by means of resolutions by free
ZM-modules.
"... We give a new combinatorial definition of a sort of weak !- category originally devised by J. Baez and J. Dolan in finite dimensional cases. Our definition is a mixture of both inductive and
coinductive definitions, and suitable for `computational category theory.' Keyword: weak n-category, bica ..."
Add to MetaCart
We give a new combinatorial definition of a sort of weak ω-category originally devised by J. Baez and J. Dolan in finite dimensional cases. Our definition is a mixture of both inductive and coinductive definitions, and suitable for `computational category theory.' Keyword: weak n-category, bicategory, tricategory, formalized mathematics
normal distribution
Monday, April 9th, 2012
A customer recently asked how to fit a normal (Gaussian) distribution to a vector of experimental data. Here’s a demonstration of how to do it.
Let’s start by creating a data set: 100 values drawn from a normal distribution with known parameters (mean = 0.5, variance = 2.0).
int n = 100;
double mean = .5;
double variance = 2.0;
var data = new DoubleVector( n, new RandGenNormal( mean, variance ) );
Now, compute y values based on the empirical cumulative distribution function (CDF), which returns the probability that a random variable X will have a value less than or equal to x; that is, f(x) = P(X <= x). Here's an easy way to do it, although not necessarily the most efficient for larger data sets:
var cdfY = new DoubleVector( data.Length );
var sorted = NMathFunctions.Sort( data );
for ( int i = 0; i < data.Length; i++ )
{
  // count how many sorted values are <= data[i]
  int j = 0;
  while ( j < sorted.Length && sorted[j] <= data[i] ) j++;
  cdfY[i] = j / (double)data.Length;
}
The data is sorted, then for each value x in the data, we iterate through the sorted vector looking for the first value that is greater than x.
We’ll use one of NMath’s non-linear least squares minimization routines to fit a normal distribution CDF() function to our empirical CDF. NMath provides classes for fitting generalized one variable
functions to a set of points. In the space of the function parameters, beginning at a specified starting point, these classes find a minimum (possibly local) in the sum of the squared residuals with
respect to a set of data points.
A one variable function takes a single double x, and returns a double y:
y = f(x)
A generalized one variable function additionally takes a set of parameters, p, which may appear in the function expression in arbitrary ways:
y = f(p1, p2,..., pn; x)
For example, this code computes y=a*sin(b*x + c):
public double MyGeneralizedFunction( DoubleVector p, double x )
{
  return p[0] * Math.Sin( p[1] * x + p[2] );
}
In the distribution fitting example, we want to define a parameterized function delegate that returns CDF(x) for the distribution described by the given parameters (mean, variance):
Func<DoubleVector, double, double> f =
( DoubleVector p, double x ) =>
new NormalDistribution( p[0], p[1] ).CDF( x );
Now that we have our data and the function we want to fit, we can apply the curve fitting routine. We’ll use a bounded function fitter, because the variance of the fitted normal distribution must be
constrained to be greater than 0.
var fitter = new BoundedOneVariableFunctionFitter<TrustRegionMinimizer>( f );
var start = new DoubleVector( new double[] { 0.1, 0.1 } );
var lowerBounds = new DoubleVector( new double[] { Double.MinValue, 0 } );
var upperBounds =
new DoubleVector( new double[] { Double.MaxValue, Double.MaxValue } );
var solution = fitter.Fit( data, cdfY, start, lowerBounds, upperBounds );
var fit = new NormalDistribution( solution[0], solution[1] );
Console.WriteLine( "Fitted distribution:\nmean={0}\nvariance={1}",
fit.Mean, fit.Variance );
The output for one run is
Fitted distribution:
which is a reasonable approximation to the original distribution (given 100 points).
We can also visually inspect the fit by plotting the original data and the CDF() function of the fitted distribution.
ToChart( data, cdfY, SeriesChartType.Point, fit,
NMathStatsChart.DistributionFunction.CDF );
private static void ToChart( DoubleVector x, DoubleVector y,
  SeriesChartType dataChartType, NormalDistribution dist,
  NMathStatsChart.DistributionFunction distFunction )
{
  var chart = NMathStatsChart.ToChart( dist, distFunction );
  chart.Series[0].Name = "Fit";
  var series = new Series() {
    Name = "Data",
    ChartType = dataChartType
  };
  series.Points.DataBindXY( x, y );
  chart.Series.Insert( 0, series );
  chart.Legends.Add( new Legend() );
  NMathChart.Show( chart );
}
We can also look at the probability density function (PDF) of the fitted distribution, but to do so we must first construct an empirical PDF using a histogram. The x-values are the midpoints of the
histogram bins, and the y-values are the histogram counts converted to probabilities, scaled to integrate to 1.
int numBins = 10;
var hist = new Histogram( numBins, data );
var pdfX = new DoubleVector( hist.NumBins );
var pdfY = new DoubleVector( hist.NumBins );
for ( int i = 0; i < hist.NumBins; i++ )
{
  // use bin midpoint for x value
  Interval bin = hist.Bins[i];
  pdfX[i] = ( bin.Min + bin.Max ) / 2;

  // convert histogram count to probability for y value
  double binWidth = bin.Max - bin.Min;
  pdfY[i] = hist.Count( i ) / ( data.Length * binWidth );
}
ToChart( pdfX, pdfY, SeriesChartType.Column, fit,
NMathStatsChart.DistributionFunction.PDF );
You might be tempted to try to fit a distribution PDF() function directly to the histogram data, rather than using the CDF() function like we did above, but this is problematic for several reasons.
The bin counts have different variability than the original data. They also have a fixed sum, so they are not independent measurements. Also, for continuous data, fitting a model based on aggregated
histogram counts, rather than the original data, throws away information. | {"url":"http://www.centerspace.net/blog/tag/normal-distribution/","timestamp":"2014-04-20T13:19:29Z","content_type":null,"content_length":"64900","record_id":"<urn:uuid:bb49b6a8-97f4-4564-8cf8-782ce3874203>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
Histograms ( Read ) | Statistics
Remember how Jasper made a frequency table in the Make a Frequency Table to Organize and Display Data Concept? Well, now he is going to take this frequency table and try to make a histogram. Take a
Jasper is curious about how many days it takes a musher to finish the Iditarod. Looking online, he has discovered that the average is from 10 – 15 days, but that isn’t specific enough for him.
“I want to know more details about it,” he tells Mr. Hawkins first thing on Monday morning.
“Well, you have to narrow down your findings. I would suggest you look at the final standings from 2010. Then you can create a frequency table and a histogram.”
“Alright, that’s a good idea,” Jasper says.
Jasper begins his research on the Iditarod website. He makes notes on the number of days that it took the mushers in the 2010 Iditarod to finish. Here is the frequency table that he created with his data:
Days  Tally                                  Frequency
8     I                                      1
9     I I I I I  I I I I I  I I I I I  I I I   18
10    I I I I I  I I I I I  I I I I I  I       16
11    I I I I I  I                             6
12    I I I I I  I I I I                       9
13    I I I I                                  4
Next, Jasper began making his histogram. But as soon as he started to draw it, something did not look right.
Jasper could use some help. In this Concept, you will learn how to take a frequency table and make a histogram out of it. Pay close attention and at the end of this Concept you will be able to help
Jasper create his visual display.
Frequency tables are a great way to record and organize data. Once you have created a frequency table, we can make a histogram to present a visual display of the information in the frequency table.
What is a histogram?
A histogram shows the frequency of data values on a graph. Like a frequency table, data is grouped in intervals of equal size that do not overlap. Like a bar graph, the height of each bar depicts the
frequency of the data values. A histogram differs from a bar graph in that the vertical columns are drawn with no space in between them.
Now let’s look at creating a histogram from a frequency table.
Create a histogram using the results on the frequency table below.
Hours Slept Each Night
Number of Hours Slept Tally Frequency
5 I 1
6 I I 2
7 I I I I 4
8 I I I 3
9 I I I 3
10 I I I 3
11 I I 2
12 I I 2
To create a histogram:
1. Draw the horizontal $(x)$ axis and the vertical $(y)$ axis.
2. Give the graph the title “Hours Slept Each Night.”
3. Label the horizontal axis “Hours.” List the intervals across the horizontal axis.
4. Label the vertical axis “Frequency.” Since the range in frequencies is not that great, label the axis by ones.
5. For each interval on the horizontal axis, draw a vertical column to the appropriate frequency value. On a histogram, there is no space in between vertical columns.
Take a few minutes to copy down the steps for creating a histogram in your notebook.
Create a histogram to display the data on the frequency table below.
Number of Minutes on the Computer Tally Frequency
0 – 5 I I I 3
6 – 10 I I 2
11 – 15 I I I 3
16 – 20 I I 2
21 – 25 I 1
26 – 30 I 1
31 – 35 I 1
36 – 40 I 1
41 – 45 I I 2
46 – 50 I 1
51 – 55 I 1
56 – 60 I I 2
To create a histogram:
1. Draw the horizontal $(x)$ axis and the vertical $(y)$ axis.
2. Give the graph the title “Minutes Spent on the Computer.”
3. Label the horizontal axis “Minutes.” List the intervals across the horizontal axis.
4. Title the vertical axis “Frequency.” Label the axis by halves (0.5).
5. For each interval on the horizontal axis, draw a vertical column to the appropriate frequency value. Recall that on a histogram, there are no spaces in between vertical columns.
Sometimes, you will be given a set of data that you will need to organize. This data will be unorganized. To work with it, you will have to organize it by creating a frequency table. Then you can use
that frequency table to create a histogram.
Fifteen people were asked to state the number of hours they exercise in a seven day period. The results of the survey are listed below. Make a frequency table and histogram to display the data.
8, 2, 4, 7.5, 10, 11, 5, 6, 8, 12, 11, 9, 6.5, 10.5, 13
First arrange the data on a frequency table. Recall that a table with three columns needs to be drawn: one for intervals, one for tallied results, and another for frequency results. The range in
values for this set of data is eleven. Therefore, data will be tallied in intervals of three.
Hours of Exercise Tally Frequency
0 – 2 I 1
3 – 5 I I 2
6 – 8 I I I I I 5
9 – 11 I I I I I 5
12 – 14 I I 2
Next, the data needs to be displayed on a histogram. Recall that a horizontal $(x)$ axis and a vertical $(y)$ axis need to be drawn.
Now let’s make some conclusions based on the information displayed in the histogram.
Looking at the histogram above, you can see that equal numbers of people reported that they exercise between six and eight hours and between nine and eleven hours each week. Two people stated that they exercise between three and five hours per week. Two people reported that they exercise between twelve and fourteen hours per week. Zero to two is the interval with the least frequency.
Look at this frequency table and use it to complete the following questions.
Number of Sodas Tally Frequency
0 – 3 I I I I I I I I 8
4 – 7 I I I I I I I 7
8 – 11 I I I 3
12 – 15 I I 2
Example A
Which category is the most popular?
Solution: 0 - 3 Sodas
Example B
Which category is the least popular?
Solution: 12 - 15 sodas
Example C
What is the difference between the greatest number of sodas and the least?
Solution: 8 - 3 = 5
Now back to Jasper and the histogram.
Here is the original problem once again. Reread it and then look at the histogram created from the frequency table.
Jasper is curious about how many days it takes a musher to finish the Iditarod. Looking online, he has discovered that the average is from 10 – 15 days, but that isn’t specific enough for him.
“I want to know more details about it,” he tells Mr. Hawkins first thing on Monday morning.
“Well, you have to narrow down your findings. I would suggest you look at the final standings from 2010. Then you can create a frequency table and a histogram.”
“Alright, that’s a good idea,” Jasper says.
Jasper begins his research on the Iditarod website. He makes notes on the number of days that it took the mushers in the 2010 Iditarod to finish. Here is the frequency table that he created with his data (shown above).
Next, Jasper began making his histogram. But as soon as he started to draw it, something did not look right.
Then Jasper began to notice that he needed to put the number of mushers on the $y$ axis and the number of days on the $x$ axis.
Here is Jasper’s final histogram.
Frequency Table
a table that keeps track of the number of times a data value occurs.
a type of bar graph that shows frequency and distribution of data. The bars in a histogram are not spaced apart, but they are found right next to each other.
Guided Practice
Here is one for you to try on your own.
The data on the table below depicts the height (in meters) a ball bounces after being dropped from different heights. Create a frequency table and histogram to display the data.
$6 \quad 9 \quad 4 \quad 12 \quad 11 \quad 5 \quad 7 \quad 9 \quad 13 \quad 5 \quad 6 \quad 10 \quad 14 \quad 7 \quad 8$
First arrange the data on a frequency table.
Recall that a table with three columns needs to be drawn: one for intervals, one for tallied results, and another for frequency results. The range in values for this set of data is ten. Therefore, data will be tallied in intervals of two.
Bounce Height Tally Frequency
3 – 4 I 1
5 – 6 I I I I 4
7 – 8 I I I 3
9 – 10 I I I 3
11 – 12 I I 2
13 – 14 I I 2
Next, the data needs to be displayed on a histogram.
Recall that a horizontal $(x)$ axis and a vertical $(y)$ axis need to be drawn.
Now what conclusions can we draw from the frequency table and histogram?
You can see that the most frequent bounce heights were between five and six meters. The least frequent bounce heights were between three and four meters. Three balls bounced between seven and eight
meters and nine and ten meters. Two balls bounced between eleven and twelve meters and thirteen and fourteen meters.
Video Review
This is a video on frequency tables and histograms.
Directions: Use what you have learned to complete each dilemma.
1. Create a histogram to display the data from the frequency table below.
Monthly Internet Purchases
Data Values Tally Frequency
0 – 3 I I I 3
4 – 7 I I I I 4
8 – 11 I 1
12 – 15 I I 2
2. The data collected depicts the number of letters in the last names of twenty people. Create a frequency table to display the data.
$12 \quad 3 \quad 5 \quad 9 \quad 11 \quad 2 \quad 7 \quad 5 \quad 6 \quad 8 \quad 14 \quad 4 \quad 8 \quad 7 \quad 5 \quad 10 \quad 5 \quad 9 \quad 7 \quad 15$
3. Create a histogram to display the data.
4. The data collected depicts the number of hours twelve families traveled this summer to their vacation destination. Create a frequency table to display the data.
$7 \quad 3 \quad 10 \quad 5 \quad 12 \quad 9 \quad 8 \quad 4 \quad 3 \quad 11 \quad 3 \quad 9$
5. Create a histogram to display the data.
6. Write a few sentences to explain any conclusions that you can draw from the data.
7. Generate a question that you will use to survey twenty people.
8. Make a table to collect the answers.
9. Display the data on a frequency table
10. Create a histogram to display the data.
Here is a list of the number of students who did not complete their homework in one month.
1, 1, 3, 3, 4, 3, 3, 5, 6, 1, 1, 1, 2, 2, 3
11. Create a frequency table of the data.
12. What is the most popular value?
13. What is the least popular value?
14. What is the range of values?
15. What is the average? | {"url":"http://www.ck12.org/statistics/Histograms/lesson/Histograms-Grade-7/","timestamp":"2014-04-18T03:11:38Z","content_type":null,"content_length":"133179","record_id":"<urn:uuid:bf599663-098d-4d5b-a555-a63088b59415>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oak Lawn Geometry Tutor
...Mathematics, Physics and Chemistry are strong areas of expertise. I have taught high school students back in India. I can also teach mechanical engineering and basic electrical engineering
subjects as well.
16 Subjects: including geometry, chemistry, physics, calculus
...Since that time, I have been working and tutoring on the side. I recently went back to school at North Central College to get my teaching certification. I just completed my student teaching
experience (teaching Algebra I and Algebra II) and will be certified June 2014.
7 Subjects: including geometry, algebra 1, algebra 2, trigonometry
...I taught the lab sections of three different courses. In addition, I presented classroom lectures, created lab exercises, and helped students master key concepts. In short, I have always loved
working with students and helping them to succeed.
10 Subjects: including geometry, writing, biology, algebra 1
...After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin. I took pleasure in helping students understand concepts and succeed. My
best students are those that desire to learn and I seek to cultivate that attitude of growth and learning through a zest and enthusiasm for learning.
26 Subjects: including geometry, chemistry, Spanish, reading
...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain
difficult concepts to either a left or right-brained student, verbally or with visual representations. ...
34 Subjects: including geometry, reading, writing, statistics | {"url":"http://www.purplemath.com/Oak_Lawn_Geometry_tutors.php","timestamp":"2014-04-18T00:29:46Z","content_type":null,"content_length":"23809","record_id":"<urn:uuid:8c3f3863-1615-4fdf-abd3-335035ba0007>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number theory
October 15th 2008, 11:54 PM #1
(i) Let p the smallest prime divisor of n. Show that there exist integers a, b, such that an+b(p-1)=1
(ii) For every n>1 show that n does not divide (2^n)-1
Please ANY help?
Hint: n and p-1 must be co-prime (any common factor d > 1 of n and p-1 would satisfy d ≤ p-1 < p, so n would have a prime divisor smaller than p, contradicting the choice of p). The identity an+b(p-1)=1 then follows from Bézout's lemma.
(I was totally unable to see how to do this until I realised that you need to use part (i).)
Suppose that n does divide $2^n-1$. Let p be the smallest prime divisor of n, then p must divide $2^n-1$. In other words, $2^n\equiv1\!\!\pmod p$. Since p is prime, it follows from Fermat's
Little Theorem that $2^{p-1}\equiv1\!\!\pmod p$. Then by part (i), $2 = 2^{an+b(p-1)}= (2^n)^a(2^{p-1})^b\equiv1\!\!\pmod p$, contradiction!
Edit. I forgot to say that that proof doesn't work if p=2 (because 2 and p are not then co-prime, so Fermat's theorem goes wrong!). But the result is obvious when p=2, because n would then be
even, and (2^n)-1 is odd.
Thank you so much.
How to find r for the sum of GP series?
April 15th 2013, 07:16 AM #1
Given such an equation, how do I find r? Thank you
(1-r^m)/ (1-r)= s
where m can be any positive whole number
s can also be any positive number
The equation is actually the summation for a GP series with a=1, so basically I want to know how to find r (ratio factor) for the sum of a GP .
Re: How to find r for the sum of GP series?
So you are basically just asking how to solve $\frac{1- r^m}{1- r}= s$ for r? You can multiply on both sides by 1- r to get $1- r^m= s-rs$, which can be rearranged to $r^m- sr+ (s-1)= 0$. That is an 'm-degree' polynomial. (Note that $r = 1$ is always a root of it, but that root is spurious — it was introduced by multiplying through by $1-r$.) However, there are no general formulas, using elementary functions, for solving polynomial equations of degree five or higher.
Last edited by HallsofIvy; April 15th 2013 at 12:14 PM.
Re: How to find r for the sum of GP series?
Thanks for the reply.
Then how do you solve it if m is greater than 5? Is there an Excel function to do it?
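As far as I know there is no single built-in Excel worksheet function for this, though Goal Seek can hunt the root numerically. In code, Newton's method applied to the sum form $f(r)=1+r+\dots+r^{m-1}-s$ works well and avoids the spurious root r = 1 entirely. A minimal C# sketch (all names are mine):

using System;

class GpRatio
{
    // f(r) = 1 + r + ... + r^(m-1) - s; its root is the common ratio.
    static double F( double r, int m, double s )
    {
        double sum = 0.0, term = 1.0;
        for ( int k = 0; k < m; k++ ) { sum += term; term *= r; }
        return sum - s;
    }

    // f'(r) = 1 + 2r + ... + (m-1) r^(m-2)
    static double DF( double r, int m )
    {
        double sum = 0.0, term = 1.0;
        for ( int k = 1; k < m; k++ ) { sum += k * term; term *= r; }
        return sum;
    }

    static double Solve( int m, double s, double r0 )
    {
        double r = r0;
        for ( int i = 0; i < 100; i++ )
        {
            double step = F( r, m, s ) / DF( r, m );
            r -= step;
            if ( Math.Abs( step ) < 1e-12 ) break;
        }
        return r;
    }

    static void Main()
    {
        // Example: m = 6, s = 63 should recover r = 2 (1+2+4+8+16+32 = 63).
        Console.WriteLine( Solve( 6, 63.0, 2.5 ) );
    }
}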
Chapter 8, Page 95
Let's talk about the row of boxes underneath where it says "resists." These boxes are for keeping track of various beneficial status effects you might get from equipment you're wearing. The effects
are, in order:
Haste: You get 1 extra attack; Double Attack: Your base number of attacks are doubled if your attack roll is an even number; Double Spell: You can cast 2 spells per turn; Double Item: You can use 2
items per turn; Double Ability: You can use 2 abilities per turn; Regen: You have some form of regeneration; Critical Fail Protection: You won't suffer an extra penalty if you roll a 1; Critical Hit
Protection: The enemy can't get a critical hit against you.
Then, we have weapons. I guess most of that is pretty self explanatory. You take the base damage, add bonuses from your multiplier and special abilities, add it together to get your total damage.
That is the total damage for one attack. You then take that and multiply it by your number of attacks to get the actual damage you do in a round of attacks. If you get a critical attack then all that
damage is multiplied again. It sounds complicated but those numbers never change, so it's a pretty quick and painless process. You just roll the attack dice to see if you hit, and if you hit you tell
the DM the number that's already written down.
There was also supposed to be bonus damage, where you'd add 1d10 x10 extra damage to the grand total. It's not per attack; that number is just added to the total damage of all your attacks. Its only purpose is to add a little bit of quasi-randomness to the number, but since it's so insignificant I think people were forgetting to do it. If we were to keep playing under this system I would probably just eliminate it.
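Just to make the arithmetic concrete, here's a rough sketch of that damage math in code (the names, and the x2 critical factor, are my guesses at the system rather than its official rules):

using System;

class DamageDemo
{
    static int RoundDamage( int baseDamage, int multiplierBonus, int abilityBonus,
                            int numAttacks, bool crit, Random dice )
    {
        int perAttack = baseDamage + multiplierBonus + abilityBonus; // total for one attack
        int total = perAttack * numAttacks;                          // whole round of attacks
        if ( crit ) total *= 2;                                      // "multiplied again" -- assuming x2
        total += dice.Next( 1, 11 ) * 10;                            // bonus damage: 1d10 x 10, added once
        return total;
    }

    static void Main()
    {
        Console.WriteLine( RoundDamage( 40, 10, 5, 4, false, new Random() ) );
    }
}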
Then there is the Belt. Depending on what belt your character wears, there is a different amount of potions that can be equipped for battle. The maximum is 12 slots for potions. You can have an
unlimited number of potions in your inventory, but you can only use what's in your belt during a battle, so you have to use a little planning ahead.
Then there are slots for Feats, Special Abilities, and Demonic Abilities. A new feat can be chosen by the player every 2 levels, and since everyone is level 20 everyone has 10 feats. Every character
has their own unique list of special abilities, which are also granted every 2 levels.
And there's the demonic abilities, which can be stored inside a demonic box and freely switched between players outside of combat. However, players can only have 3 equipped at a time, and all demonic
abilities cost HP to use. Some are more expensive than others.
I'll give more information about feats, special abilities, and demonic abilities in the next few pages. | {"url":"http://flipside.keenspot.com/comic/dnd/comic.php?i=145","timestamp":"2014-04-17T15:31:59Z","content_type":null,"content_length":"8145","record_id":"<urn:uuid:ff6683a2-fddb-433b-a089-5891d515e168>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miami Shores, FL Algebra 2 Tutor
Find a Miami Shores, FL Algebra 2 Tutor
...With the experience I have teaching Algebra 2, in high school and also one on one with my students, I am able to see exactly where the student is at. I am also able to show simple ways for
students to understand the material. I have developed techniques and methods that facilitate learning Algebra.
48 Subjects: including algebra 2, chemistry, reading, calculus
...What I call "skills" are those little tricks that can only been figured out with certain experience, and just mere exploration. I have plenty of these and want to pass them on to others, if
anyone ever needs help in that program. Prealgebra is one of my subjects of excellence, probably because Middle School students are the ones I am most accustomed to as of today.
20 Subjects: including algebra 2, English, reading, ESL/ESOL
...Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked
as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receive...
14 Subjects: including algebra 2, chemistry, calculus, geometry
My name is Sasha, and I am currently working in the Finance industry. I graduated from Florida International University studying Biology with a minor in Anthropology. I speak both Spanish and
English and I have been tutoring students since I was 13 years old.
22 Subjects: including algebra 2, English, Spanish, reading
I don't like to talk about myself, but the rapid improvement of my students speak for me. My method is adjustable to the student's requirements, but infallible. Step-by-step learning is the
secret of my success.
8 Subjects: including algebra 2, calculus, prealgebra, geometry
Related Miami Shores, FL Tutors
Miami Shores, FL Accounting Tutors
Miami Shores, FL ACT Tutors
Miami Shores, FL Algebra Tutors
Miami Shores, FL Algebra 2 Tutors
Miami Shores, FL Calculus Tutors
Miami Shores, FL Geometry Tutors
Miami Shores, FL Math Tutors
Miami Shores, FL Prealgebra Tutors
Miami Shores, FL Precalculus Tutors
Miami Shores, FL SAT Tutors
Miami Shores, FL SAT Math Tutors
Miami Shores, FL Science Tutors
Miami Shores, FL Statistics Tutors
Miami Shores, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Biscayne Park, FL algebra 2 Tutors
Doral, FL algebra 2 Tutors
El Portal, FL algebra 2 Tutors
Hialeah algebra 2 Tutors
Hialeah Gardens, FL algebra 2 Tutors
Hialeah Lakes, FL algebra 2 Tutors
Mia Shores, FL algebra 2 Tutors
Miami Beach algebra 2 Tutors
Miami Gardens, FL algebra 2 Tutors
Miami Lakes, FL algebra 2 Tutors
N Miami Beach, FL algebra 2 Tutors
North Bay Village, FL algebra 2 Tutors
North Miami Beach algebra 2 Tutors
North Miami, FL algebra 2 Tutors
Opa Locka algebra 2 Tutors | {"url":"http://www.purplemath.com/miami_shores_fl_algebra_2_tutors.php","timestamp":"2014-04-16T16:44:47Z","content_type":null,"content_length":"24399","record_id":"<urn:uuid:1438641b-9ada-49de-9182-398b631cd977>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Real Number Line
We say that a number x is greater than a number y, in symbols x > y, if on the real number line x lies to the right of y. If we want to include the possibility that x is actually equal to y then we say that x is greater than or equal to y, and we denote this fact by x ≥ y. For example, 3 > 2, and both 3 ≥ 2 and 3 ≥ 3 are true.
Similarly, we say that x is less than y, in symbols x < y, if x lies to the left of y, and x is less than or equal to y, in symbols x ≤ y, if x may be equal to y but no greater than y. For example, 2 < 3 and 3 ≤ 3.
Now get ready for a bit of convoluted logic that often confuses students in Math 1010.
A true statement such as 3 ≤ 5 might surprise you, since it's obviously not true that 3 is equal to 5. But x ≤ y means "x is less than y or equal to y", and a statement of the form "A or B" is true whenever at least one of A and B is true.
Here is another example. It is a true statement that Napoleon was a man or a woman because he was a man. The fact that he was not a woman does not make the statement false.
The utility of the symbol ≤ lies in statements where we do not know, or do not care, which of the two possibilities actually holds.
If x is less than y then we also say that x is smaller than y. For example, -25 is smaller than 2. Similarly we define the phrases larger, no smaller, and no larger.
The absolute value |x| of a number x is its distance from the origin. If x is positive or zero then |x| = x; if x is negative then |x| = -x.
Slightly more subtle facts that you may want to ponder are that for all real numbers x and y, |xy| = |x||y| and |x + y| ≤ |x| + |y|.
Hint: check this for all possible sign combinations of x and y.
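To see how one such check goes, take the case x ≥ 0, y < 0, and x + y ≥ 0 (one of the sign combinations in the hint):

|x + y| = x + y ≤ x + (−y) = |x| + |y|,

because y < 0 implies y ≤ −y. The remaining sign combinations work the same way.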
The distance between two real numbers x and y is the absolute value of their difference. For example, the distance between 3 and 5 is |3 − 5| = |−2| = 2. Now consider the distance d between 3 and -2. We have d = |3 − (−2)| = |5| = 5.
Again, this is consistent with the usual notion of distance along a line.
Wagering with Zeno
Wagering with Zeno
Vacationing in Italy, you wander into the coastal village of Velia, a few hours south of Naples. On the edge of town you notice an archaeological dig. When you go to have a look at the ruins, you
learn that the place now called Velia was once the Greek settlement of Elea, home to the philosopher Parmenides and his disciple Zeno. You stroll through the excavated baths and trace the city walls,
then climb a steep, cobbled roadway to an arch called the Porta Rosa. Perhaps Zeno formulated his famous paradoxes while pacing these same stones 900,000 days ago. Was there something special about
the terrain that led him to imagine arrows frozen in flight and runners who go halfway, then half the remaining half, but never get to the finish line?
That night, Zeno visits you in a dream. He brings along a sack of ancient coins, which come in denominations of 1, 1/2, 1/4, 1/8, 1/16, and so on. Evidently the Eleatic currency had no smallest
unit: For every coin of value 1/2 ^n , there is another of value 1/2 ^n+ 1 . Zeno's bag holds exactly one coin of each denomination.
He teaches you a gambling game. First the coin of value 1 is set aside; it belongs to neither of you but will be flipped to decide the outcome of each round of play. Now the remaining coins are
divided in such a way that each of you has a total initial stake of exactly 1/2. The distinctively Eleatic part of the game is the rule for setting the amount of the wager. Before each coin toss, you
and Zeno each count your current holdings, and the bet is one-half of the lesser of these two amounts. Thus the first wager is 1/4. Suppose you win that toss. After the bet is paid, you have 3/4, and
Zeno's fortune is reduced to 1/4; the amount of the next bet is therefore 1/8. Say Zeno wins this time; then the score stands at 5/8 for you and 3/8 for him, and the next amount at stake is 3/16. If
Zeno wins again, he takes the lead, 9/16 to 7/16.
In the morning you wake up wondering about this curious game. What is the likely outcome if you continue playing indefinitely? Is one player sure to win eventually, or could the lead be traded back
and forth forever? | {"url":"http://www.americanscientist.org/issues/pub/wagering-with-zeno/1","timestamp":"2014-04-20T11:05:57Z","content_type":null,"content_length":"125252","record_id":"<urn:uuid:0751ed36-02e4-44e1-ae97-8a235d77a29e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
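Before settling the question with mathematics, one can at least watch the game unfold. Here is a minimal simulation sketch (C#, with a pseudorandom coin standing in for the Eleatic one):

using System;

class ZenoWager
{
    static void Main()
    {
        var rng = new Random();
        double you = 0.5, zeno = 0.5; // each starts with a stake of 1/2
        for ( int round = 1; round <= 50; round++ )
        {
            double bet = Math.Min( you, zeno ) / 2.0; // half the lesser fortune
            if ( rng.Next( 2 ) == 0 ) { you += bet; zeno -= bet; }
            else { you -= bet; zeno += bet; }
            Console.WriteLine( "round {0}: you = {1:F6}, Zeno = {2:F6}", round, you, zeno );
        }
    }
}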
Lindale, GA Geometry Tutor
Find a Lindale, GA Geometry Tutor
...Our time will be spent on explaining, understanding and developing study plans to insure the time we spend together is productive and useful. As a new student we will take the first 30 minutes
together to assess your independent strengths and weaknesses. This will be at no charge to you and will help me come up with the approach that will best help you.
29 Subjects: including geometry, reading, writing, ASVAB
...Basically music is my life, my calling and I would be more than happy to share what I know with anyone willing to listen. I have been playing guitar since I was 13 years old. Before I ever
picked up a guitar, I was playing violin with the Rome Symphony Orchestra and was well educated in music theory.
19 Subjects: including geometry, chemistry, physics, calculus
...I had potential but could not pay any attention, and when I did the material just didn't make any sense. I was unable to retain any information regarding math or science. In the end of
Freshman year, I was left with very few weeks left to go from a low "F" to a passing grade.
14 Subjects: including geometry, reading, biology, algebra 2
...I currently work in the math lab at school, which provides tutoring services to the Berry College community, but I would like to expand my tutoring to the rest of Rome and surrounding
communities. As a rising senior, I have taken my fair share of math classes, so almost all subjects are open for...
23 Subjects: including geometry, reading, calculus, statistics
...It is a fail safe approach because in only moving on once you have learned something properly you start to put down strong foundations and that is when the confidence comes.I have a first
class degree in electronics engineering and am in the process of finishing a PhD in telecommunication enginee...
15 Subjects: including geometry, calculus, algebra 1, algebra 2
Related Lindale, GA Tutors
Lindale, GA Accounting Tutors
Lindale, GA ACT Tutors
Lindale, GA Algebra Tutors
Lindale, GA Algebra 2 Tutors
Lindale, GA Calculus Tutors
Lindale, GA Geometry Tutors
Lindale, GA Math Tutors
Lindale, GA Prealgebra Tutors
Lindale, GA Precalculus Tutors
Lindale, GA SAT Tutors
Lindale, GA SAT Math Tutors
Lindale, GA Science Tutors
Lindale, GA Statistics Tutors
Lindale, GA Trigonometry Tutors | {"url":"http://www.purplemath.com/lindale_ga_geometry_tutors.php","timestamp":"2014-04-16T21:52:42Z","content_type":null,"content_length":"23982","record_id":"<urn:uuid:ced52aff-773e-4e87-97de-7176c64ce95c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
The QSIMP function performs numerical integration of a function over the closed interval [ A, B ] using Simpson's rule. The result will have the same structure as the smaller of A and B , and the
resulting type will be single- or double-precision floating, depending on the input types.
QSIMP is based on the routine qsimp described in section 4.2 of Numerical Recipes in C: The Art of Scientific Computing (Second Edition), published by Cambridge University Press, and is used by
A scalar string specifying the name of a user-supplied IDL function to be integrated. This function must accept a single scalar argument X and return a scalar result. It must be defined over the
closed interval [ A, B ].
For example, if we wish to integrate the fourth-order polynomial

f(x) = (x^4 - 2x^2) sin(x)

we define a function SIMPSON to express this relationship in the IDL language:

FUNCTION SIMPSON, X
   RETURN, (X^4 - 2.0 * X^2) * SIN(X)
END
The upper limit of the integration. B can be either a scalar or an array.
Note: If arrays are specified for A and B , then QSIMP integrates the user-supplied function over the interval [ A [ i] , B [ i] ] for each i . If either A or B is a scalar and the other an array,
the scalar is paired with each array element in turn.
The desired fractional accuracy. For single-precision calculations, the default value is 1.0 × 10^-6. For double-precision calculations, the default value is 1.0 × 10^-12.
To integrate the SIMPSON function (listed above) over the interval [0, π/2] and print the result:

A = 0.0 ; Define lower limit of integration.
B = !PI/2.0 ; Define upper limit of integration.
PRINT, QSIMP('SIMPSON', A, B) ; Integrate and print the result.
The exact solution can be found using the integration-by-parts formula:

FB = 4.*B*(B^2-7.)*SIN(B) - (B^4-14.*B^2+28.)*COS(B) ; Antiderivative at upper limit.
FA = 4.*A*(A^2-7.)*SIN(A) - (A^4-14.*A^2+28.)*COS(A) ; Antiderivative at lower limit.
PRINT, FB - FA ; Print the exact result for comparison.
22.1.3.5 Printing Lists and Conses
Wherever possible, list notation is preferred over dot notation. Therefore the following algorithm is used to print a cons x: a left-parenthesis is printed, then the car of x. If the cdr of x is itself a cons, a space is printed and the algorithm recurs on that cons as the rest of the list (without printing a new left-parenthesis). If instead the cdr of x is nil, a right-parenthesis is printed; otherwise a space, a dot, a space, the cdr of x, and a right-parenthesis are printed.
Actually, the above algorithm is only used when *print-pretty* is false. When *print-pretty* is true (or when pprint is used), additional whitespace[1] may replace the use of a single space, and a
more elaborate algorithm with similar goals but more presentational flexibility is used; see Section 22.1.2 (Printer Dispatching).
Although the two expressions below are equivalent, and the reader accepts either one and produces the same cons, the printer always prints such a cons in the second form.
(a . (b . ((c . (d . nil)) . (e . nil))))
(a b (c d) e)
The printing of conses is affected by *print-level*, *print-length*, and *print-circle*.
Following are examples of printed representations of lists:
(a . b) ;A dotted pair of a and b
(a.b) ;A list of one element, the symbol named a.b
(a. b) ;A list of two elements a. and b
(a .b) ;A list of two elements a and .b
(a b . c) ;A dotted list of a and b with c at the end; two conses
.iot ;The symbol whose name is .iot
(. b) ;Invalid -- an error is signaled if an attempt is made to read
;this syntax.
(a .) ;Invalid -- an error is signaled.
(a .. b) ;Invalid -- an error is signaled.
(a . . b) ;Invalid -- an error is signaled.
(a b c ...) ;Invalid -- an error is signaled.
(a \. b) ;A list of three elements a, ., and b
(a |.| b) ;A list of three elements a, ., and b
(a \... b) ;A list of three elements a, ..., and b
(a |...| b) ;A list of three elements a, ..., and b
For information on how the Lisp reader parses lists and conses, see Section 2.4.1 (Left-Parenthesis).
The following X3J13 cleanup issue, not part of the specification, applies to this section:
Copyright 1996, The Harlequin Group Limited. All Rights Reserved. | {"url":"http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec/Body/sec_22-1-3-5.html","timestamp":"2014-04-19T07:53:15Z","content_type":null,"content_length":"6911","record_id":"<urn:uuid:071000e3-f7e0-4e76-b1d0-f4e82b29b2f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gram-Schmidt process
The Gram–Schmidt process is an algorithm which takes as input an ordered basis of an inner product space and produces as output an ordered orthonormal basis.
In terms of matrices, the Gram–Schmidt process is a procedure of factorization of an invertible matrix $M$ in the general linear group $GL_n(\mathbb{R})$ (or $GL_n(\mathbb{C})$) as a product $M = U T$ where $T$ is an upper triangular matrix and $U$ is an orthogonal (or unitary) matrix; as such it is a special case of the more general Iwasawa decomposition? for a (connected) semisimple Lie group.
Since the factorization depends smoothly on the parameters, the Gram–Schmidt procedure enables the reduction of the structure group of an inner product bundle (e.g., the tangent bundle of a
Riemannian manifold or a Kähler manifold) from $GL_n$ to orthogonal group $O_n$ (or the unitary group $U_n$).
Gram–Schmidt process on Hilbert spaces
In this section, “basis” is understood to signify an ordered independent set whose linear span is dense in a Hilbert space $H$ seen as a metric space. We will describe the Gram–Schmidt process as
applied to a $d$-dimensional Hilbert space for some cardinal $d$ with a basis $v_0, v_1, \ldots$ consisting of $d$ vectors.
The orthonormal basis $u_0, u_1, \ldots$ produced as output is defined recursively by a) subtracting the orthogonal projection to the closed subspace generated by all previous vectors and b)
normalizing. We denote the orthogonal projection onto a closed subspace $A$ by $\pi_A\colon H\to A$ and the normalization $v/\|v\|$ of a vector $v \in H$ by $N(v)$. For ordinals $\alpha \lt d$ define
$u_\alpha := N\left(v_\alpha - \pi_\overline{\operatorname{span}\left(\left\{v_\beta \colon \beta \lt \alpha\right\}\right)}\left(v_\alpha\right)\right)$
where the projection is known to exist, since $H$ is complete. This can be rewritten more explicitly using transfinite recursion as
$u_\alpha = N\left(v_\alpha - \sum_{\beta \lt \alpha} \langle v_\alpha, u_\beta\rangle u_\beta\right)$
where the sum on the right is well defined by the Bessel inequality, i.e. only countably many coefficients are non-zero and they are square-summable. A simple (transfinite) inductive argument shows
that the $u_\alpha$ are unit vectors orthogonal to each other, and that the span of $\left\{u_\beta \colon \beta \lt \alpha\right\}$ is equal to the span of $\left\{v_\beta \colon \beta \lt \alpha\
right\}$ for $\alpha \leq d$. Therefore $u_0, u_1, \ldots$ is an orthonormal basis of $H$.
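In finite dimensions the recursion above can be carried out directly. Here is a minimal sketch over $\mathbb{R}^n$ (C#; the names are mine, and no zero-norm check is made, so the input list is assumed independent):

using System;

static class GramSchmidt
{
    static double Dot( double[] a, double[] b )
    {
        double s = 0.0;
        for ( int i = 0; i < a.Length; i++ ) s += a[i] * b[i];
        return s;
    }

    // u_a = N( v_a - sum over b < a of <v_a, u_b> u_b )
    public static double[][] Orthonormalize( double[][] v )
    {
        var u = new double[v.Length][];
        for ( int a = 0; a < v.Length; a++ )
        {
            var w = (double[])v[a].Clone();
            for ( int b = 0; b < a; b++ )
            {
                double c = Dot( v[a], u[b] );  // <v_a, u_b>
                for ( int i = 0; i < w.Length; i++ ) w[i] -= c * u[b][i];
            }
            double norm = Math.Sqrt( Dot( w, w ) );  // assumed nonzero
            for ( int i = 0; i < w.Length; i++ ) w[i] /= norm;  // N(w)
            u[a] = w;
        }
        return u;
    }
}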
Example of Legendre polynomials
A classic illustration of Gram–Schmidt is the production of the Legendre polynomials.
Let $H$ be the Hilbert space $H = L^2([-1, 1])$, equipped with the standard inner product defined by
$\langle f, g\rangle = \int_{-1}^1 \bar{f(x)} g(x) d x$
By the Stone-Weierstrass theorem, the space of polynomials $\mathbb{C}[x]$ is dense in $H$ according to its standard inclusion, and so the polynomials $1, x, x^2, \ldots$ form an ordered basis of $H$
Applying the Gram–Schmidt process, one readily computes the first few orthonormal functions:
$u_1(x) = N(1) = 1/\sqrt{2}$
$u_2(x) = N(x - 0) = \sqrt{3/2}\, x$
$u_3(x) = N(x^2 - \langle x^2, 1/\sqrt{2}\rangle\, 1/\sqrt{2} - 0) = N(x^2 - 1/3) = 3\sqrt{5/2}/2\,(x^2 - 1/3)$
The classical Legendre polynomials $P_n(x)$ are scalar multiplies of the functions $u_n$, adjusted so that $P_n(1) = 1$; they satisfy the orthogonality relations
$\langle P_n, P_m\rangle = \frac{2}{2n + 1}\delta_{m, n}$
where $\delta_{m, n}$ is the Kronecker delta.
Application to non-bases
If we apply the Gram–Schmidt process to a well-ordered independent set whose closed linear span $S$ is not all of $H$, we still get an orthonormal basis of the subspace $S$. If we apply the
Gram–Schmidt process to a dependent set, then we will eventually run into a vector $v$ whose norm is zero, so we will not be able to take $N(v)$. In that case, however, we can simply remove $v$ from
the set and continue; then we will still get an orthonormal basis of the closed linear span. (This conclusion is not generally valid in constructive mathematics, since it relies on excluded middle
applied to the statement that $\|v\| \neq 0$. However, it does work for discrete fields, such as the algebraic closure of the rationals, as seen in elementary undergraduate linear algebra.)
Categorified Gram–Schmidt process
Many aspects of the Gram–Schmidt process can be categorified so as to apply to 2-Hilbert spaces. We will illustrate the basic idea with an example that was suggested to us by James Dolan.
Consider the category of complex representations of the symmetric group $S_n$. (As a running example, we consider $S_4$; up to isomorphism, there are five irreducible representations
$U_{(4)}, \, U_{(3 1)}, \, U_{(2 2)}, \, U_{(2 1 1)}, \, U_{(1 1 1 1)}$
classified by the five Young diagrams of size 4. To save space, we denote these as $U_1$, $U_2$, $U_3$, $U_4$, $U_5$.) The irreducible representations $U_i$ of $S_n$ form a $2$-orthonormal basis in
the sense that any two of them $U_i, U_j$ satisfy the relation
$hom(U_i, U_j) \cong \delta_{i j} \cdot \mathbb{C}$
(where $n \cdot \mathbb{C}$ indicates a direct sum of $n$ copies of $\mathbb{C}$). In fact, the irreducible representations are uniquely determined up to isomorphism by these relations.
There is however another way of associating representations to partitions or Young diagrams. Namely, consider the subgroup of permutations which take each row of a Young diagram or Young tableau of
size $n$ to itself; this forms a parabolic subgroup of $S_n$, conjugate to one of type $P_{(n_1 \ldots n_k)} = S_{n_1} \times \ldots \times S_{n_k}$ where $n_i$ is the length of the $i^{th}$ row of
the Young diagram. The group $S_n$ acts transitively on the orbit space of cosets
$S_n/P_{(n_1 \ldots n_k)}$
and these actions give permutation representations of $S_n$. Equivalently, these are representations $V_i$ which are induced from the trivial representation along inclusions of parabolic subgroups.
We claim that these representations form a $\mathbb{Z}$-basis of the representation ring, and we may calculate their characters using a categorified Gram–Schmidt process.
Given two such parabolic subgroups $P$, $Q$ in $G = S_n$, the $2$-inner product
$hom_G(\mathbb{C}[G/P], \mathbb{C}[G/Q])$
may be identified with the free vector space on the set of double cosets $P\backslash G/Q$. One may count the number of double cosets by hand in a simple case like $G = S_4$. That is, for the 5
representations $V_1, \ldots, V_5$ induced from the 5 parabolic subgroups $P_i$ corresponding to the 5 Young diagrams listed above, the dimensions of the 2-inner products $hom(V_i, V_j)$ are the
sizes of the corresponding double coset spaces $P_i\backslash S_4 /P_j$. These numbers form a matrix as follows (following the order of the $5$ partitions listed above):
$\left( \array {1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 3 & 4 \\ 1 & 2 & 3 & 4 & 6 \\ 1 & 3 & 4 & 7 & 12 \\ 1 & 4 & 6 & 12 & 24 }\right)$
To reiterate: this matrix is the decategorification (a matrix of dimensions) of a matrix of $2$-inner products where the $(i j)$-entry is of the form
$hom_G(V_i, V_j) \cong V_i^* \otimes_G V_j$
where the $V_i$ are induced from inclusions of parabolic subgroups. The $V_i$ are $\mathbb{N}$-linear combinations of irreducible representations $U_i$ which form a $2$-orthonormal basis, and we may
perform a series of elementary row operations which convert this matrix into an upper triangular matrix, and which will turn out to be the decategorified form of the 2-matrix with entries
$hom_G(U_i, V_j) \cong U_i^* \otimes_G V_j$
where $U_i$ is the irreducible corresponding to the $i^{th}$ Young diagram (as listed above). The upper triangular matrix is
$\left( \array {1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 2 & 3 \\ 0 & 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 1} \right)$
and we read off from the columns the following decompositions into irreducible components:
$V_1 \cong U_1$
$V_2 \cong U_1 + U_2$
$V_3 \cong U_1 + U_2 + U_3$
$V_4 \cong U_1 + 2 U_2 + U_3 + U_4$
$V_5 \cong U_1 + 3 U_2 + 2 U_3 + 3 U_4 + U_5$
The last representation $V_5$ is the regular representation of $S_4$ (because the parabolic subgroup is trivial). Since we know from general theory that the multiplicity of the irreducible $U_i$ in
the regular representation is its dimension, we get as a by-product the dimensions of the $U_i$ from the expression for $V_5$:
$dim(U_1) = 1, \, dim(U_2) = 3, \, dim(U_3) = 2, \, dim(U_4) = 3, \, dim(U_5) = 1$
(the first of the $U_i$ is the trivial representation, and the last $U_5$ is the alternating representation).
The row operations themselves can be assembled as the lower triangular matrix
$\left( \array {1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 \\ 1 & -1 & -1 & 1 & 0 \\ 2 & -1 & -2 & 0 & 1 } \right)$
and from the rows we read off the irreducible representations as “virtual” (i.e., $\mathbb{Z}$-linear) combinations of the parabolically induced representations $V_i$:
$U_1 \cong V_1$
$U_2 \cong -V_1 + V_2$
$U_3 \cong -V_2 + V_3$
$U_4 \cong V_1 - V_2 - V_3 + V_4$
$U_5 \cong 2 V_1 - V_2 - 2 V_3 + V_5$
which can be considered the result of the categorified Gram–Schmidt process.
It follows from these representations that the $V_i$ form a $\mathbb{Z}$-linear basis of the representation ring $Rep(S_4)$. Analogous statements hold for each symmetric group $S_n$. | {"url":"http://www.ncatlab.org/nlab/show/Gram-Schmidt+process","timestamp":"2014-04-18T21:06:49Z","content_type":null,"content_length":"63463","record_id":"<urn:uuid:d0bfd1ad-a2f0-48b2-9c7f-e1b62b62f890>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there a universal coefficient theorem for motivic cohomology?
Is there some kind of universal coefficient theorem for motivic cohomology? In particular, suppose we have a ring morphism $R\to S$, then I would like to know when $$ H^{\star\star}(-,S)\simeq H^{\
star\star}(-,R)\otimes_{R}S\; ?$$ Does this for example hold when $R$ is a field? In particular, does it hold for $R=\mathbb{Q}$? Or do we need additional assumption on $S$ as well? E.g. that $S$ is
a semi-simple or Noetherian $R$-algebra?
1 Answer

Yes, there is a universal coefficient theorem: the corresponding object of the derived category (of $S$-modules) could be obtained by tensoring by $S$. This is easy, since motivic cohomology is defined as the cohomology of a complex of free modules (over $R$ and $S$, respectively).
By the way, do you know of a place that gives a quick and dirty definition of motivic cohomology like you just gave above (albeit with a bit more background)? – Harry Gindi Feb 19 '11 at 11:15
For motivic cohomology of smooth varieties both the Bloch complex and the Suslin complex are 'quick'. I also believe that one can reduce the cohomology of motives to the one of smooth varieties (though possibly here some work is needed). – Mikhail Bondarko Feb 19 '11 at 14:53
To WesleyT: if you don't want to bother with derived categories, you will have to assume that $S$ is flat over $R$. This is certainly true if $R$ is a field. – Mikhail Bondarko Feb 19 '11 at 19:04
OK, thank you. I will go over it and maybe get back at you. – WesleyT Feb 19 '11 at 20:45
User Maurizio Monge
website: poisson.dm.unipi.it/~monge
location: Pisa (Italy)
age: 32
member for: 4 years, 2 months
last seen: Mar 17 at 17:45
profile views: 599
Feb 25 — comment on “Is a left topological group which is a manifold a topological group?”: That the idea does not work "as is" with $H=K$, where I was looking for a non-continuous $H\rightarrow{}K$ that could then be composed with the canonical $K\rightarrow{}Inn(K)$. But actually I just realized that this can work very easily with $S^1$ and $SO(2)$, considering an isomorphical embedding $S^1\rightarrow{}SO(2)$ and a non-continuous homomorphism $S^1\rightarrow{}S^1$ (which exists). In this way we easily have a non-continuous homomorphism $S^1\rightarrow{}Aut(SO(2))$, and a similar example (that is also compact).
Feb 24 — comment on “Is a left topological group which is a manifold a topological group?”: It is interesting to point out that the same example cannot work with compact semisimple Lie groups (replacing $xe^{f(y)}$ with the conjugacy $yxy^{-1}$, that sends as well $G\rightarrow{}Inn(G)\subseteq{}Aut(G)$), because in this case any automorphism is automatically continuous, as pointed out in mathoverflow.net/a/40700/3680
Feb 24 — revised “Is a left topological group which is a manifold a topological group?”: added 111 characters in body
Feb 24 — comment on “Is a left topological group which is a manifold a topological group?”: Nice example, thanks!
Feb 24 — accepted an answer to “Is a left topological group which is a manifold a topological group?”
Feb 23 — comment on “Is a left topological group which is a manifold a topological group?”: yes, let's also assume $G$ paracompact, with countable atlas as topological manifold. @GeraldEdgar nice theorem, even if I don't see an immediate application to the present case, what is a reference for it?
Feb 22 — awarded Nice Question
Feb 22 — revised “Is a left topological group which is a manifold a topological group?”: added 195 characters in body
Feb 22 — comment on “Is a left topological group which is a manifold a topological group?”: I see... thanks, if you write it as an answer I will accept it. In any case, I was not thinking about this kind of examples, I think I should add that $G$ is supposed to be connected.
Feb 22 — comment on “Is a left topological group which is a manifold a topological group?”: If you make the coset of a one-parameter subgroup open (and hence also closed), how can the group still be a topological manifold?
Feb 21 — asked “Is a left topological group which is a manifold a topological group?”
27 — answered “Ergodicity of composition with a rotation”
29 — awarded Nice Answer
Oct 11 — comment on “A Differential Equation with Nested Functions”: can you provide some background?
12 — awarded Popular Question
31 — awarded Yearling
May 16 — comment on “about the local ring of $\mathbb{Z}_p[T]/(pT^2+T+1)$ at the prime p”: Another way to see immediately that the equation has a solution is via Newton's polygon: it is formed by two sides with slopes 0 and 1, and each corresponds to a non-trivial factor, having degree 1.
1 — awarded Yearling
Jan 9 — comment on “Examples of “Monster” groups”: Perhaps their presentation is not the best way to understand them, in any case.
Jan — accepted an answer to “Is a profinite group with a finite number of simple quotients and Jordan-Hölder factors finitely generated?”
Parseval's identity: Real form
September 16th 2010, 02:46 PM #1
$\|f\|^2 = \sum_{n=-\infty}^{\infty}|c_n|^2$ where $c_n$ are the fourier coefficients.
What does Parseval's identity look like if you use the coefficients from the real form of the fourier series, i.e. $a_n = c_n + c_{-n}$, $b_n = i(c_n - c_{-n})$ and $f \sim \frac{a_0}{2}+ \sum_{n
=1}^{\infty}(a_n\cos(n\Omega t) + b_n\sin(n\Omega t) )$
I've seen it used like this, but I can't figure out how you get here: $\|f\|^2 = \left|\frac{a_0}{2}\right|^2 +\frac{1}{2}\sum_{n=1}^{\infty}(|a_n|^2+|b_n|^2)$
As $c_n=\frac{1}{2}(a_n\pm ib_n)\Longrightarrow |c_n|^2=\frac{1}{4}(a_n^2+b_n^2)$ ...
Both $a_n$ and $b_n$ are complex numbers though, so I don't think what you posted holds in general. $a_n^2 eq |a_n|^2$ in general as well.
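In fact the identity survives complex $a_n, b_n$. Here is a sketch using only the relations above (the factor 1/2 may shift with other interval normalizations): from $a_n = c_n + c_{-n}$ and $b_n = i(c_n - c_{-n})$ we get $c_n = \tfrac{1}{2}(a_n - i b_n)$ and $c_{-n} = \tfrac{1}{2}(a_n + i b_n)$, so

$|c_n|^2 + |c_{-n}|^2 = \tfrac{1}{4}\left(|a_n - i b_n|^2 + |a_n + i b_n|^2\right) = \tfrac{1}{2}\left(|a_n|^2 + |b_n|^2\right)$,

because the cross terms $\mp 2\,\mathrm{Im}(a_n \overline{b_n})$ cancel in the sum. Together with $c_0 = a_0/2$, this gives

$\|f\|^2 = \sum_{n=-\infty}^{\infty}|c_n|^2 = \left|\frac{a_0}{2}\right|^2 + \frac{1}{2}\sum_{n=1}^{\infty}\left(|a_n|^2 + |b_n|^2\right)$,

with no assumption that $a_n$ and $b_n$ are real.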
If you write out the square integral of what you call the real form in extensive form you have terms of the form:
$\int_{t=-\pi/\Omega}^{\pi/\Omega} A_{n,m} \cos(n\Omega t) \sin( m\Omega t)\;dt=0$
$\int_{t=-\pi/\Omega}^{\pi/\Omega} B_{n,m} \sin(n\Omega t) \sin( m\Omega t)\;dt=\frac{1}{2}B_{n,m}\delta_{n,m}$
$\int_{t=-\pi/\Omega}^{\pi/\Omega} C_{n,m} \cos(n\Omega t) \cos( m\Omega t)\;dt=\frac{1}{2}C_{n,m}\delta_{n,m}$
This seems to be a problem of notation: I meant the real coefficients (real and imaginary parts) of the complex ones, and for the same reason the coefficient 1/2 may change, depending on which interval we choose to define our periodic functions on.
Sorry, I don't understand that notation.
It is just a statement that the trig functions form an orthonormal basis for the space of square integrable functions on the interval $(-\pi,\pi]$.
In particular I use Kronecker's delta:
$\delta_{a,b}=\begin{cases} 1 & \text{if } a=b \\ 0 & \text{if } a \neq b \end{cases}$
Electronic Journal of Probability (http://ejp.ejpecp.org/)

The Electronic Journal of Probability (EJP) publishes full-length research articles in probability theory. Short papers, those of fewer than 12 pages, should be submitted first to its sister journal, the Electronic Communications in Probability (ECP, http://ecp.ejpecp.org/). EJP and ECP share the same editorial board, but with different Editors in Chief.

EJP and ECP are free-access official journals of the Institute of Mathematical Statistics (IMS, http://www.imstat.org/) and the Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). This web site uses the Open Journal Systems (OJS) free software developed by the non-profit organization Public Knowledge Project (PKP). Please consider donating to the Open Access Fund of the IMS (http://www.imstat.org/publications/open.htm) to keep the journal free. Language: en-US; ISSN 1083-6489.

The Electronic Journal of Probability applies the Creative Commons Attribution License (CCAL, http://creativecommons.org/licenses/by/2.5/legalcode) to all articles it publishes. Under the CCAL, authors retain ownership of the copyright for their article, but allow anyone to download, reuse, reprint, modify, distribute, and/or copy articles published in EJP, so long as the original authors and source are credited. This broad license was developed to facilitate open access to, and free use of, original works of all types. Applying this standard license to your work will ensure your right to make your work freely and openly available.

Summary of the Creative Commons Attribution License. You are free: to copy, distribute, display, and perform the work; to make derivative works; and to make commercial use of the work, under the following condition of Attribution: others must attribute the work if displayed on the web or stored in any electronic archive by making a link back to the website of EJP via its Digital Object Identifier (DOI), or, if published in other media, by acknowledging prior publication in this Journal with a precise citation including the DOI. For any further reuse or distribution, the same terms apply. Any of these conditions can be waived by permission of the Corresponding Author.

Recent articles (volume 19):

Infinite dimensional forward-backward stochastic differential equations and the KPZ equation (http://ejp.ejpecp.org/article/view/2709). The Kardar-Parisi-Zhang (KPZ) equation is a quasilinear stochastic partial differential equation (SPDE) driven by a space-time white noise. In recent years there have been several works directed towards giving a rigorous meaning to a solution of this equation. Bertini, Cancrini and Giacomin have proposed a notion of a solution through a limiting procedure and a certain renormalization of the nonlinearity. In this work we study connections between the KPZ equation and certain infinite dimensional forward-backward stochastic differential equations. Forward-backward equations with a finite dimensional noise have been studied extensively, mainly motivated by problems in mathematical finance. Equations considered here differ from the classical works in that, in addition to having an infinite dimensional driving noise, the associated SPDE involves a non-Lipschitz (specifically, a quadratic) function of the gradient. Existence and uniqueness of solutions of such infinite dimensional forward-backward equations is established, and the terminal values of the solutions are then used to give a new probabilistic representation for the solution of the KPZ equation. Sergio Almada Monter and Amarjit Budhiraja, 2014-04-04.

Thermodynamic formalism and large deviations for multiplication-invariant potentials on lattice spin systems (http://ejp.ejpecp.org/article/view/3189). We introduce the multiplicative Ising model and prove basic properties of its thermodynamic formalism such as existence of pressure and entropies. We generalize to one-dimensional "layer-unique" Gibbs measures for which the same results can be obtained. For more general models associated to a $d$-dimensional multiplicative invariant potential, we prove a large deviation theorem in the uniqueness regime for averages of multiplicative shifts of general local functions. This thermodynamic formalism is motivated by the statistical properties of multiple ergodic averages. Jean-René Chazottes and Frank Redig, 2014-04-01.

Variance-Gamma approximation via Stein's method (http://ejp.ejpecp.org/article/view/3020). Variance-Gamma distributions are widely used in financial modelling and contain as special cases the normal, Gamma and Laplace distributions. In this paper we extend Stein's method to this class of distributions. In particular, we obtain a Stein equation and smoothness estimates for its solution. This Stein equation has the attractive property of reducing to the known normal and Gamma Stein equations for certain parameter values. We apply these results and local couplings to bound the distance between sums of the form $\sum_{i,j,k=1}^{m,n,r}X_{ik}Y_{jk}$, where the $X_{ik}$ and $Y_{jk}$ are independent and identically distributed random variables with zero mean, by their limiting Variance-Gamma distribution. Through the use of novel symmetry arguments, we obtain a bound on the distance that is of order $m^{-1}+n^{-1}$ for smooth test functions. We end with a simple application to binary sequence comparison. Robert Edward Gaunt, 2014-03-29.

On the expectation of normalized Brownian functionals up to first hitting times (http://ejp.ejpecp.org/article/view/3049). Let $B$ be a Brownian motion and $T_1$ its first hitting time of the level $1$. For $U$ a uniform random variable independent of $B$, we study in depth the distribution of $B_{UT_1}/\sqrt{T_1}$, that is, the rescaled Brownian motion sampled at a uniform time. In particular, we show that this variable is centered. Romuald Elie, Mathieu Rosenbaum and Marc Yor, 2014-03-29.

Müntz linear transforms of Brownian motion (http://ejp.ejpecp.org/article/view/2424). We consider a class of Volterra linear transforms of Brownian motion associated to a sequence of Müntz Gaussian spaces and determine explicitly their kernels; the kernels take a simple form when expressed in terms of Müntz-Legendre polynomials. These are new explicit examples of progressive Gaussian enlargement of a Brownian filtration. We give a necessary and sufficient condition for the existence of kernels of infinite order associated to an infinite dimensional Müntz Gaussian space; we also examine when the transformed Brownian motion remains a semimartingale in the filtration of the original process. This completes some already obtained partial answers to the aforementioned problems in the infinite dimensional case. Larbi Alili and Ching-Tang Wu, 2014-03-22. | {"url":"http://www.emis.de/journals/EJP-ECP/gateway/plugin/WebFeedGatewayPlugin/rsshtml.html","timestamp":"2014-04-19T01:49:50Z","content_type":null,"content_length":"10652","record_id":"<urn:uuid:3e4e0075-2a00-48b9-b6e0-4160e602b53e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus/Multivariable and differential calculus:Exercises
Parametric Equations
1. Find parametric equations describing the line segment from P(0,0) to Q(7,17).
2. Find parametric equations describing the line segment from $P(x_1,y_1)$ to $Q(x_2,y_2)$.
3. Find parametric equations describing the ellipse centered at the origin with major axis of length 6 along the x-axis and the minor axis of length 3 along the y-axis, generated clockwise.
Polar Coordinates
20. Convert the equation into Cartesian coordinates: $r=\sin(\theta)\sec^2(\theta).$
21. Find an equation of the line y=mx+b in polar coordinates.
Sketch the following polar curves without using a computer.
23. $r^2 = 4\cos(\theta)$
Sketch the following sets of points.
25. $\{(r,\theta):\theta=2\pi/3\}$
26. $\{(r,\theta):|\theta|\leq\pi/3\mbox{ and }|r|<3\}$
Calculus in Polar Coordinates
Find points where the following curves have vertical or horizontal tangents.
Sketch the region and find its area.
42. The region inside the limaçon $r = 2+\cos(\theta)$
43. The region inside the petals of the rose $r = 4\cos(2\theta)$ and outside the circle $r=2$
Vectors and Dot Product
60. Find an equation of the sphere with center (1,2,0) passing through the point (3,4,5)
61. Sketch the plane passing through the points (2,0,0), (0,3,0), and (0,0,4)
62. Find the value of $|\mathbf u+3\mathbf v|$ if $\mathbf u = \langle 1,3,0\rangle$ and $\mathbf v = \langle 3,0,2\rangle$
63. Find all unit vectors parallel to $\langle 1,2,3\rangle$
64. Prove one of the distributive properties for vectors in $\mathbb R^3$: $c(\mathbf u + \mathbf v) = c\mathbf u+c\mathbf v$
65. Find all unit vectors orthogonal to $3\mathbf i+4\mathbf j$ in $\mathbb R^2$
66. Find all unit vectors orthogonal to $3\mathbf i+4\mathbf j$ in $\mathbb R^3$
67. Find all unit vectors that make an angle of $\pi/3$ with the vector $\langle 1,2\rangle$
Cross Product
Find $\mathbf u\times\mathbf v$ and $\mathbf v\times\mathbf u$
80. $\mathbf u = \langle -4, 1, 1\rangle$ and $\mathbf v = \langle 0,1,-1\rangle$
81. $\mathbf u = \langle 1,2,-1\rangle$ and $\mathbf v = \langle 3,-4,6\rangle$
Find the area of the parallelogram with sides $\mathbf u$ and $\mathbf v$.
82. $\mathbf u = \langle -3, 0, 2\rangle$ and $\mathbf v = \langle 1,1,1\rangle$
83. $\mathbf u = \langle 8, 2, -3\rangle$ and $\mathbf v = \langle 2,4,-4\rangle$
84. Find all vectors that satisfy the equation $\langle 1,1,1\rangle\times\mathbf u = \langle 0,1,1\rangle$
85. Find the volume of the parallelepiped with edges given by position vectors $\langle 5,0,0\rangle$, $\langle 1,4,0\rangle$, and $\langle 2,2,7\rangle$
86. A wrench has a pivot at the origin and extends along the x-axis. Find the magnitude and the direction of the torque at the pivot when the force $\mathbf F = \langle 1,2,3\rangle$ is applied to
the wrench n units away from the origin.
Prove the following identities or show them false by giving a counterexample.
87. $\mathbf u\times (\mathbf u\times \mathbf v) = \mathbf 0$
88. $\mathbf u\cdot (\mathbf v \times \mathbf w) = \mathbf w\cdot (\mathbf u \times \mathbf v)$
89. $(\mathbf u - \mathbf v)\times(\mathbf u + \mathbf v) = 2(\mathbf u\times\mathbf v)$
Calculus of Vector-Valued Functions
100. Differentiate $\mathbf r(t) = \langle te^{-t}, t\ln t, t\cos(t)\rangle$.
101. Find a tangent vector for the curve $\mathbf r(t) = \langle 2t^4, 6t^{3/2}, 10/t \rangle$ at the point $t = 1$.
102. Find the unit tangent vector for the curve $\mathbf r(t) = \langle t, 2, 2/t \rangle,\ t eq 0$.
103. Find the unit tangent vector for the curve $\mathbf r(t) = \langle \sin(t), \cos(t), e^{-t}\rangle,\ t\in[0,\pi]$ at the point $t = 0$.
104. Find $\mathbf r$ if $\mathbf r'(t) = \langle \sqrt t, \cos(\pi t), 4/t\rangle$ and $\mathbf r(1) = \langle 2,3,4\rangle$.
105. Evaluate $\displaystyle\int_0^{\ln 2}(e^{-t}\mathbf i+2e^{2t}\mathbf j-4e^{t}\mathbf k)dt$
Motion in Space
120. Find velocity, speed, and acceleration of an object if the position is given by $\mathbf r(t) = \langle 3\sin(t),5\cos(t),4\sin(t)\rangle$.
121. Find the velocity and the position vectors for $t\geq 0$ if the acceleration is given by $\mathbf a(t) = \langle e^{-t},1\rangle,\ \mathbf v(0) = \langle 1,0\rangle,\ \mathbf r(0) = \langle 0,0\rangle.$
Length of Curves
Find the length of the following curves.
140. $\mathbf r(t) = \langle4\cos(3t),4\sin(3t)\rangle,\ t \in [0,2\pi/3].$
141. $\mathbf r(t) = \langle2+3t,1-4t,3t-4\rangle,\ t \in [1,6].$
Parametrization and Normal Vectors
142. Find a description of the curve that uses arc length as a parameter: $\mathbf r(t) = \langle t^2,2t^2,4t^2\rangle\ t\in[1,4].$
143. Find the unit tangent vector T and the principal unit normal vector N for the curve $\mathbf r(t) = \langle t^2,t\rangle.$ Check that T⋅N=0.
Equations of Lines And Planes
160. Find an equation of a plane passing through points $(1,1,2),\ (1,2,2),\ (-1,0,1).$
161. Find an equation of a plane parallel to the plane 2x−y+z=1 passing through the point (0,2,-2)
162. Find an equation of the line perpendicular to the plane x+y+2z=4 passing through the point (5,5,5).
163. Find an equation of the line where planes x+2y−z=1 and x+y+z=1 intersect.
164. Find the angle between the planes x+2y−z=1 and x+y+z=1.
165. Find the distance from the point (3,4,5) to the plane x+y+z=1.
Limits And Continuity
Evaluate the following limits.
180. $\displaystyle\lim_{(x,y)\rightarrow(1,-2)}\frac{y^2+2xy}{y+2x}$
181. $\displaystyle\lim_{(x,y)\rightarrow(4,5)}\frac{\sqrt{x+y}-3}{x+y-9}$
At what points is the function f continuous?
183. $f(x,y) = \displaystyle\frac{\ln(x^2 + y^2)}{x-y+1}$
Use the two-path test to show that the following limits do not exist. (A path does not have to be a straight line.)
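For instance (a worked illustration on a limit that is not in the list below): for $\displaystyle\lim_{(x,y)\rightarrow(0,0)}\frac{xy}{x^2+y^2}$, along the path $y=0$ the quotient is identically $0$, while along $y=x$ it equals $\frac{x^2}{2x^2}=\frac12$; since two paths give different limiting values, the limit does not exist.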
184. $\displaystyle\lim_{(x,y)\rightarrow(0,0)}\frac{4xy}{3x^2+y^2}$
185. $\displaystyle\lim_{(x,y)\rightarrow(0,0)}\frac{y}{\sqrt{x^2-y^2}}$
186. $\displaystyle\lim_{(x,y)\rightarrow(0,0)}\frac{x^3-y^2}{x^3+y^2}$
187. $\displaystyle\lim_{(x,y)\rightarrow(0,0)}\frac{x^2y^2+y^6}{x^3}$
Partial Derivatives
200. Find $\partial z / \partial x$ if $\displaystyle z(x,y) = \frac1{\ln(xy)}$
201. Find all three partial derivatives of the function $\displaystyle f(x,y,z) = xe^{y^2 + z}$
Find the four second partial derivatives of the following functions.
Chain Rule
Find $df/dt.$
220. $f(x,y) = x^2y-xy^3,\ x(t) = t^2,\ y(t) = t^{-2}$
221. $f(x,y) = \sqrt{x^2+y^2},\ x(t) = \cos(2t),\ y(t) = \sin(2t)$
222. $\displaystyle f(x,y,z) = \frac{x-y}{y+z},\ x(t) = t,\ \displaystyle y(t) = 2t,\ z(t) = 3t$
Find $f_s,\ f_t.$
223. $f(x,y) = \sin(x)\cos(2y),\ x=s+t,\ y=s-t$
224. $\displaystyle f(x,y,z) = \frac{x-z}{y+z},\ x(t)=s+t,\ y(t)=st,\ z(t)=s-t$
225. The volume of a pyramid with a square base is $V = \frac13x^2h$, where x is the side of the square base and h is the height of the pyramid. Suppose that $\displaystyle x(t) = \frac t{t+1}$ and $
\displaystyle h(t) = \frac1{t+1}$ for $t\geq 0.$ Find $V'(t).$
Tangent Planes
Find an equation of a plane tangent to the given surface at the given point(s).
240. $xy\sin(z) = 1,\ (1,2,\pi/6),\ (-1,-2,5\pi/6).$
241. $z = x^2e^{x-y},\ (2,2,4),\ (-1,-1,1).$
242. $z = \tan^{-1}(x+y),\ (0,0,0).$
243. $\sin(xyz) = 1/2,\ (\pi,1,1/6).$
Maximum And Minimum Problems
Find critical points of the function f. When possible, determine whether each critical point corresponds to a local maximum, a local minimum, or a saddle point.
260. $f(x,y) = x^4 + 2y^2 - 4xy$
261. $f(x,y) = \tan^{-1}(xy)$
262. $f(x,y) = 2xye^{-x^2-y^2}$
Find absolute maximum and minimum values of the function f on the set R.
263. $f(x,y) = x^2+y^2-2y+1,\ R=\{(x,y)\mid x^2+y^2\leq 4\}$
264. $f(x,y) = x^2+y^2-2x-2y,$ R is a closed triangle with vertices (0,0), (2,0), and (0,2).
265. Find the point on the plane x−y+z=2 closest to the point (1,1,1).
266. Find the point on the surface $z = x^2+y^2+10$ closest to the plane $x+2y-z=0.$
Double Integrals over Rectangular Regions
Evaluate the given integral over the region R.
280. $\displaystyle\iint_R (x^2+xy)dA,\ R = \{(x,y)\mid x\in[1,2],\ y\in[-1,1]\}$
281. $\displaystyle\iint_R (xy\sin(x^2))dA,\ R = \{(x,y)\mid x\in[0,\sqrt{\pi/2}],\ y\in[0,1]\}$
282. $\displaystyle\iint_R \frac{x}{(1+xy)^2}dA,\ R = \{(x,y)\mid x\in[0,4],\ y\in[1,2]\}$
Evaluate the given iterated integrals.
283. $\displaystyle\int_0^2\int_0^1 x^5y^2e^{x^3y^3}dydx$
284. $\displaystyle\int_1^4\int_0^2 e^{y\sqrt x}dydx$
Double Integrals over General Regions
Evaluate the following integrals.
300. $\displaystyle\iint_R xy dA,$ R is bounded by x=0, y=2x+1, and y=5−2x.
301. $\displaystyle\iint_R (x+y) dA,$R is in the first quadrant and bounded by x=0, $y=x^2,$ and $y=8 - x^2.$
Use double integrals to compute the volume of the given region.
302. The solid in the first octant bound by the coordinate planes and the surface $z = 8-x^2-2y^2.$
303. The solid beneath the cylinder $z=y^2$ and above the region $R = \{(x,y)\mid y\in[0,1],\ x\in[y,1]\}.$
304. The solid bounded by the paraboloids $z=x^2+y^2$ and $z = 50-x^2-y^2.$
Double Integrals in Polar Coordinates
320. Evaluate $\displaystyle\iint_R 2xy dA$ for $R=\{(r,\theta)\mid r\in[1,3],\ \theta\in[0,\pi/2]\}$
321. Find the average value of the function $f(r,\theta) = 1/r^2$ over the region $\{(r,\theta)\mid r\in[2,4]\}.$
322. Evaluate $\displaystyle\int_0^3\int_0^{\sqrt{9-x^2}} \sqrt{x^2+y^2}dydx.$
323. Evaluate $\displaystyle\iint_R \frac{x-y}{x^2+y^2+1}dA$ if R is the unit disk centered at the origin.
Triple Integrals
340. Evaluate $\displaystyle\int_1^{\ln 8}\int_0^{\ln 4}\int_0^{\ln 2} e^{-x-y-2z}dxdydz.$
In the following exercises, sketching the region of integration may be helpful.
341. Find the volume of the solid in the first octant bounded by the plane 2x+3y+6z=12 and the coordinate planes.
342. Find the volume of the solid in the first octant bounded by the cylinder $z=\sin(y)$ for $y\in[0,\pi]$, and the planes y=x and x=0.
343. Evaluate $\displaystyle\int_0^1\int_y^{2-y}\int_0^{2-x-y} xydzdxdy.$
344. Rewrite the integral $\displaystyle\int_0^1\int_{-2}^2\int_0^{\sqrt{4-y^2}}dzdydx$ in the order dydzdx.
Cylindrical And Spherical Coordinates
360. Evaluate the integral in cylindrical coordinates: $\displaystyle\int_0^3\int_{0}^{\sqrt{9-x^2}}\int_0^{\sqrt{x^2+y^2}}\frac1{\sqrt{x^2+y^2}}dzdydx$
361. Find the mass of the solid cylinder $D = \{(r,\theta,z)\mid r\in[0,3],\ z\in[0,2]\}$ given the density function $\delta(r,\theta,z) = 5e^{-r^2}$
362. Use a triple integral to find the volume of the region bounded by the plane z=0 and the hyperboloid $z = \sqrt{17} - \sqrt{1+x^2+y^2}$
363. If D is a unit ball, use a triple integral in spherical coordinates to evaluate $\iiint_D(x^2+y^2+z^2)^{5/2}dV$
364. Find the mass of a solid cone $\{(\rho,\phi,\theta)\mid \phi\leq\pi/3,\ z\in[0,4]\}$ if the density function is $\delta(\rho,\phi,\theta) = 5-z$
365. Find the volume of the region common to two cylinders: $x^2+z^2 = 1,\ y^2+z^2 = 1$
Center of Mass and Centroid
380. Find the center of mass for three particles located in space at (1,2,3), (0,0,1), and (1,1,0), with masses 2, 1, and 1 respectively.
381. Find the center of mass for a piece of wire with the density $\rho(x) = 1+\sin(x)$ for $x\in[0,\pi].$
382. Find the center of mass for a piece of wire with the density $\rho(x) = 2-x^2/16$ for $x\in[0,4].$
383. Find the centroid of the region in the first quadrant bounded by the coordinate axes and $x^2+y^2=16.$
384. Find the centroid of the region in the first quadrant bounded by $y=\ln(x)$, $y=0$, and $x=e$.
385. Find the center of mass for the region $\{(x,y)\mid x\in[0,4], y\in[0,2]\}$, with the density $\rho(x,y) = 1+x/2.$
386. Find the center of mass for the triangular plate with vertices (0,0), (0,4), and (4,0), with density $\rho(x,y) = 1+x+y.$
Vector Fields
One can sketch two-dimensional vector fields by plotting vector values, flow curves, and/or equipotential curves.
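For example, one quick way to do this in Python with matplotlib's quiver (a suggested sketch, not part of the original exercise set), using the field of exercise 401 below, $\nabla\sqrt{x^2+y^2}=\langle x,y\rangle/\sqrt{x^2+y^2}$:

import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
r = np.hypot(x, y) + 1e-9            # avoid division by zero at the origin
plt.quiver(x, y, x / r, y / r)       # the unit radial field <x, y>/r
plt.gca().set_aspect('equal')
plt.show()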
401. Find and sketch the gradient field $\mathbf F = \nabla\phi$ for the potential function $\phi(x,y) = \sqrt{x^2+y^2}$.
402. Find and sketch the gradient field $\mathbf F = \nabla\phi$ for the potential function $\phi(x,y) = \sin(x)\sin(y)$ for $|x|\leq\pi$ and $|y|\leq\pi$.
403. Find the gradient field $\mathbf F = \nabla\phi$ for the potential function $\phi(x,y,z) = e^{-z}\sin(x+y)$
Line Integrals
420. Evaluate $\int_C(x^2+y^2)ds$ if C is the line segment from (0,0) to (5,5)
421. Evaluate $\int_C(x^2+y^2)ds$ if C is the circle of radius 4 centered at the origin
422. Evaluate $\int_C(y-z)ds$ if C is the helix $\mathbf r(t) = \langle 3\cos(t),3\sin(t),t\rangle,\ t\in[0,2\pi]$
423. Evaluate $\int_C \mathbf F\cdot d\mathbf r$ if $\mathbf F =\langle x,y\rangle$ and C is the arc of the parabola $\mathbf r(t) = \langle 4t,t^2\rangle,\ t\in[0,1]$
424. Find the work required to move an object from (1,1,1) to (8,4,2) along a straight line in the force field $\displaystyle\mathbf F = \frac{\langle x,y,z\rangle}{x^2+y^2+z^2}$
Conservative Vector Fields
Determine if the following vector fields are conservative on $\mathbb R^2.$
440. $\langle -y, x+y \rangle$
441. $\langle 2x^3+xy^2, 2y^3+x^2y\rangle$
Determine if the following vector fields are conservative on their respective domains in $\mathbb R^3.$ When possible, find the potential function.
442. $\langle y,x,1\rangle$
443. $\langle x^3,2y,-z^3 \rangle$
Green's Theorem
460. Evaluate the circulation of the field $\mathbf F=\langle 2xy,x^2-y^2\rangle$ over the boundary of the region above y=0 and below y=x(2-x) in two different ways, and compare the answers.
461. Evaluate the circulation of the field $\mathbf F=\langle 0,x^2+y^2\rangle$ over the unit circle centered at the origin in two different ways, and compare the answers.
462. Evaluate the flux of the field $\mathbf F=\langle y,-x\rangle$ over the square with vertices (0,0), (1,0), (1,1), and (0,1) in two different ways, and compare the answers.
Divergence And Curl
480. Find the divergence of $\langle 2x, 4y, -3z\rangle$
481. Find the divergence of $\displaystyle\frac{\langle x,y,z\rangle}{1+x^2+y^2}$
482. Find the curl of $\langle x^2-y^2, xy, z\rangle$
483. Find the curl of $\langle z^2\sin(y), xz^2\cos(y), 2xz\sin(y)\rangle$
484. Prove that the general rotation field $\mathbf F = \mathbf a\times\mathbf r$, where $\mathbf a$ is a non-zero constant vector and $\mathbf r = \langle x,y,z\rangle$, has zero divergence, and the
curl of $\mathbf F$ is $2\mathbf a$.
Surface Integrals
500. Give a parametric description of the plane $2x-4y+3z=16.$
501. Give a parametric description of the hyperboloid $z^2=1+x^2+y^2.$
502. Integrate $f(x,y,z) = xy$ over the portion of the plane z=2−x−y in the first octant.
503. Integrate $f(x,y,z) = x^2 + y^2$ over the paraboloid $z=x^2+y^2,\ z\in[0,4].$
504. Find the flux of the field $\mathbf F = \langle x,y,z\rangle$ across the surface of the cone $z^2=x^2+y^2, \ z\in[0,1],$ with normal vectors pointing in the positive z-direction.
505. Find the flux of the field $\mathbf F = \langle -y,z,1\rangle$ across the surface $y=x^2, \ z\in[0,4],\ x\in[0,1],$ with normal vectors pointing in the positive y-direction.
Stokes' Theorem
520. Use a surface integral to evaluate the circulation of the field $\mathbf F = \langle x^2 - z^2, y, 2xz \rangle$ on the boundary of the plane $z = 4-x-y$ in the first octant.
521. Use a surface integral to evaluate the circulation of the field $\mathbf F = \langle y^2,-z^2,x \rangle$ on the circle $\mathbf r(t) = \langle 3\cos(t),4\cos(t),5\sin(t) \rangle.$
522. Use a line integral to find $\iint_S(\nabla\times \mathbf F)\cdot\mathbf n\, dS$ where $\mathbf F = \langle x,y,z \rangle$, $S$ is the upper half of the ellipsoid $\frac{x^2}4 + \frac{y^2}9 + z^2 = 1$, and $\mathbf n$ points in the direction of the z-axis.
523. Use a line integral to find $\iint_S(\nabla\times \mathbf F)\cdot\mathbf n\, dS$
where $\mathbf F = \langle 2y,-z,x-y-z \rangle$, $S$ is the part of the sphere $x^2 + y^2 + z^2 = 25$ for $3 \leq z \leq 5$, and $\mathbf n$ points in the direction of the z-axis.
Divergence Theorem
Compute the net outward flux of the given field across the given surface.
540. $\mathbf F = \langle x, -2y, 3z \rangle$, $S$ is a sphere of radius $\sqrt 6$ centered at the origin.
541. $\mathbf F = \langle x, 2y, z \rangle$, $S$ is the boundary of the tetrahedron in the first octant bounded by $x+y+z=1$
542. $\mathbf F = \langle y+z, x+z, x+y \rangle$, $S$ is the boundary of the cube $\{(x,y,z)\mid |x|\leq 1, |y|\leq 1, |z|\leq 1\}$
543. $\mathbf F = \langle x, y, z \rangle$, $S$ is the surface of the region bounded by the paraboloid $z=4-x^2-y^2$ and the xy-plane.
544. $\mathbf F = \langle z-x, x-y, 2y-z \rangle$, $S$ is the boundary of the region between the concentric spheres of radii 2 and 4, centered at the origin.
545. $\mathbf F = \langle x, 2y, 3z \rangle$, $S$ is the boundary of the region between the cylinders $x^2+y^2=1$ and $x^2+y^2=4$ and cut off by planes $z=0$ and $z=8$
| {"url":"http://en.m.wikibooks.org/wiki/Calculus/Multivariable_and_differential_calculus:Exercises","timestamp":"2014-04-17T09:43:16Z","content_type":null,"content_length":"129616","record_id":"<urn:uuid:eec44940-492c-4746-a287-b07690e204a6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
How the Initial Proof Searcher May Use BIOPS to Solve the First Proof Search Task
BIOPS first invokes a variant of Levin's universal search [22] (Levin attributes similar ideas to Adleman [24]). Universal search is a simple, asymptotically optimal [22,24,25,16,49],
near-bias-optimal [38,40] way of solving a broad class of problems whose solutions can be quickly verified. It was originally described for universal Turing machines with unlimited storage [22]. In
realistic settings, however, we have to introduce a recursive procedure [38,40] for time-optimal backtracking in program space to perform efficient storage management on realistic, limited computers.
Previous practical variants and extensions of universal search have been applied [33,35,46,38,40] to offline program search tasks where the program inputs are fixed such that the same program always produces the same results.
This is not the case in the present online setting: the same proof technique started at different times may yield different proofs, as it may read parts of the machine's state (such as its inputs) that change as its life proceeds.
For convenience, let us first rename the storage writable by proof techniques: we place switchprog, proof, and all other proof technique-writable storage cells in a common address space called temp. To allow efficient undoing of temp changes, we introduce global Boolean variables mark(i) (initially FALSE) for all temp addresses i, and obtain Method 2.1:
Method 2.1 In the i-th phase (i = 1, 2, 3, ...) DO:
Make an empty stack called Stack. FOR all self-delimiting proof techniques w satisfying P(w) > 2^{-i} DO:
1. Run w until it halts or produces an error, or until 2^i P(w) steps have been consumed. Whenever the execution of w changes a temp cell whose mark is FALSE, set that mark to TRUE and save the cell's previous value on Stack.
2. Pop off all elements of Stack and use the information contained therein to undo the effects of w, resetting the corresponding marks to FALSE. This does not cost significantly more time than executing w.
Method 2.1 is conceptually very simple: it essentially just time-shares all program tests such that each program gets at most a constant fraction of the total search time. Note that certain long
proofs producible by short programs (with repetitive loops etc.) are tested early by this method. This is one major difference between BIOPS and more traditional brute force proof searchers that
systematically order proofs by their sizes, instead of the sizes (or probabilities) of their proof-computing proof techniques.
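As a rough illustration of this time-sharing (our own sketch in Python, not code from the paper; the interfaces programs, prior, run, and is_solution are hypothetical placeholders):

from itertools import count

def universal_search(programs, prior, run, is_solution):
    """programs: a finite list of candidates; prior(w): probability weight of w;
    run(w, steps): result of running w for at most `steps` steps, or None;
    is_solution(out): quick verifier for candidate outputs."""
    for i in count(1):                       # phase i = 1, 2, 3, ...
        for w in programs:
            p = prior(w)
            if p <= 2.0 ** -i:
                continue                     # too improbable for this phase
            out = run(w, int(2.0 ** i * p))  # bounded run; caller undoes side effects
            if out is not None and is_solution(out):
                return w, out

Across all phases, a program of prior weight P(w) receives a fraction of the total number of steps roughly proportional to P(w), which is the time-sharing property described above.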
None of the proof techniques can produce an incorrect proof, due to the nature of the theorem-generating instructions from Section 2.2.1. A proof technique can interrupt Method 2.1 only by invoking the instruction check(), which may transfer control to switchprog (which possibly will delete or rewrite Method 2.1).
Clearly, since the initial …
Juergen Schmidhuber, 2003-10-28 | {"url":"http://www.idsia.ch/~juergen/gmweb2/node10.html","timestamp":"2014-04-19T04:20:33Z","content_type":null,"content_length":"10678","record_id":"<urn:uuid:9367389f-9d65-4b29-a7a4-821b33b3ae8e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Bias in predictions from log-log regression
Replies: 1 Last Post: Jun 21, 1996 10:07 AM
Bias in predictions from log-log regression
Posted: Jun 21, 1996 5:37 AM
I have regression data in which both x and y variables require a log
transformation, i.e., the appropriate equation is ln(y) = a + b ln(x) + e
Predictions of y from this equation will be biased because of the
transformation. In fact it will provide predictions of the geometric mean of y
for a given x. My inclination is to add half the mean square error to the
intercept to correct for the bias. Is this a valid procedure? Is there any
other approach that will give unbiased predictions in this situation?
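A sketch of that correction in Python (the function name is ours, not from any standard package), under the assumption that the errors are Gaussian on the log scale: since ln(y) = a + b ln(x) + e with Var(e) = s2 implies E[y | x] = exp(a + s2/2) * x^b, one adds half the residual variance (estimated by the mean square error) to the intercept before back-transforming:

import numpy as np

def fit_loglog_with_correction(x, y):
    lx, ly = np.log(x), np.log(y)
    b, a = np.polyfit(lx, ly, 1)               # slope b, intercept a
    s2 = np.var(ly - (a + b * lx), ddof=2)     # residual variance (mean square error)
    def predict(x_new):                        # bias-corrected mean prediction
        return np.exp(a + s2 / 2 + b * np.log(x_new))
    return a, b, s2, predict

Without the s2/2 term, exp(a + b ln(x)) estimates the median (geometric mean) of y rather than its mean, which is exactly the bias described above.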
Date Subject Author
6/21/96 Bias in predictions from log-log regression Mark Kimberley
6/21/96 Re: Bias in predictions from log-log regression AaCBrown | {"url":"http://mathforum.org/kb/thread.jspa?threadID=494231","timestamp":"2014-04-19T21:02:56Z","content_type":null,"content_length":"17495","record_id":"<urn:uuid:2c2631f6-4529-4423-9ca0-257bfaac777a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry Tutors
Tenafly, NJ 07670
Math Made Easy & Computer Skills Too
...I can help you prepare for the math problem-solving section of the SSAT, as well as the reading and writing sections. I can help you prepare for the math portion of the ACT test, including
algebra, plane and coordinate
, and trigonometry. I can help...
Offering 10+ subjects including geometry | {"url":"http://www.wyzant.com/Ridgewood_NJ_geometry_tutors.aspx","timestamp":"2014-04-21T00:17:37Z","content_type":null,"content_length":"61070","record_id":"<urn:uuid:c89920f5-1509-477b-a83e-87256027fe55>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question About Binary Division
How would I divide these two binary numbers? 100101/011011 This is assuming I am on a 6bit 2's complement machine. I would understand how to do it if the first number were not negative. How do I
approach this? Thank you!
Why not negate the first number and then make the answer negative at the end? Edit: Although I did verify that the usual "subtract until you get 0" method works here as well; you subtract 011011 from
100101 a total of 111111 times to get to 0, for a quotient of 111111 = -1 which you would expect from -27/27.
Why not negate the first number and then make the answer negative at the end? Is this how the computer would operate on this problem?
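Here is a small Python sketch of the negate-divide-fix-the-sign idea from the reply above (illustrative only: it simulates a hypothetical 6-bit two's-complement word in software and is not how a hardware divider works):

BITS = 6

def to_int(bits):
    """Interpret a 6-bit string as a two's-complement integer."""
    v = int(bits, 2)
    return v - (1 << BITS) if bits[0] == '1' else v

def to_bits(v):
    """Encode an integer back into 6-bit two's complement."""
    return format(v & ((1 << BITS) - 1), '06b')

def tc_divide(a_bits, b_bits):
    a, b = to_int(a_bits), to_int(b_bits)
    q = abs(a) // abs(b)                 # divide the magnitudes
    if (a < 0) != (b < 0):               # quotient is negative iff signs differ
        q = -q
    return to_bits(q)

print(tc_divide('100101', '011011'))     # -27 / 27 -> '111111' (i.e., -1)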
| {"url":"http://cboard.cprogramming.com/tech-board/112198-question-about-binary-division.html","timestamp":"2014-04-18T13:46:06Z","content_type":null,"content_length":"49952","record_id":"<urn:uuid:72af24ec-d546-40db-9f93-afb93d1e29e6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Of P-Values and Bayes: A Modest Proposal : Epidemiology
I am delighted to be invited to comment on the use of P-values, but at the same time, it depresses me. Why? So much brainpower, ink, and passion have been expended on this subject for so long, yet
plus ça change, plus c'est la même chose – the more things change, the more they stay the same. The references on this topic encompass innumerable disciplines, going back almost to the moment that P
-values were introduced (by R.A. Fisher in the 1920s). The introduction of hypothesis testing in 1933 precipitated more intense engagement, caused by the subsuming of Fisher’s “significance test”
into the hypothesis test machinery. ^1–9 The discussion has continued ever since. I have been foolish enough to think I could whistle into this hurricane and be heard. ^10–12 But we (and I) still use
P-values. And when a journal like Epidemiology takes a principled stand against them, ^13 epidemiologists who may recognize the limitations of P-values still feel as if they are being forced to walk
on one leg. ^14
So why do those of us who criticize the use of P-values bother to continue doing so? Isn’t the “real world” telling us something – that we are wrong, that the effort is quixotic, or that this is too
trivial an issue for epidemiologists to spend time on? Admittedly, this is not the most pressing methodologic issue facing epidemiologists. Still, I will try to argue that the topic is worthy of
serious consideration.
Let me begin with an observation. When epidemiologists informally communicate their results (in talks, meeting presentations, or policy discussions), the balance between biology, methodology, data,
and context is often appropriate. There is an emphasis on presenting a coherent epidemiologic or pathophysiologic “story,” with comparatively little talk of statistical “rejection” or other related
tomfoolery. But this same sensibility is often not reflected in published papers. Here, the structure of presentation is more rigid, and statistical summaries seem to have more power. Within these
confines, the narrative flow becomes secondary to the distillation of complex data, and inferences seem to flow from the data almost automatically. It is this automaticity of inference that is most
distressing, and for which the elimination of P-values has been attempted as a curative.
Although I applaud the motivation of attempts to eliminate P-values, they have failed in the past and I predict that they will continue to fail. This is because they treat the symptoms and not the
underlying mindset, which must be our target. We must change how we think about science itself.
I and others have discussed the connections between statistics and scientific philosophy elsewhere, ^11,12,15–22 so I will cut to the chase here. The root cause of our problem is a philosophy of
scientific inference that is supported by the statistical methodology in dominant use. This philosophy might best be described as a form of “naïve inductivism,”^23 a belief that all scientists seeing
the same data should come to the same conclusions. By implication, anyone who draws a different conclusion must be doing so for nonscientific reasons. It takes as given the statistical models we
impose on data, and treats the estimated parameters of such models as direct mirrors of reality rather than as highly filtered and potentially distorted views. It is a belief that scientific
reasoning requires little more than statistical model fitting, or in our case, reporting odds ratios, P-values and the like, to arrive at the truth.
How is this philosophy manifest in research reports? One merely has to look at their organization. Traditionally, the findings of a paper are stated at the beginning of the discussion section. It is
as if the finding is something derived directly from the results section. Reasoning and external facts come afterward, if at all. That is, in essence, naïve inductivism. This view of the scientific
enterprise is aided and abetted by the P-value in a variety of ways, some obvious, some subtle. The obvious way is in its role in the reject/accept hypothesis test machinery. The more subtle way is
in the fact that the P-value is a probability – something absolute, with nothing external needed for its interpretation.
Now let us imagine another world – a world in which we use an inferential index that does not tell us where we stand, but how much distance we have covered. Imagine a number that does not tell us
what we know, but how much we have learned. Such a number could lead us to think very differently about the role of data in making inferences, and in turn lead us to write about our data in a
profoundly different manner.
This is not an imaginary world; such a number exists. It is called the Bayes factor. ^15,17,25 It is the data component of Bayes Theorem. The odds we put on the null hypothesis (relative to others)
using data external to a study is called the “prior odds,” and the odds after seeing the data is the “posterior odds.” The Bayes factor tells us how far apart those odds are, ie, the degree to which
the data from a study move us from our initial position. It is quite literally an epistemic odds ratio, the ratio of posterior to prior odds, although it is calculable from the data, without those
odds. It is the ratio of the data’s probability under two competing hypotheses. ^15,17
If we have a Bayes factor equal to 1/10 for the null hypothesis relative to the alternative hypothesis, it means that these study results have decreased the relative odds of the null hypothesis by
10-fold. For example, if the initial odds of the null were 1 (ie, a probability of 50%), then the odds after the study would be 1/10 (a probability of 9%). Suppose that the probability of the null
hypothesis is high to begin with (as they typically are in data dredging settings), say an odds of 9 (90%). Then a 10-fold decrease would change the odds of the null hypothesis to 9/10 (a probability
of 47%), still quite probable. The Bayes factor is a measure of evidence in the same way evidence is viewed in a legal setting, or informally by scientists. Evidence moves us in the direction of
greater or lesser doubt, but except in extreme cases it does not dictate guilt or innocence, truth or falsity.
I should warn readers knowledgeable in Bayesian methods to stop here. They may be severely disappointed (or even horrified) by the proposal I am about to make. I suggest that the Bayes factor does
not necessarily have to be derived from a standard Bayesian analysis, although I would prefer that it were. As a simple alternative, it is possible instead to use the minimum Bayes factor (for the
null hypothesis). ^26 The appeal of the minimum Bayes factor is that it is calculated from the same information that goes into the P-value, and can easily be derived from standard analytic results,
as described below. Quantitatively, it is only a small step from the P-value (and shares the liability of confounding the effect size with its precision). But conceptually, it is a huge leap. I
recommend it not as a cure-all, but as a practical first step toward methodologic sanity.
The calculation goes like this. If a statistical test is based on a Gaussian approximation (as they are in many epidemiologic analyses), the strongest Bayes factor against the null hypothesis is exp
(−Z^2/2), where Z is the number of standard errors from the null value. Thus it can be applied to most regression coefficients (whose significance is typically based on some form of normal
approximation) and contingency tables. (When the t-statistic is used, it can substitute for Z.) If the log-likelihood of a model is reported, the minimum Bayes factor is simply the exponential of the
difference between the log-likelihoods of two competing models (ie, the ratio of their maximum likelihoods). This likelihood-ratio (the minimum Bayes factor) is the basis for most frequentist
analyses. While it is invariably converted into a P-value, it has inferential meaning without such conversion.
The minimum Bayes factor described above does not involve a prior probability distribution over non-null hypotheses; it is a global minimum for all prior distributions. However, there is also a
simple formula for the minimum Bayes factor in the situation where the prior probability distribution is symmetric and descending around the null value. This is −e p ln(p), ^27,28 where p is the
fixed-sample size P-value. The table shows the correspondence between P-values, Z- (or t-) scores, and the two forms of minimum Bayes factors described above. Note that even the strongest evidence
against the null hypothesis does not lower its odds as much as the P-value magnitude might lead people to believe. More importantly, the minimum Bayes factor makes it clear that we cannot estimate
the credibility of the null hypothesis without considering evidence outside the study.
This translation from P-value to minimum Bayes factor is not merely a recalibration of our evidential measure, like converting from Fahrenheit to Celsius. By assessing the result with a minimum Bayes
factor, we bring into play a different conceptual framework, which requires us to separate statistical results from inductive inferences. Reading from Table 1, a P-value of 0.01 represents a “weight
of evidence” for the null hypothesis of somewhere between 1/25 (0.04) and 1/8 (0.13). In other words, the relative odds of the null hypothesis vs any alternative are at most 8–25 times lower than
they were before the study. If I am going to make a claim that a null effect is highly unlikely (eg, less than 5%), it follows that I should have evidence outside the study that the prior probability
of the null was no greater than 60%. If the relationship being studied is far-fetched (eg, the probability of the null was greater than 60%), the evidence may still be too weak to make a strong
knowledge claim. Conversely, even weak evidence in support of a highly plausible relationship may be enough for an author to make a convincing case. ^15,17
The use of the Bayes factor could give us a different view of results and discussion sections. In the results section, both the data and model-based data summaries are presented. (The choice of a
mathematical model can be regarded as an inferential step, but I will not explore that here.) This can be followed by an index like the Bayes factor if two hypotheses are to be contrasted. The
discussion section should then serve as a bridge between these indices and the conclusions. The components of this bridge are the plausibility of the proposed mechanisms, (drawing on laboratory,
other experimental evidence and patterns within this data), other empirical results related to this hypothesis and the qualitative strength of the current study’s design and execution.
P-values need not be banned, although I would be happy to see them go. (When I see them, I translate them into approximate Bayes factors.) But we should certainly ban inferential reasoning based on
the naïve use of P-values and hypothesis tests, and their various partners in crime, eg, stepwise regression (which chooses regression terms based exclusively on statistical significance, widely
recognized as egregiously biased and misleading). ^29,30 Even without formal Bayesian analysis, the use of minimum Bayes factors (along with, or in lieu of, P-values) might provide an antidote for
the worst inferential misdeeds. More broadly, we should incorporate a Bayesian framework into our writing, and not just our speaking. We should describe our data as one source of information among
many that make a relationship either plausible or unlikely. The use of summaries such as the Bayes factor encourages that, while use of the P-value makes it nearly impossible.
Changing the P-value culture is just a beginning. We utilize powerful tools to organize data and to guess at the reality which gave rise to them. We need to remember that these tools can create their
own virtual reality. ^17,30,31 The object of our study must be nature itself, not artifacts of the tools we use to probe its secrets. If we approach our data with respect for their complexity, with
humility about our ability to sort that out, and with detailed knowledge of the phenomena under study, we will serve our science and the public health well. From that perspective, whether or not we
use P-values seems, well, insignificant. | {"url":"http://journals.lww.com/epidem/Fulltext/2001/05000/Of_P_Values_and_Bayes__A_Modest_Proposal.6.aspx","timestamp":"2014-04-20T23:48:20Z","content_type":null,"content_length":"214076","record_id":"<urn:uuid:5d351281-19e9-4d86-b1f1-1e0af462ab1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
A lattice of interacting chemical oscillators
At Brandeis, there is a long tradition of interesting experiments on the Belousov-Zhabostinsky reaction system, with the legendary Zhabotinsky himself having been a part of the fraternity. This
reaction system shows interesting oscillatory and stable patterns (see videos on Youtube). In the Fraden lab, an oil emulsion of micron-sized water droplets containing the BZ reactions, was shown to
show interesting synchronization properties and complex spatial patterns [Toiya et al, J. Phys. Chem. Lett. 1, 1241 (2010)]. A coupling between the droplets due to preferential diffusion of an
inhibitory reactant (bromine) in the oil medium was seen to be responsible for these collective phenomena.
In a new paper titled “Phase and frequency entrainment in locally coupled phase oscillators with repulsive interactions” in Phys. Rev. E, Physics Ph. D student Michael Giver, postdoc Zahera Jabeen
and Prof. Bulbul Chakraborty show that neighboring oscillators can be modeled as Kuramoto phase oscillators, coupled nonlinearly to their nearest neighbors. The form of the coupling chosen is repulsive, which favors out-of-phase synchronization. They show, using linear stability analysis as well as numerical study, that the stable phase patterns depend on the geometry of the lattice. A linear chain of these repulsively coupled oscillators shows anti-phase synchronization, in which neighboring oscillators show a phase difference of π. The phase difference between the neighboring
oscillators when placed on a ring, however, depends on the number of oscillators. In such a case, the locally preferred phase difference of π is ruled out for an odd number of oscillators, as this may lead to frustration. When these oscillators are placed on a triangular lattice in two dimensions, the geometry of the lattice constrains the phase difference between two neighboring oscillators to 2π/3. Interestingly, domains with different helicities form in the lattice. In each domain, the phases of any three neighboring oscillators can vary continuously in either a clockwise or an anti-clockwise direction. Hence, the phase differences between nearest neighbors are seen to be ±2π/3 in the two domains (see figure). A phase difference of π is seen at the interfaces of these
domains. These domains can grow in time, resembling domain coarsening in other statistical studies. At large coupling strengths, the domains freeze in size due to frequency synchronization of all the
oscillators. Hence, an interplay between frequency synchronization and phase synchronization was seen in this system. Ongoing studies in the BZ experimental setup at the Fraden Lab find correlations
with the above results. Hence, insights into a complex system like the BZ oscillators could be gained using the phase oscillator formalism.
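As a toy illustration of this model class (our own sketch; the paper's equations, parameters, and lattice geometry differ), repulsively coupled nearest-neighbour Kuramoto oscillators on a ring can be simulated in a few lines of Python:

import numpy as np

def step(theta, omega, K, dt=0.01):
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    dtheta = omega + K * (np.sin(left - theta) + np.sin(right - theta))
    return theta + dt * dtheta

rng = np.random.default_rng(0)
N = 9                                        # an odd ring, so pi phase shifts are frustrated
theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(20000):
    theta = step(theta, np.ones(N), K=-1.0)  # K < 0: repulsive coupling
print(np.round(np.diff(theta) % (2 * np.pi), 2))  # phase gaps between neighbours

With identical frequencies and negative coupling, the neighbour phase gaps settle away from zero, a crude analogue of the out-of-phase locking described above.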
The research was supported by the ACS Petroleum Research Fund and the Brandeis MRSEC. Michael Giver is a trainee in the Brandeis NSF-sponsored IGERT program Time, Space & Structure: Physics and
Chemistry of Biological Systems | {"url":"http://blogs.brandeis.edu/science/2011/05/24/a-lattice-of-interacting-chemical-oscillators/","timestamp":"2014-04-17T01:36:53Z","content_type":null,"content_length":"63079","record_id":"<urn:uuid:3aaa7989-c345-4966-8e37-239b2c15c2c6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Capitol Heights Geometry Tutor
Find a Capitol Heights Geometry Tutor
...I love languages, and have varying degrees of proficiency in Spanish, Russian, German, and Latin. My own love of languages translates into a desire to help others learn the intricacies of
English - its idioms, its cadence, and its beauty. I understand the anxiety that learning a foreign language can cause, as well as the special barriers that English presents for non-native
82 Subjects: including geometry, Spanish, reading, English
...I completed a B.S. degree in Applied Mathematics from GWU, graduating summa cum laude, and also received the Ruggles Prize, an award given annually since 1866 for excellence in mathematics. I
minored in economics and went on to study it further in graduate school. My graduate work was completed...
16 Subjects: including geometry, calculus, econometrics, ACT Math
...Some lack confidence, others lack motivation. Whatever the case may be, everyone has their story and the potential for improvement. If you’re interested in working with me, feel free to send me
an email and inquire about my availability.
9 Subjects: including geometry, calculus, physics, algebra 1
...The pursuit of my strong passion for mathematics, therefore, felt natural at Morehouse College where I graduated summa cum laude with a double major in Mathematics and Economics. For students
who don’t play math games and compete with siblings over who can compute the tax and tip the fastest at ...
11 Subjects: including geometry, calculus, algebra 1, algebra 2
...I was a volunteer science facilitator for five years with a non-profit organization based in Philadelphia, teaching students in grades 2-6 various science concepts through experimentation as
well as aiding in the creation and execution of city-wide science fair projects. In addition to teaching ...
14 Subjects: including geometry, reading, biology, algebra 1 | {"url":"http://www.purplemath.com/capitol_heights_md_geometry_tutors.php","timestamp":"2014-04-16T16:51:31Z","content_type":null,"content_length":"24446","record_id":"<urn:uuid:955093d3-f598-4e8a-a774-9a3de081361d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00652-ip-10-147-4-33.ec2.internal.warc.gz"} |
Enumerating levels of Grzegorczyk-hierarchy
Grzegorczyk has divided the class of primitive recursive functions to Grzegorczyk-hierarchy by their rate of growth. In this hierarchy $E_i\subset E_{i+1}$ and the subset-relation is strict. Also $\
cup_{i}E_i = Pr$, i.e. the union of all levels is equal to the class of primitive recursive functions.
I know that primitive recursive functions are recursively enumerable, but I wonder if the levels of Grzegorczyk-hierarchy are recursively enumerable, i.e. is it possible to "scan through" some level
$E_i$ or, even better, functions in $E_i\setminus E_{i-1}$?
lo.logic computability-theory reference-request
1 Answer
The usual definition of $E_n$ is in terms of basic functions, the $n$'th generator function, closed under composition, and bounded recursion. I take it that you see how an enumeration
could easily be constructed from some sort of a syntax tree, except for the difficulty that the restriction on the scheme of bounded recursion is non-syntactic. A simple idea that occurs
to me is to get around this by rewording the restriction of the scheme, e.g. given $f$,$g$,$h$, define $j$: $$j(x,0) = min(h(x,0),f(x))$$ $$j(x,n+1) = min(h(x,n+1),g(x,j(x,n)))$$ Thus
there is no syntactic restriction on bounded recursion, but the value of the new function $j$ is still semantically bounded by the prior function $h$, and therefore I'm fairly sure this
is equivalent to the usual scheme of bounded recursion.
There are also alternate characterisations of the levels of the Grzegorczyk hierarchy that are more naturally syntactic and from which an enumeration can easily be constructed. I have in mind the characterisation by Marc Wirz in terms of safe recursion.
Getting an enumeration of $E_{n+1} \setminus E_n$ runs into the following problem: define $f(m) = 0$ if Goldbach's conjecture holds up to $m$, and otherwise let $f(m)$ equal the $m$'th value of the $n+1$'st generator function. Syntactically we see $f \in E_{n+1}$, but semantically $f$ is constant-zero (and therefore in $E_0$) iff Goldbach's conjecture is true. I'm fairly sure this idea can be turned into an impossibility theorem.
I had the idea of using $min$-function but rejected it for some reason. I can't, however, see now why that wouldn't work. Thanks for bringing that up, I need to re-check my previous
work. I don't understand the last paragraph of your answer. I need to think about it. (It's not clear to me how $f$ is actually syntactically defined.) Thank you, especially for the safe-recursion pointer. – user10891 Jan 24 '11 at 7:33
| {"url":"http://mathoverflow.net/questions/52606/enumerating-levels-of-grzegorczyk-hierarchy?sort=votes","timestamp":"2014-04-18T08:48:25Z","content_type":null,"content_length":"51582","record_id":"<urn:uuid:0e746d5e-b317-4d05-b7a1-aa8b98f1e29e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
The 3x+1 problem.
The known results on the problem are most elegantly expressed in terms of iterations of the function $$T(n) = \begin{cases} n/2 & \text{if } n \text{ is even,}\\ (3n+1)/2 & \text{if } n \text{ is odd.}\end{cases}$$ One way to think of the problem involves a directed graph whose vertices are the positive integers and that has directed edges from $n$ to $T(n)$. I call this graph the Collatz graph of $T$, in honor of L. Collatz [25]. A portion of the Collatz graph of $T(n)$ is pictured in the Figure below.
(Follow this link to see the definition of the Collatz function and to experiment with Gaston Gonnet's Fast Maple Code for computing the stopping time function for the Collatz function!) A directed
graph is said to be weakly connected if it is connected when viewed as an undirected graph, i.e., for any two vertices there is a path of edges joining them, ignoring the directions on the edges. The
Conjecture can be formulated in terms of the Collatz graph as follows.
3x+1 CONJECTURE (First form).
The Collatz graph of $T$ on the positive integers is weakly connected.
We call the sequence of iterates $(n, T(n), T^{(2)}(n), \ldots)$ the trajectory of $n$. There are three possible behaviors for such trajectories when $n > 0$:
(i) convergent trajectory: some iterate $T^{(k)}(n)$ equals 1;
(ii) non-trivial cyclic trajectory: the sequence eventually becomes periodic, and no iterate equals 1;
(iii) divergent trajectory: the iterates grow without bound.
The Conjecture asserts that all trajectories of positive $n$ are convergent. It is certainly true for $n > 1$ that $T^{(k)}(n) = 1$ cannot occur without some $T^{(k)}(n) < n$ occurring. Call the least positive $k$ for which $T^{(k)}(n) < n$ the stopping time $\sigma(n)$ of $n$, and set $\sigma(n) = \infty$ if no such $k$ occurs. Also call the least positive $k$ for which $T^{(k)}(n) = 1$ the total stopping time $\sigma_\infty(n)$ of $n$, and set $\sigma_\infty(n) = \infty$ if no such $k$ occurs. We may restate the Conjecture in terms of the stopping time as follows.
3x+1 CONJECTURE (Second form).
Every integer $n \ge 2$ has a finite stopping time.
The appeal of the problem lies in the irregular behavior of the successive iterates $T^{(k)}(n)$. One can measure this behavior using the stopping time, the total stopping time, and the expansion factor $s(n)$, defined by $s(n) = \sup_k T^{(k)}(n)/n$ if $n$ has a bounded trajectory, and $s(n) = \infty$ if $n$ has a divergent trajectory. For example, $n = 27$ requires 70 iterations to arrive at the value 1. Table 1 illustrates the concepts defined so far by giving data on the iterates $T^{(k)}(n)$ for selected values of $n$.
TABLE 1. Behavior of iterates $T^{(k)}(n)$.
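The quantities defined above are easy to experiment with; a small Python sketch (ours, not from the paper):

def T(n):
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def stopping_time(n, cap=10**6):
    """Least k with T^(k)(n) < n, or None if not found within `cap` steps."""
    m = n
    for k in range(1, cap):
        m = T(m)
        if m < n:
            return k
    return None

def total_stopping_time(n, cap=10**6):
    """Least k with T^(k)(n) = 1, or None if not found within `cap` steps."""
    m = n
    for k in range(1, cap):
        m = T(m)
        if m == 1:
            return k
    return None

print(total_stopping_time(27))   # 70, matching the value quoted above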
The Conjecture has been numerically checked for a large range of values of $n$. It is an interesting problem to find efficient algorithms to test the conjecture on a computer. The current record for verifying the Conjecture seems to be held by Nobuo Yoneda at the University of Tokyo, who has reportedly checked it for all $n < 2^{40}$ [2]. In several places the statement appears that A. S. Fraenkel has checked that all $n < 2^{50}$ have a finite total stopping time; this statement is erroneous [32].
| {"url":"http://www.cecm.sfu.ca/organics/papers/lagarias/paper/html/node2.html","timestamp":"2014-04-18T08:36:06Z","content_type":null,"content_length":"11952","record_id":"<urn:uuid:5ee8fe15-b517-4c82-9076-bd7247fd7a23>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Rotor dynamical modelling and analysis of hydropower units
Abstract (Summary)
In almost all production of electricity the rotating machines serves as an important part of the energy transformation system. In hydropower units, a hydraulic turbine connected to a generator
converts the potential energy stored in the water reservoir into electrical energy in the generator. An essential part of this energy conversion is the rotating system of which the turbine and the
generator are crucial parts. During the last century the machines for production of electricity have been developed from a few megawatts per unit, up to several hundreds megawatts per unit. The
development and increased size of the hydropower machines has also brought a need for new techniques. The most important developments are the increased efficiency of the turbines and generators, new
types of bearings and the introduction of new materials. Vibration measurement is still the most reliable and commonly used method for avoiding failure during commissioning, for periodic maintenance,
and for protection of the systems. Knowledge of the bearing forces at different operational modes is essential in order to estimate the degeneration of components and to avoid failures. In the
appended Paper A, a method has been described for measurement of bearing load by use of strain gauges installed on the guide bearing bracket. This technique can determine the magnitude and direction
of both static and dynamic loads acting on the bearing. This method also makes it possible to find the cause of the radial bearing force among the various eccentricities and disturbances in the
system. This method was used in Paper C to investigate bearing stiffness and damping. A principal cause of many failures in large electrical machines is the occurrence of high radial forces due to
misalignment between rotor and stator, rotor imbalance or disturbance from the turbine. In this thesis, two rotor models are suggested for calculation of forces and moments acting on the generator
shaft due to misalignment between stator and rotor. These two methods are described in appended papers B and D. In Paper B, a linear model is proposed for an eccentric generator rotor subjected to a
radial magnetic force. Both the radial force and the bending moment affecting the generator shaft are considered when the centre of the rotor spider hub deviates from the centre of the rotor rim. The
magnetic force acting on the rotor is assumed to be proportional to the rotor displacement. In Paper D, a non-linear model is proposed for analysis of an eccentric rotor subjected to radial magnetic
forces. Both the radial and bending moments affecting the generator shaft are considered when the centre of the generator spider hub deviates from the centre of the generator rim. The magnetic forces
acting on the rotor are assumed to be a non-linear function of the air-gap between the rotor and stator. The stability analysis shows that the rotor can become unstable for small initial
eccentricities if the position of the rotor rim relative to the rotor hub is included in the analysis. The analysis also shows that natural frequencies can decrease and the rotor response can
increase if the position of the rotor rim in relation to the rotor spider is considered. In Paper E, the effect of damping rods was included in the analysis of the magnetic pull force. The resulting
force was found to be reduced significantly when the damper rods were taken into account. An interesting effect of the rotor damper rods was that they reduced the eccentricity forces and introduced a
force component perpendicular to the direction of eccentricity. The results from the finite-element simulations were used to determine how the forces affect the stability of the generator rotor.
Damped natural eigenfrequencies and the damping ratio for load and no-load conditions were investigated. When applying the forces computed in the time- dependent model, the damped natural
eigenfrequencies were found to increase and the stability of the generator rotor was found to be reduced, compared with when the forces were computed in a stationary model. Damage due to contact
between the runner and the discharge ring has been observed in several hydroelectric power units. The damage can cause high repair costs to the runner and the discharge ring as well as considerable
production losses. In Paper F a rotor model of a 45 MW hydropower unit is used for the analysis of the rotor dynamical phenomena occurring due to contact between the runner and the discharge ring for
different grades of lateral force on the turbine and bearing damping. The rotor model consists of a generator rotor and a turbine, which are connected to an elastic shaft supported by three isotropic
bearings. The discrete representation of the rotor model consists of 32 degrees of freedom. To increase the speed of the analysis, the size of the model has been reduced with the IRS method to a
system with 8 degrees of freedom. The results show that a small gap between the turbine and discharge ring can be dangerous, due to the risk of contact with high contact forces as a consequence. It
has also been observed that backward whirl can occur and in some cases the turbine motion becomes quasi-periodic or chaotic. The endurance of hydropower rotor components is often associated with the
dynamic loads acting on the rotating system and the number of start-stop cycles of the unit. Measurements, together with analysis of the rotor dynamics, are often the most powerful methods available
to improve understanding of the cause of the dynamic load. The method for measurement of the bearing load presented in this thesis makes it possible to investigate the dynamic as well as the static
loads acting on the bearing brackets. This can be done using the suggested method with high accuracy and without re-designing the bearings. During commissioning of a hydropower unit, measurements of
shaft vibrations and forces are the most reliable methods for investigating the status of the rotating system. The generator rotor models suggested in this work will increase the precision of the
calculated behaviour of the rotor. Calculation of the rotor behaviour is important before a generator is put in operation, after overhaul or when a new machine is to be installed.
Bibliographical Information:
School:Luleå tekniska universitet
School Location:Sweden
Source Type:Doctoral Dissertation
Date of Publication:01/01/2008 | {"url":"http://www.openthesis.org/documents/Rotor-dynamical-modelling-analysis-hydropower-594695.html","timestamp":"2014-04-19T15:17:05Z","content_type":null,"content_length":"14092","record_id":"<urn:uuid:9f4a8b6d-10e9-4837-9e0e-937a8dafb740>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hi all, need a simple fraction help
Hi all, need a simple fraction help
Hi guys, my name is sdfghnjhgfdcfvgbhnjhgfdsx. I was following a math site and there were certain methods they used that confused me, like which is the right one to use. Here are the examples:
7 + 2 3/4. 2 3/4 is a mixed fraction. so 4 x 2 + 3 = 11. making this
7/1 + 11/4 .. the LCD(Least Common Denominator) is 4. so they multiplied 7 and 1 by 4.
7 x 4 = 28 and 1 x 4 = 4... so
28/4 + 11/4 = 39/4 < i get it <-- This Method worked out Adding or Subtracting Fractions
Without Common Denominators
the next example...
Write the fraction as an equivalent fraction with the given denominator. 3/5 with the denominator of 20. so if 5 x 4 = 20 (as the denominator), i am told to multiply 3 x 4 = 12 to make the
fraction equivalent. so the equivalent fraction of 3/5 with a denominator of 20 is 12/20. basically multiplying both 3 and 5 by 4. <-- This Method works out Equivalent Fractions
now here's my problem. the 2 examples are pretty similar, but the 1st one wasnt just to write an equivalent fraction, it also was the addition of 2 fractions. so i'm trying to use this method for
addition or subtraction of fractions. But now I'm given this task that involves adding 2 fractions once again. but this time, their solution is not this method i just showed, it's now the
Equivalent Fraction method i showed which was simply finding out the equivalent fraction. It is now confusing me, here's the task:
3/4 + 2/5 - 7/10
so remember I'm using method 1. 20 is divisible by all 3 denominators. so like the example showed i multiply all fractions by 20. 3 x 20 = 60, 4 x 20 = 80 etc.. but now they are telling me this
isnt right. That i should use the method they used with Equivalent Fractions. So 20 is already the denominator. lets take the 1st fraction (3/4), if 4 x 5 = 20(denominator.) we must also multiply
3 by 5. so we get 15/20. i get it, but why did they not use the method they explained for addition? why have they confused me by using this other method? how will i know which one to use when
presented with tasks like these?
please help me
Re: Hi all, need a simple fraction help
The example you gave for addition had denominators of 1 and 4, so they multiplied the numerator and denominator of 7/1 both by 4 to get 28/4. This is no different than finding the least common
multiple for 1 and 4. I think what's confusing you is that the only difference between the two examples is that the first started with a mixed fraction - once it got converted to a "regular"
fraction the technique is the same as in example 2. Bottom line - once you have the things you're adding (or subtracting) as "regular" fractions, the next step is to find the LCM for the denominators.
Re: Hi all, need a simple fraction help
well im not sure. if we remove the mixed fraction and compare the 2:
7/1 + 11/4
3/4 + 2/5 - 7/10
the 1st one multiplied 7/1 both by 4, the LCM. but in the other task what's confusing me is they didn't multiply the fractions by the LCM 20 like they did in the 1st one. Instead they used 20 as the
denominator then multiplied the numerator by a number that would result in 20. Why didn't they multiply both the numerators and denominators by 20 like in the previous one?
Re: Hi all, need a simple fraction help
Because if you multiplied top and bottom of each fraction by 20, you will end up with the three fractions having different denominators. You can only add or subtract fractions when they have the
same denominator!
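To spell it out with the common denominator of 20:

\frac{3}{4} + \frac{2}{5} - \frac{7}{10} = \frac{15}{20} + \frac{8}{20} - \frac{14}{20} = \frac{15 + 8 - 14}{20} = \frac{9}{20}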
Re: Hi all, need a simple fraction help
ok that computed instantly in my head. why was i so stupid not to think of that? thanks both of you for offering me with your help
Re: Hi all, need a simple fraction help
You probably used up all of your brain power remembering your username when you logged on!
Re: Hi all, need a simple fraction help
{"url":"http://mathhelpforum.com/new-users/203010-hi-all-need-simple-fraction-help.html","timestamp":"2014-04-18T17:19:36Z","content_type":null,"content_length":"53458","record_id":"<urn:uuid:fea760ba-f3fb-4927-91eb-2a1b407288b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
MIT HST.583 2006 Lab 7(ii)
Anastasia Yendiki, Ph.D.
Lab description
The purpose of this lab is to further familiarize you with linear-model fitting of fMRI data, in particular with the interaction of paradigm-related and nuisance components of the linear model.
Lab software
We will use NeuroLens for all fMRI statistical analysis labs.
Lab data
We will use data from the self-reference functional paradigm that was presented in Lab 1. For this lab we will use all data from Subject 7, available under the course directory: /afs/athena.mit.edu/
Here's a reminder of the paradigm structure. Words are presented in a blocked design. Each run consists of 4 blocks, 2 with the self-reference condition and 2 with the semantic condition. The
conditions alternate in the ABBA format. In particular, for Subject 7 the order is:
Run 1: A=semantic, B=selfref
Run 2: A=selfref, B=semantic
Run 3: A=semantic, B=selfref
Run 4: A=semantic, B=selfref
Words are presented for 3 sec each, grouped in blocks of ten. Prior to each block the subject views a 2 sec cue describing their task for the upcoming block. Each block is followed by 10 sec of a
rest condition. This is the breakdown of a single run:
10 sec Rest
2 sec Cue
30 sec Block A (10 words, each lasts 3 sec)
10 sec Rest
2 sec Cue
30 sec Block B (10 words, each lasts 3 sec)
10 sec Rest
2 sec Cue
30 sec Block B (10 words, each lasts 3 sec)
10 sec Rest
2 sec Cue
30 sec Block A (10 words, each lasts 3 sec)
16 sec Rest
TR = 2 sec
Total run duration = 184 sec (i.e., 92 scans) per run
Lab report
The lab report must include your answers to the questions found throughout the instructions below.
Due date: 12/04/2006
Lab instructions
1. As usual, open the Subject 7 data by dragging its entire folder onto the NeuroLens icon (or by starting NeuroLens and choosing Open... from the File menu):
2. We will first explore the impact of polynomial nuisance components on the fitting of a linear model. Open the data from the first run (Series 7) and pre-process it as you learned in the first
lab. (Perform motion correction and then spatial smoothing by a Gaussian kernel with a FWHM of 6mm.)
3. Fit a linear model to the motion-corrected, smoothed images. Use the file that you created in the previous lab for the paradigm-related components of the model. This time, however, you will
incorporate polynomial nuisance components of up to third order; select 3 (Cubic) from the Polynomial drift order menu in the Model tab of the Linear Modeling action.
We want to test the statistical significance of the polynomial nuisance components. Construct a separate contrast for each of them, i.e., one contrast that selects only the constant term, one
that selects only the linear term, and so on. (The Basis functions plot window in the Model tab shows you the order of the terms in the model, so it will help you in writing out the contrasts.)
Configure the output to be -log(p) maps and run the linear model fit.
Q: Examine the four -log(p) maps. Are the four polynomial terms of equal statistical significance? Is the statistical significance of each polynomial term uniform throughout the brain volume?
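If you are curious what the linear model fit is doing internally, here is a minimal sketch of this kind of fit in R (illustrative only, not part of the lab: the block positions and the voxel time series y are stand-ins, not the actual self-reference paradigm):

# Minimal GLM sketch for one voxel: paradigm regressor + cubic polynomial drift
n <- 92                                   # scans in one run (TR = 2 s)
t <- seq_len(n)
task <- rep(0, n)                         # stand-in box-car for one condition
task[7:21] <- 1; task[53:67] <- 1         # made-up block positions
drift <- poly(t, 3)                       # linear, quadratic and cubic drift terms
y <- rnorm(n)                             # stand-in voxel time series
fit <- lm(y ~ task + drift)               # constant (0th-order) term added automatically
pvals <- summary(fit)$coefficients[, "Pr(>|t|)"]
-log10(pvals)                             # one -log(p) value per model term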
4. We will now quantify the significance of the four polynomial terms by calculating the fraction of the total voxels in the volume where each polynomial term has a statistically significant value.
Go to the window showing the EPI images and choose the -log(p) map of one of the polynomial terms from the Overlay toolbar item at the top of the window. We will use the -log(p) map as a ROI to
count the number of voxels in the volume where the -log(p) map takes an absolute value greater than 3.
Click on the Inspector toolbar item. Click on the ROI tab of the Inspector window. Check Compute ROI Statistics. In the Thresholding box, enter 3 as the Lower bound and a value much higher than
all values in all the -log(p) maps (e.g., 1000) as the Upper bound. (Every time you change one of the threshold values, you will have to press TAB for the ROI statistics to be updated!) From the box
of ROI statistics, record the Number of Voxels.
Repeat for each of the four -log(p) maps. (Every time you change the overlay to a different map, you will have to uncheck and recheck Compute ROI Statistics for the ROI statistics to be updated!)
Using the same method, find the number of voxels where the -log(p) map takes values less than -3. Repeat for each of the four -log(p) maps.
Q: Record the numbers of voxels where each polynomial term is statistically significant at a -log(p) level greater than 3 or less than -3. Use these numbers to calculate the fraction of the total
voxels in the 3D volume where each term is statistically significant.
│ Order of poly term │ #voxels > 3 │ #voxels < -3 │ (#voxels > 3 or < -3) / (# total voxels) │
│ 0 │ │ │ │
│ 1 │ │ │ │
│ 2 │ │ │ │
│ 3 │ │ │ │
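If you prefer to script this step, the same count is immediate once a map is exported as a numeric array; a sketch in R, with logp standing for the exported -log(p) map (a hypothetical name):

# Fraction of voxels significant at |-log(p)| > 3
nsig <- sum(logp > 3, na.rm = TRUE) + sum(logp < -3, na.rm = TRUE)
nsig / sum(!is.na(logp))                  # fraction of the total voxels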
5. We will now explore how the addition of polynomial terms to the linear model affects inference on the paradigm-related terms of the model. We will analyze the data by gradually increasing the
number of polynomial terms in the model and we will measure the significance of the model term corresponding to the self-reference condition.
Fit a linear model to the motion-corrected, smoothed images from run 1. Specify a contrast of the self-reference condition vs. baseline. Set the Polynomial drift order to 0. Configure the output
to be -log(p) maps and run the linear model fit. Save the -log(p) map of the self-reference condition vs. baseline.
Repeat the fit three more times, each time increasing the maximum order of the polynomial drift terms to 1, 2 and 3. Save the resulting -log(p) maps.
6. We will now measure the maximum of the -log(p) map within certain ROIs as a function of the maximum order of the polynomial drift terms in the linear model.
Open the visual cortex ROI that you will find under /afs/athena.mit.edu/course/other/hst.583/Data2006/selfRefOld/subj7,run1,roi,visual,lab7ii.mnc
To locate the ROI in the 3D volume, you can check Show all slices at the bottom of the window displaying the ROI file:
Command-click on some voxel within the ROI and check the statistical maps to make sure that the ROI corresponds to a cluster of active voxels. Go back to the window displaying the ROI file. From
the Action toolbar item, choose ROI Statistics. A window will appear with a list of all open files (except the ROI file itself):
In the window above you have to choose the images from which you want to extract ROI statistics. Select the four -log(p) maps of the self-reference-vs.-baseline contrast that you obtained in the
previous step. (In the list of file names, you can Shift-click to select a set of file names or Command-click to add a file name to the selection.)
Enter 1 as the Lower threshold (and then press TAB as usual!) This will make sure that the statistics are only calculated over the voxels where the ROI takes a non-zero value. (If you left the
lower threshold set to 0, you would get statistics from the entire 3D volume.) Leave the Upper threshold as is.
Click on the OK button to compute ROI statistics in the four statistical maps. The result should come up in a window like the following:
Q: Record the maximum value of the -log(p) map within the visual-cortex ROI as a function of the maximum order of polynomial drift terms in the linear model.
To copy the values directly to your lab report, you can select all four rows, click on the copy selected rows to clipboard button, and then paste to your document. You only need to keep the last
value (Max) from each row.
7. Repeat the previous step for three more ROIs.
□ ROI in the motor cortex: /afs/athena.mit.edu/course/other/hst.583/Data2006/selfRefOld/subj7,run1,roi,motor,lab7ii.mnc
□ ROI in the medial prefrontal cortex (MPFC): /afs/athena.mit.edu/course/other/hst.583/Data2006/selfRefOld/subj7,run1,roi,mpfc,lab7ii.mnc
□ ROI in the cerebellum: /afs/athena.mit.edu/course/other/hst.583/Data2006/selfRefOld/subj7,run1,roi,cerebel,lab7ii.mnc
Q: Record the maximum value of the -log(p) map within each ROI as a function of the maximum order of polynomial drift terms in the linear model.
│ Highest-order poly term │ Max -log(p) visual │ Max -log(p) motor │ Max -log(p) MPFC │ Max -log(p) cerebellar │
│ 0 │ │ │ │ │
│ 1 │ │ │ │ │
│ 2 │ │ │ │ │
│ 3 │ │ │ │ │
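The quantity tabulated above is easy to reproduce outside NeuroLens as well; in R, with logp the exported map and roi the exported mask (both hypothetical names):

# Maximum -log(p) over the voxels inside the ROI
max(logp[roi > 0], na.rm = TRUE)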
Q: Show plots of the maximum -log(p) vs. polynomial order for each of the four ROIs. Do all ROIs exhibit the same trend as the order of the polynomial drift terms increases? If you had to choose
the maximum polynomial order in your model based on the comparisons that you performed in this lab, what would you choose?
< Previous lab Next lab > | {"url":"http://web.mit.edu/hst.583/www/lab7ii.html","timestamp":"2014-04-17T18:44:14Z","content_type":null,"content_length":"12247","record_id":"<urn:uuid:a349aae3-00c2-480d-9327-7118dc9c692b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
Refereed journal publications (published, in press, accepted)
Refereed journal publications (in review)
1. Enhancing Privacy and Accuracy in Probe Vehicle Based Traffic Monitoring via Virtual Trip Lines, B. Hoh, T.Iwuchukwu, Q. Jacobson, M. Gruteser, A. Bayen, J-.C. Herrera, R. Herring, D. Work, M.
Annavaram, J. Ban, in review, IEEE Transactions on Mobile Computing, April 2010
2. Analytical and grid-free solutions to the Lighthill-Whitham-Richards traffic flow model, P.-E. Mazare, C. G. Claudel and A. M. Bayen, Submitted to Transportation Research B, 2010.
3. Solutions to inverse modeling problems involving Hamilton-Jacobi equations using Linear Programming, C. G. Claudel, Timothee Chamoin, and A. M. Bayen, Submitted to IEEE Transactions on Control
Systems Technology, 2010.
Refereed conference publications
1. Impacts of the mobile internet on transportation cyberphysical systems: traffic monitoring using smartphones, D. Work and A. Bayen, National Workshop for Research on High-Confidence
Transportation Cyber-Physical Systems: Automotive, Aviation and Rail, Washington, DC, November 18-20, 2008
2. Automotive cyber physical systems in the context of human mobility, D. Work, A. Bayen, and Q. Jacobson, National Workshop on High-Confidence Automotive Cyber-Physical Systems, Troy, MI, April
3-4, 2008
3. Mobile Century - using GPS mobile phones as traffic sensors: A field experiment, S. Amin et al., 15th World Congress on Intelligent Transportation Systems, New York, NY, Nov. 2008 | {"url":"http://traffic.berkeley.edu/project/publications","timestamp":"2014-04-16T04:10:25Z","content_type":null,"content_length":"16915","record_id":"<urn:uuid:6fca6a2d-e851-4ebb-8a05-13755266cd40>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trading Strategy – Buy on Gap (EPChan)
This post is going to investigate a strategy called Buy on Gap that was discussed by E.P Chan in his blog post “the life and death of a strategy”. The strategy is a mean reverting strategy that looks
to buy the weakest stocks in the S&P 500 at the open and liquidate the positions at the close. The performance of the strategy is seen in the image below, Annualized Sharpe Ratio (Rf=0%) 2.129124.
All numbers in this table are % (i.e. 12.6 is 12.6%)
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec BuyOnGap S&P500
2005 0.0 0.0 0.0 0.0 0.0 -0.4 -0.6 1.1 0.7 1.0 -0.2 0.3 1.8 -0.1
2006 0.2 -0.6 -0.3 0.0 1.1 0.1 0.0 0.4 0.1 0.1 0.2 -0.2 1.1 -2.0
2007 0.8 0.9 0.1 -1.1 0.3 -0.2 -1.5 0.2 -0.2 0.9 -0.4 0.3 0.0 1.0
2008 4.3 -1.9 0.8 -0.5 0.0 0.7 0.2 -0.7 2.0 3.3 2.0 2.0 12.6 6.0
2009 -2.9 0.1 1.4 -1.1 1.3 -0.8 0.4 0.0 -0.3 -1.9 0.8 -1.0 -4.0 -7.3
2010 -1.0 -0.1 0.0 -1.1 -0.7 -0.6 1.1 0.7 -0.5 0.6 0.5 0.1 -1.1 -5.9
2011 0.6 0.3 0.2 -0.1 0.2 0.4 0.7 0.1 -1.4 -1.2 1.4 -0.2 1.0 2.1
2012 -0.3 -0.5 -0.1 0.0 -1.0 NA NA NA NA NA NA NA -1.8 -0.8
From the post two trading criterion were mentioned:
1. Buy the 100 stocks out of the S&P 500 constituents that have the lowest returns from their previous day's low to the current day's opening price
2. Provided that the above return is less than 1 times the 90-day standard deviation of close-to-close returns
The criterion are fairly specific however it is important to write flexible code where it is easy to change the main model parameters, below is a list of variable names that specify the parameters in
the R script:
• nStocksBuy – How many stocks to buy
• stdLookback – How many days to look back for the standard deviation calculation
• stdMultiple – Number to multiply the standard deviation by (was 1 in criterion 2.), the larger this variable the more stocks that will satisfy criterion 2.
The code is split into 5 distinct sections.
Section 1: Loop through all the stocks loaded from the data file; for each stock calculate the return from the previous day's low to the current day's open (lowOpenRet); note this uses the low, not the close, matching the code below. Calculate the close-to-close return and its rolling
standard deviation (stdClClRet). Also calculate the open-to-close return for every day (dayClOpRet); if we decide to trade this day, this would be the return of the strategy for the day.
Section 2: This section combines columns from each of the individual stock data frames into large matrices that cover all the stocks. retMat contains the lowOpenRet for each stock. stdMat contains
the stdClClRet for all stocks, dayretMat contains the dayClOpRet for all stocks.
Essentially instead of having lots of variables, we combine them into a big matrix.
Section 3: This will check if matrices in section 2 match the trade entry criterion. This section produces two matrices (conditionOne and conditionTwo). The matrices contain a 1 for a passed entry
criterion and a 0 for a failed entry criterion.
Section 4: This multiplies conditionOne with conditionTwo to give conditionsMet; since those matrices are binary, multiplying them together identifies the regions where both conditions passed (1*1
=1, i.e. a pass). This means enter a trade.
conditionsMet is then used as a mask: it has 1's when a trade should occur and 0's when no trade should happen. So multiplying this with dayClOpRet gives us the open-to-close daily returns for all
days and stocks that a trade occurred on.
The script assumes capital is split equally between all the stocks that are bought at the open; if fewer than 100 stocks meet the entry criteria, it is acceptable to buy fewer.
Section 5: This section does simple performance analytics and plots the equity curve against the S&P 500 index.
Onto the code (note the datafile is generated in Stock Data Download & Saving R):
#install.packages("caTools") #for rolling standard deviation
library("PerformanceAnalytics") #Load the PerformanceAnalytics library
datafilename = "stockdata.RData"
stdLookback <- 90 #How many periods to lookback for the standard deviation calculation
stdMultiple <- 1 #A Number to multiply the standard deviation by
nStocksBuy <- 100 #How many stocks to buy
#CONDITION 1
#Buy 100 stocks with lowest returns from their previous days lows
#To the current days open
#CONDITION 2
#Provided returns are lower than one standard deviation of the
#90 day moving standard deviation of close close returns
#Exit long positions at the end of the day
#SECTION 1
symbolsLst <- ls(stockData)
#Loop through all stocks in stockData and calculate required returns / stdev's
for (i in 1:length(symbolsLst)) {
cat("Calculating the returns and standard deviations for stock: ",symbolsLst[i],"\n")
sData <- eval(parse(text=paste("stockData$",symbolsLst[i],sep="")))
#Rename the columns, there is a bug in quantmod if a stock is called Low then Lo() breaks!
#Ie if a column is LOW.x then Lo() breaks
oldColNames <- names(sData)
colnames(sData) <- c("S.Open","S.High","S.Low","S.Close","S.Volume","S.Adjusted")
#Calculate the return from low of yesterday to the open of today
lowOpenRet <- (Op(sData)-lag(Lo(sData),1))/lag(Lo(sData),1)
colnames(lowOpenRet) <- paste(symbolsLst[i],".LowOpenRet",sep="")
#Calculate the n day standard deviation from the close of yesterday to close 2 days ago
stdClClRet <- runsd((lag(Cl(sData),1)-lag(Cl(sData),2))/lag(Cl(sData),2),k=stdLookback,endrule="NA",align="right")
stdClClRet <- stdMultiple*stdClClRet + runmean(lag(Cl(sData),1)/lag(Cl(sData),2),k=stdLookback,endrule="NA",align="right")
colnames(stdClClRet) <- paste(symbolsLst[i],".StdClClRet",sep="")
#Not part of the strategy but want to calculate the Close/Open ret for current day
#Will use this later to evaluate performance if a trade was taken
dayClOpRet <- (Cl(sData)-Op(sData))/Op(sData)
colnames(dayClOpRet) <- paste(symbolsLst[i],".DayClOpRet",sep="")
colnames(sData) <- oldColNames
eval(parse(text=paste("stockData$",symbolsLst[i]," <- cbind(sData,lowOpenRet,stdClClRet,dayClOpRet)",sep="")))
#SECTION 2
#Have calculated the relevant returns and standard deviations
#Now need to to work out what 100 (nStocksBuy) stocks have the lowest returns
#Make a returns matrix
for (i in 1:length(symbolsLst)) {
cat("Assing stock: ",symbolsLst[i]," to the returns table\n")
sDataRET <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".LowOpenRet\"]",sep="")))
sDataSTD <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".StdClClRet\"]",sep="")))
sDataDAYRET <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".DayClOpRet\"]",sep="")))
if(i == 1){
retMat <- sDataRET
stdMat <- sDataSTD
dayretMat <- sDataDAYRET
} else {
retMat <- cbind(retMat,sDataRET)
stdMat <- cbind(stdMat,sDataSTD)
dayretMat <- cbind(dayretMat,sDataDAYRET)
#SECTION 3
#CONDITON 1 test output (0 = failed test, 1 = passed test)
#Now will loop over the returns matrix finding the nStocksBuy smallest returns
conditionOne <- retMat #copying the structure and data, only really want the structure
conditionOne[,] <- 0 #set all the values to 0
for (i in 1:length(retMat[,1])){
orderindex <- order((retMat[i,]),decreasing=FALSE) #order row entries smallest to largest
orderindex <- orderindex[1:nStocksBuy] #want the smallest n (nStocksBuy) stocks
conditionOne[i,orderindex] <- 1 #1 Flag indicates entry is one of the nth smallest
#CONDITON 2
#Check the low-to-open return is less than the 90 day standard deviation threshold
conditionTwo <- retMat #copying the structure and data, only really want the structure
conditionTwo[,] <- 0 #set all the values to 0
conditionTwo <- retMat/stdMat #If ClOp ret is < StdRet tmp will be < 1
conditionTwo[is.na(conditionTwo)] <- 2 #GIVE IT FAIL CONDITION JUST STRIPPING NAs here
conditionTwo <- apply(conditionTwo,1:2, function(x) {if(x<1) { return (1) } else { return (0) }})
#SECTION 4
#CHECK FOR TRADE output (1 = passed conditions for trade, 0 = failed test)
#Can just multiply the two conditions together since they're boolean
conditionsMet <- conditionOne * conditionTwo
colnames(conditionsMet) <- gsub(".LowOpenRet","",names(conditionsMet))
#Lets calculate the results
tradeMat <- dayretMat
colnames(tradeMat) <- gsub(".DayClOpRet","",names(tradeMat))
tradeMat <- tradeMat * conditionsMet
tradeMat[is.na(tradeMat)] <- 0
tradeVec <- as.data.frame(apply(tradeMat, 1,sum) / apply(conditionsMet, 1,sum)) #Calculate the mean for each row
colnames(tradeVec) <- "DailyReturns"
tradeVec[is.nan(tradeVec[,1]),1] <- 0 #Didnt make or loose anything on this day
plot(cumsum(tradeVec[,1]),xlab="Date", ylab="EPCHAN Buy on Gap",xaxt = "n")
#SECTION 5
#### Performance Analysis ###
#Get the S&P 500 index data
indexData <- new.env()
startDate = as.Date("2005-01-13") #Specify what date to get the prices from
getSymbols("^GSPC", env = indexData, src = "yahoo", from = startDate)
#Calculate returns for the index
indexRet <- (Cl(indexData$GSPC)-lag(Cl(indexData$GSPC),1))/lag(Cl(indexData$GSPC),1)
colnames(indexRet) <- "IndexRet"
zooTradeVec <- cbind(as.zoo(tradeVec),as.zoo(indexRet)) #Convert to zoo object
colnames(zooTradeVec) <- c("BuyOnGap","S&P500")
#Lets see how all the strategies faired against the index
charts.PerformanceSummary(zooTradeVec,main="Performance of EPCHAN Buy on Gap",geometric=FALSE)
#Lets calculate a table of monthly returns by year and strategy
cat("Calendar Returns - Note 13.5 means a return of 13.5%\n")
table.CalendarReturns(zooTradeVec)
#Lets make a boxplot of the returns
chart.Boxplot(zooTradeVec)
#Set the plotting area to a 2 by 2 grid
par(mfrow = c(2, 2))
#Plot various histograms with different overlays added
chart.Histogram(zooTradeVec, main = "Plain", methods = NULL)
chart.Histogram(zooTradeVec, main = "Density", breaks=40, methods = c("add.density", "add.normal"))
chart.Histogram(zooTradeVec, main = "Skew and Kurt", methods = c("add.centered", "add.rug"))
chart.Histogram(zooTradeVec, main = "Risk Measures", methods = c("add.risk"))
Possible Future Modifications
• Add shorting the strongest stocks so that the strategy is market neutral (a rough sketch of this follows the list below)
• Vary how many stocks to hold
• Vary the input variables (discussed above)
• Try a different asset class, does this work for forex? | {"url":"http://gekkoquant.com/2012/06/01/trading-strategy-buy-on-gap-epchan/","timestamp":"2014-04-16T17:14:05Z","content_type":null,"content_length":"69999","record_id":"<urn:uuid:2eebb680-36e3-466d-81b5-65b468d58698>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
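Here is a rough, untested sketch of the first modification (a market-neutral version that also shorts the nStocksBuy strongest stocks). It reuses retMat, dayretMat, tradeVec and nStocksBuy from the script above; conditionShort, shortMat, shortVec and neutralVec are new names introduced here:

#Sketch only - short the strongest stocks to make the strategy market neutral
conditionShort <- retMat
conditionShort[,] <- 0
for (i in 1:length(retMat[,1])){
  orderindex <- order(retMat[i,],decreasing=TRUE) #largest low-to-open returns
  conditionShort[i,orderindex[1:nStocksBuy]] <- 1
}
shortMat <- -1 * dayretMat * conditionShort #a short earns minus the long return
shortMat[is.na(shortMat)] <- 0
shortVec <- apply(shortMat,1,sum) / apply(conditionShort,1,sum)
shortVec[is.nan(shortVec)] <- 0
neutralVec <- 0.5*tradeVec[,1] + 0.5*shortVec #half capital long, half short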
Palmer Township, PA Calculus Tutor
Find a Palmer Township, PA Calculus Tutor
...Although technically, you don't actually need to know much about these subjects to be successful on the test. Every clue you need to solve a problem is provided in the question, and some
clever sleuthing is often all you need to get the answer. The MCAT is by far the toughest standardized test you'll ever take.
34 Subjects: including calculus, English, writing, physics
...First of all, I recently moved to the Bethlehem area from Maryland, where I spent 4 years as a math teacher. I got married, and we moved to this area for my husband's work. While my degree is in
Biology, I also have experience in Mathematics.
35 Subjects: including calculus, chemistry, geometry, biology
...PowerPoint is one of Microsoft's best programs. You will be amazed at how easy it will be to familiarize yourself with the various aspects of this program. An understanding of algebra is a
foundational skill to virtually all topics in higher-level mathematics, and it is useful in science, statistics, accounting, and numerous other professional and academic areas. 1.
27 Subjects: including calculus, geometry, statistics, algebra 1
...I have found in my studies that to be good at anything, practice is necessary. I find running through example problems to be the most effective way of learning a subject. Please let me know if
you would like to know more about me!-LyleI have taken many chemistry courses.
10 Subjects: including calculus, chemistry, physics, precalculus
...From working with biological scientists in National Cancer Institute research, I can help you understand the real-life applications for understanding logarithms, exponentials, and
combinatorics. Exponentials and logarithms are important for analyzing cell populations. I also know how to teach combinatorics clearly.
13 Subjects: including calculus, reading, writing, physics
Related Palmer Township, PA Tutors
Palmer Township, PA Accounting Tutors
Palmer Township, PA ACT Tutors
Palmer Township, PA Algebra Tutors
Palmer Township, PA Algebra 2 Tutors
Palmer Township, PA Calculus Tutors
Palmer Township, PA Geometry Tutors
Palmer Township, PA Math Tutors
Palmer Township, PA Prealgebra Tutors
Palmer Township, PA Precalculus Tutors
Palmer Township, PA SAT Tutors
Palmer Township, PA SAT Math Tutors
Palmer Township, PA Science Tutors
Palmer Township, PA Statistics Tutors
Palmer Township, PA Trigonometry Tutors
Nearby Cities With calculus Tutor
Alpha, NJ calculus Tutors
Bethlehem, PA calculus Tutors
Catasauqua calculus Tutors
Easton, PA calculus Tutors
Forks Township, PA calculus Tutors
Freemansburg, PA calculus Tutors
Glendon, PA calculus Tutors
Harmony Township, NJ calculus Tutors
Nazareth, PA calculus Tutors
New Hanover Twp, PA calculus Tutors
Phillipsburg, NJ calculus Tutors
Riegelsville calculus Tutors
Stockertown calculus Tutors
Tatamy calculus Tutors
West Easton, PA calculus Tutors | {"url":"http://www.purplemath.com/Palmer_Township_PA_calculus_tutors.php","timestamp":"2014-04-17T13:46:01Z","content_type":null,"content_length":"24536","record_id":"<urn:uuid:8ec2b3f4-50ad-4eb2-8898-75f6762551ee>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section 8.7
Section 8.7: Exploring Fourier Transforms by Matching
Wave Function 1 | Wave Function 2 | Wave Function 3 | Wave Function 4 | Wave Function 5 | Wave Function 6
Shown are six Gaussian wave functions at t = 0 in color-as-phase representation. In the animation, ħ = 2m = 1.
1. If the wave functions shown are in position space, what do the momentum-space wave functions look like?
2. If the wave functions shown are in momentum space, what do the position-space wave functions look like?
Draw your answers making sure to label axes and to represent the phase of the resulting wave function as lines across your wave function. Once you do so, check your answers with the animation
provided below.
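As a check on your sketches, recall the standard transform pair for a Gaussian (a textbook result, not specific to this applet; here ħ = 1 as in the animation):

\psi(x) = (2\pi\sigma_x^2)^{-1/4}\, e^{-x^2/(4\sigma_x^2)} \quad\longleftrightarrow\quad \phi(p) = (2\pi\sigma_p^2)^{-1/4}\, e^{-p^2/(4\sigma_p^2)}, \qquad \sigma_p = \frac{\hbar}{2\sigma_x}

so a narrow Gaussian in one space is a wide Gaussian in the other, with σx σp = ħ/2, and a position offset x0 appears in momentum space only as a linear phase e^{-i p x0/ħ} (and vice versa), which is what the color-as-phase stripes encode.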
{"url":"http://www.compadre.org/PQP/quantum-theory/section8_7.cfm","timestamp":"2014-04-20T18:47:25Z","content_type":null,"content_length":"25093","record_id":"<urn:uuid:18d3483b-2dbe-41ba-b7d8-297f9f91fb76>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Springfield, PA Algebra 2 Tutor
Find a Springfield, PA Algebra 2 Tutor
...In high school I scored 1550/1600 (780M, 770V) on the SAT and in January 2013 I scored 2390/2400 (800M, 790R, 800W). Yes, I still take the tests to make sure I am teaching the appropriate
material. I am available to tutor for the ACT and SAT (all subject areas). Please note that I am available o...
19 Subjects: including algebra 2, calculus, statistics, geometry
...I am a graduate from the University of Pittsburgh with a B.S degree in Pre-Med and a minor in Chemistry which requires knowledge of advanced math. I had a 3.4 GPA. I have tutored math and
sciences in many volunteer and job opportunities.
13 Subjects: including algebra 2, chemistry, geometry, biology
...As a teacher, I believe in a balanced based approach between the "new math" and traditional teaching methods. I believe students understand math better when they see the real-life application
of it in the real world. But, I also whole-heartedly believe the basics are essential.
12 Subjects: including algebra 2, geometry, algebra 1, trigonometry
I graduated from Rensselaer Polytechnic Institute in 2010 with a Bachelor's in chemical engineering. I've worked both as a mechanical engineer for one of the country's nuclear power laboratories
and as a teacher assistant in two school districts in Upstate New York. Before tutoring for WyzAnt, I t...
25 Subjects: including algebra 2, chemistry, writing, geometry
...I would be happy to share that love with any student if they wished. While tutoring French, I focus on drawing parallels between French and other languages, particularly English, to enhance
the retention of meaning. I primarily focus on increasing the ability to communicate and confidence in one's abilities to do so.
33 Subjects: including algebra 2, English, French, physics
Related Springfield, PA Tutors
Springfield, PA Accounting Tutors
Springfield, PA ACT Tutors
Springfield, PA Algebra Tutors
Springfield, PA Algebra 2 Tutors
Springfield, PA Calculus Tutors
Springfield, PA Geometry Tutors
Springfield, PA Math Tutors
Springfield, PA Prealgebra Tutors
Springfield, PA Precalculus Tutors
Springfield, PA SAT Tutors
Springfield, PA SAT Math Tutors
Springfield, PA Science Tutors
Springfield, PA Statistics Tutors
Springfield, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Springfield_PA_algebra_2_tutors.php","timestamp":"2014-04-20T19:18:42Z","content_type":null,"content_length":"24338","record_id":"<urn:uuid:6a9febbc-e862-4ef6-8b40-a6c828848405>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Walk Like a Sabermetrician
This series is pretty disjointed, and this entry will be no exception. I have realized that in this piece I will discuss some of the assumptions that influence my thinking on the other issues I have
discussed, so this probably should have come first.
The major point of this installment is that we should state our assumptions and our goals before we begin so that we can make the right choices. The right choice will be different, potentially wildly
different, depending on what we are setting out to do. This series purports to choose which rate stat is best to use for an individual batter. But what is best?
It depends on what you are trying to measure of course. A frivolous example is that ISO is a good metric if you are trying to measure power, but a horrible metric if you are trying to measure on base
ability. We first must define what the properties of a good rate stat for a batter would be. If we use a different definition, we will get different answers.
What I have used as my definition, throughout this series without explicitly stating it(which was a mistake), is that the true measure of a batter’s production is how many runs an otherwise average
team would score if we added the batter to it. Actually, wins instead of runs ideally, but adding more runs will almost always add more wins for a team that begins at average.
What I want to do is look at a team that scored, say, 750 runs in a league where the average team scored 750 runs, and add one player to that team, and give him one-ninth of the team plate
appearances, and see how many runs that team will score with the player added. We will account for both the player’s impact on the team scoring rate and the number of plate appearances that they
have. A “good” rate stat will be one that accurately reflects the rank order of players when using this criteria and as a bonus, if it could accurately reflect the magnitude of the players’
contributions. In other words, if we find that Batter A will add 50 runs to a team and Batter B will add 45 runs, our ideal rate stat would rank A ahead of B, but not by an enormous amount--in fact,
by a margin that if converted to runs above average would be about five.
If you start with different assumptions, you may get different answers. For example, if your goal was to find out how many runs a batter would add to a team filled with replacement-level players, you
will likely reach similar conclusions about which rate stat is superior, but you may not, especially for close calls. If your goal is to estimate a batter’s contribution if he does not affect the
team’s run environment, you will potentially get different answers. If your goal is to estimate an individual’s ability on a realistic range of different teams, you will have a much more complicated
probabilistic function and get potentially different answers. And on and on. But for my purposes, I have defined the goal above, and every comment I make about a rate stat being “right” or “wrong”,
“better” or “worse”, etc., is based on that assumption.
With that out of the way, we can start to tackle the issue of comparing players to different baselines. I have written a long article on my site entitled “Baselines” which talks at length about
various ways people have set baselines, why they have done so, which I prefer, etc, so I will not repeat that here. Instead, I will point out that based on the assumption I gave above, about adding a
player to an average team, the most obvious choice is to compare to the league average. This would be .500 in OW% terms(while most sabermetricians acknowledge OW% as faulty, it is still convenient to
use the terminology, so long as we understand it is just a shorthand and do not start building bridges based on it).
So the baseline I will look at is average. This is also convenient because the other baselines are not as straightforward to apply. Later we will see a rate stat, R+/PA, that requires not only R/PA
but also a comparison of OBA to the league average. If you want to apply a replacement baseline to this, all sorts of sticky problems arise. First of all, when most sabermetricians say the
“replacement level” is a .350 OW%, they are defining OW% by R/O as we did in the last installment. So you need to convert the .350 OW% into a runs/PA ratio. And then you still have to deal with the
OBA. Do you still use league average OBA in the R+/PA formula, and then compare an individual’s R+/PA to a replacement player’s R+/PA, or do you compare the player’s OBA directly to a replacement
player’s OBA? And what is a replacement player’s OBA anyway? That answer is tied directly to your answer to what is a replacement player’s R/PA, since R/O = (R/PA)/(1 - OBA). But how did you answer
that? What assumptions are you making about a replacement player? Is he a certain X% below the league average in hitting singles, doubles, etc? Or is he around 95% of the league average in terms of
singles with bigger losses in secondary skills? And how does his sacrifice bunt rate compare to the league? Is it higher? Does he hit into more double plays, or does he strike out more and hit into
Those are all useful questions to ask if you are serious about applying a replacement level type analysis. But they make life a lot more complicated. Average, while it may well be flawed, has the
advantage of being very clean. It is a mathematically defined fact rather then a calculated value based on a series of assumptions.
So we’re using average, if for nothing else then to make this discussion manageable. This does not mean that I advocate using average as the baseline for all of the types of questions you want to
answer, or even many types of questions. But I do think that average is a good starting point for theoretical discussion, especially since, again, it is the only choice for which we know all of the
parameters we need to know. Now what do I mean by applying a baseline anyway?
All it means is that we compare the player’s performance to the performance of a baseline (in this case average) player. If a player creates 100 runs in 400 outs, and an average player would create
75 runs in 400 outs, then he is +25 runs above average (or +.0625 RAA/PA). Now since this is a series primarily about rate stats, the second format is a rate, and is more useful to us. But if you
want to go from “rate” to “value” or include playing time, then you are going to want to use some sort of baseline.
Sometimes, it may be useful to use the baseline even if we do not convert our rate stat to take playing time into account. For example, suppose we are have determined that a team would score 800 runs
with our player and 750 without him. We could leave that as +50, which would be a rate if we have made some constant assumption about how much playing time he will get. For example, the simplest
version of Marginal Lineup Value(link) assumes that the player got 1/9 of the team plate appearances, and was expressed as the number of runs he would add over the course of a full season. While it
is not a format that one usually sees, a +50 MLV is still a rate--it's a rate of runs added/season.
And that leads to another point about rates. Since people are used to seeing a rate expressed as runs per out, or runs per PA, they will sometimes have a negative initial reaction to a rate which
does not look like that. Like the MLV rate. Or like RAA/PA, a very important rate stat we’ll discuss later. That can have negatives, of course (as can MLV). And you can no longer divide them. For
example, a player with 6 R/G in a 4.5 R/G is often written as 1.33. The relative stat in this way is instantly adjusted for league context, and people like percentages. But if you are working with
RAA/PA, you can’t express it as a percentage of the league, because the league is zero. You can’t say a player who is +.08 RAA/PA is -3.846 times better then a player who is -.02 RAA/PA. RAA/PA must
be compared relatively as differences.
I will expound on that topic more in an upcoming installment. The point here is just that a figure like that is every bit as much a potential choice of rate stat as the formats that people are used
to seeing. And that when you bring the baseline into it, the difference is just the total above the baseline divided by some unit of playing time, while a ratio needs to be manipulated to be in that
kind of format. So if your total stat of choice is RAR, you might want your rate stat of choice to be expressed in the same units. The difference allows you to do what the ratio cannot.
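A quick numeric illustration of the ratio-versus-difference point, using the figures above (just arithmetic, shown here in R):

# Rates with a nonzero baseline can be compared as ratios
6.0 / 4.5            # player at 6 R/G in a 4.5 R/G league -> 1.33
# Rates measured above a zero baseline cannot
0.08 - (-0.02)       # +0.10 RAA/PA difference - meaningful
0.08 / (-0.02)       # -4 - a meaningless "ratio" across zero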
2 comments:
1. David SmythFebruary 12, 2006 at 5:58 AM
Nice article. But when are you gonna stop beating around the bush and get to the crux? :}
2. Actually, that's a good question. I think that this is overly verbose. The next installment will talk about R+/PA.
Comments are moderated, so there will be a lag between your post and it actually appearing. I reserve the right to reject any comment for any reason. | {"url":"http://walksaber.blogspot.com/2006/02/rate-stat-series-pt-5.html","timestamp":"2014-04-17T21:23:36Z","content_type":null,"content_length":"102798","record_id":"<urn:uuid:b34f49bc-d4e2-49b7-a395-c5abd4116d28>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00334-ip-10-147-4-33.ec2.internal.warc.gz"} |
A bag has 4 green marbles, 3 red marbles, and 3 yellow marbles. What is the probability that you pick a red marble, do not replace it, and pick another red marble?
How many red marbles are there @4everjames ?
@waterineyes 3
Similar to last question, just some changes: In the beginning, the probability of a red marble is 3/10 After you've picked the first one and do not replace it, the probability of a red marble is
2/9. So your statement would be: Probability of red and red = Probability of 1st red * probability of 2nd red = (3/10) * (2/9) = 6/90 = 1/15
Yes.. And what are total number of marbles that bag has?
@waterineyes 10
@mubzz thanksz!!!! i get it now thanks to u and @waterineyes
An easy tip to solve all probability questions: When it says (prob of x) AND (prob of y) you multiply, i.e. (prob of x)*(prob of y) When it says (prob of x) OR (prob of y) you add, i.e. (prob of
x)+(prob of y)
ok just for review (A bag has 5 red marbles, 6 blue marbles and 4 black marbles. What is the probability of picking a red marble, replacing it, and then picking a black marble?
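With replacement the two picks are independent, so the same multiplication rule applies directly:

P(\text{red, then black}) = \frac{5}{15} \cdot \frac{4}{15} = \frac{20}{225} = \frac{4}{45}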
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ff3a95de4b01c7be8c74ef7","timestamp":"2014-04-18T21:28:49Z","content_type":null,"content_length":"44800","record_id":"<urn:uuid:6952b6bc-f43a-4f6d-800c-dd85fafae306>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
2.6 Motion with Constant Acceleration
Before you press the "Brake" button, the car moves at constant velocity. You can find the distance traveled during this delay period by using Equation 2.3. Once the "Brake" button is pressed, the
motion is at constant acceleration. Note that the initial velocity and final velocity are known. The time is calculated by the applet. You can use the equations in Table 2.1 to determine the
acceleration and distance traveled by the car during this time interval. You may want to revisit this applet when you get to Section 4.8. You can then find the acceleration by determining the
friction force and using Newton's second law of motion. | {"url":"http://www.mhhe.com/physsci/physical/jones/ol02-6.htm","timestamp":"2014-04-21T09:36:27Z","content_type":null,"content_length":"3379","record_id":"<urn:uuid:a72fc322-68de-4afa-aa83-2df4123b1991>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
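For reference, the relations being invoked are the standard kinematics results (presumably the content of Equation 2.3 and Table 2.1 in the text): constant velocity before the brake is pressed, then constant acceleration after it:

d_{\text{delay}} = v_0\, t_{\text{delay}}, \qquad a = \frac{v - v_0}{t}, \qquad d = \frac{v_0 + v}{2}\, t, \qquad v^2 = v_0^2 + 2ad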
Cubic Splines
Cubic splines are used to fit a smooth curve to a series of points with a piecewise series of cubic polynomial curves. In addition to their use in interpolation, they are of particular interest to
engineers because the spline is defined as the shape that a thin flexible beam (of constant flexural stiffness) would take up if it was constrained to pass through the defined points. This post will
present an Excel User Defined Function (UDF) to generate a “natural” cubic spline for any series of 3 or more points. Later posts will look at alternative spline formulations, and applications of
the cubic spline to structural analysis.
A cubic spline is defined as the curve that, for any two adjacent internal points, satisfies the following:
1. The curve passes exactly through both points
2. The slope of the curve at the end points is equal to the slope of the adjacent segments
3. The curvature of the curve at the end points is equal to the curvature of the adjacent segments
Alternative provisions for the end segments will generate different spline curves over the full extent of the curve. The most common provision for the ends is that the curvature is zero at both
ends. This is known as a “natural cubic spline”. In a structural analysis context this corresponds to a beam that is free to rotate at both ends, but is constrained in position at the ends and a
number of internal points.
Further details of the theory of cubic splines, and an algorithm for generating natural cubic splines, are given in this Wikipedia article.
An excel spreadsheet with a UDF for generating cubic splines, based on the algorithm in the Wikipedia article, can be downloaded from: CSplineA.zip
The download is open source, and full VBA code for the UDF is freely accessible.
Example screen shots from this file are shown below:
“Dummy” data points at each end allow the curvature at the start and end points to be adjusted to the required value.
Bending moments are calculated by multiplying the curvature at each point by the beam flexural stiffness, EI.
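As an aside (not part of the workbook), R's built-in splinefun() in the stats package also fits a natural cubic spline, so it gives an independent check on the UDF for the same knots; the knot values below are arbitrary:

# Independent check of a natural cubic spline using R's stats package
x <- c(0, 1, 2, 4, 7)                      # knot x values (ascending)
y <- c(0, 2, 1, 3, 0)                      # knot y values
f <- splinefun(x, y, method = "natural")   # zero second derivative at both ends
xi <- seq(0, 7, by = 0.25)
cbind(xi, f(xi))                           # points to compare against CSplineA
f(x, deriv = 2)                            # curvatures at the knots (ends ~ 0)

Multiplying the deriv = 2 values by EI reproduces the bending moments described above (for small slopes, where the curvature is approximately d2y/dx2).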
25 Responses to Cubic Splines
1. Thank you for this spreadsheet/macro. Incredibly useful and very quick!
□ Thanks for providing this. I am going to be trying it out.
2. can someone explain me how to use it??
□ Peter – I have just posted something about how to use array formulas with the CSplineA function as an example.
If you still have any questions after reading that, could you be more specific about what your problems are.
3. answer?
□ question?
Hi, Thank you for your demo and information about this spline method. I have converted this csplineA excel file to C#. Please check it at my site :)
Thank you
7. This looks like a very useful function, but when I open it in Excel:Mac 2008 there is an error in the spline results. Somewhere along the line, Excel thinks something is text when it should be a
number (#NAME?). Any idea on how to solve this problem for Macs? Thanks for your work on this, hope I can use it. Cheers
□ Unfortunately Excel for Mac 2008 does not have VBA, so no User Defined Function will work in that version. I believe that the latest Excel for Mac does have VBA restored, so UDFs should work
if you update, but I can’t guarantee it as I don’t have a Mac available for testing.
☆ Ah, figured as much. Thanks for the quick response. I’ll let you know if I find a workaround.
8. This is a great macro, thanks very much for sharing. I’m fairly new to spline interpolation so apologies if my question is obvious but is this a basis-spline? I read that a basis-spline would
only work on ascending values of x, but this macro works for non-ascending values of x. However, if this isn't a basis-spline, could you please briefly explain what type of spline you would
categorise it as? Thank you!
□ The x values do need to be in ascending order! The function will return a result with non-ascending x values (as long as no two adjacent values are equal), but the resulting curve makes no
For a curve where the x values may not be ascending the most common option used is a Bezier curve. There may be others, but I haven’t looked into it.
9. Thanks for posting this very useful code. One nit: you might mention that the interpolation x-values (Xint in the VBA code) needs to have at least three values in its range to get proper results.
I tried it for a single-cell Xint, which caused Xint to be passed in as a double. This flagged an error on the call to UBound(Xint) since that function works only on arrays. To get around that, I
If Not TypeName(Xint) Like "*()" Then
    ReDim Xint(1, 1)
End If
CSplineA returned results after this change, but then I found that calls with single cells for Xint returned grossly incorrect values. When I changed Xint to also include the XVal entries that
bracket my desired interpolation point, however, the problem was fixed.
□ Jim your code will create an empty array called Xint. I have modified the code to create an array XintTemp(1,1), copy Xint into that, then copy XintTemp into Xint. Also the value nint needs
to be set to 1.
I have only done quick testing, but it seems to be working OK.
Download from:
☆ Thanks for the quick reply.
10. Thanks Jim, I’ll have a look at that.
11. Hi sir. Currently I’m working on a project titled interpolation of planar curve with different parameterization and my supervisor told me to create a smooth B-spline curve using Microsoft Excel.
I am having difficulties with the task given and may i know if the example you have given above could be used to create the b-spline? Do you have any example for the B-spline Curve? Thank you in
□ The simplest way to create a smooth B-spline in Excel is to create an XY (scatter) chart from a set of points, and select the smoothed line option to connect the points. The resulting curve
is an example of a B-spline.
The cubic splines described here are also B-splines, so you could use example from here as well.
☆ Thanks for the reply, sir. Now, after doing the B-spline basis function calculations for the zeroth and first degree, how am I supposed to link the calculation to create a B-spline curve with
control points using Excel? And could you please enlighten me on the B-spline basis function calculations for the 2nd degree and onwards? I'm sorry for troubling you, sir, because my supervisor
is not providing me enough information. Thanks in advance.
Chaitin's Constant
A real number, represented by Ω (capital Omega) and also known as the Halting probability, whose digits are distributed so randomly that no attempt to find a rule for predicting them can ever be
found. Discovered by Gregory Chaitin, Ω is definable but not computable. It has no pattern or structure to it whatsoever, but consists instead of an infinitely long string of 0's and 1's in which
each digit is as unrelated to its predecessor as one coin toss is from the next. Although called a constant, it is not a constant in the sense that, for example, pi is, since its definition depends
on the arbitrary choice of computation model and programming language. For each such model or language, Ω is the probability that a randomly produced string will represent a program that, when run,
will eventually halt. To derive it, Chaitin considered all the possible programs that a hypothetical computer known as a Turing machine could run, and then looked for the probability that a program,
chosen at random from among all the possible programs, will halt. The work took him nearly 20 years, but he eventually showed that this halting probability turns Turing's question of whether a
program halts into a real number, somewhere between 0 and 1. Further, he showed that, just as there are no computable instructions for determining in advance whether a computer will halt, there are
also no instructions for determining the digits of Ω. Ω is uncomputable and unknowable: we don't know its value for any programming language and we never will. This is extraordinary enough in itself,
but Chaitin has found that Ω infects the whole of mathematics, placing fundamental limits on what we can know.
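In the standard formulation (for a prefix-free universal machine U, a detail the entry leaves implicit), the halting probability is defined by

    Ω = Σ 2^(−|p|), summed over all programs p on which U halts,

where |p| is the length of p in bits; prefix-freeness guarantees the sum converges to a value between 0 and 1.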
And Ω is just the beginning. There are more disturbing numbers – super-Omegas – whose degree of randomness is vastly greater even than that of Ω. If there were an omnipotent computer that could solve
the halting problem and evaluate Ω, this mega-brain would have its own unknowable halting probability called Ω'. And if there were a still more God-like machine that could find Ω', its halting
probability would be Ω". These higher Ωs, it has been recently discovered, are not meaningless abstractions. Ω', for instance, gives the probability that an infinite computation produces only a
finite amount of output. Ω" is equivalent to the probability that, during an infinite computation, a computer will fail to produce an output – for example, get no result from a computation and move
on to the next one – and that it will do this only a finite number of times. Ω and the Ω hierarchy are revealing to mathematicians an unsettling truth: the problems that we can hope ever to solve
form a tiny archipelago in a vast ocean of undecidability.
Computational Geometry and Topology
Monday, November 3
Computational Geometry and Topology
10:30 AM-12:30 PM
Room: Cheekwood
The traditional view of computational geometry is that it studies algorithms for discrete geometric problems such as computing the convex hull of a set of points. The emphasis is on combinatorial
methods and algorithms with fast asymptotic running time. A more recent development is the study of discrete topological problems motivated by questions of connectivity and continuity, a development
that complements traditionally strong numerical research.
This minisymposium offers an introduction to the wide spectrum of research in computational geometry and topology. The speakers will present leading edge research in geometric algorithm design and
demonstrate the continuity between geometry and topology.
Organizer: Herbert Edelsbrunner
University of Illinois, Urbana-Champaign
10:30 Kinetic Data Structures
Leonidas J. Guibas, Stanford University
11:00 Maintaining Delaunay Complexes under Motion in R^3
Michael A. Facello, Raindrop Geomagic Inc., Urbana, IL
11:30 Minimization of Mathematical Energies for Surfaces
John Sullivan, University of Minnesota, Minneapolis
12:00 Computing Homology Groups of Simplicial Complexes
Sumanta Guha, University of Wisconsin, Milwaukee
Hatcher, Allen - Department of Mathematics, Cornell University
• Allen Hatcher Copyright c 2002 by Cambridge University Press
• On the Diffeomorphism Group of S1 Allen Hatcher
• Algebraic topology can be roughly defined as the study of techniques for forming algebraic images of topological spaces. Most often these algebraic images are groups,
• THE COMPLEX OF FREE FACTORS OF A FREE GROUP Allen Hatcher* and Karen Vogtmann*
• Stable Homology by Scanning Variations on a Theorem of Galatius
• A Short Exposition of the Madsen-Weiss Theorem Allen Hatcher
• ISSN 1472-2739 (on-line) 1472-2747 (printed) 1253 Algebraic & Geometric Topology
• Pants Decompositions of Surfaces Allen Hatcher
• Finiteness of Classifying Spaces of Relative Diffeomorphism Groups of 3-Manifolds
• Measured Lamination Spaces for 3-Manifolds Allen Hatcher
• Notes on Introductory Point-Set Topology Allen Hatcher
• Hans Samelson Lie Algebras
• Selected Chapters of Geometry ETH Zurich, summer semester 1940
• German Garden Elizabeth von Arnim
• Corrections to the book Algebraic Topology by Allen Hatcher Some of these are more in the nature of clarifications than corrections. Many of the
• Version 2.1, May 2009 Allen Hatcher
• The Adams spectral sequence was invented as a tool for computing stable homotopy groups of spheres, and more generally the stable homotopy groups of any space.
• There are two Eilenberg-Moore spectral sequences that we shall consider, one for homology and the other for cohomology. In contrast with the situation for the
• Chapter 0 Preview 1 Chapter 0: A Preview
• Chapter 2 Quadratic Forms 1 2.1 Topographs
• The aim of this short preliminary chapter is to introduce a few of the most com-mon geometric concepts and constructions in algebraic topology. The exposition is
• Cohomology is an algebraic variant of homology, the result of a simple dualization in the definition. Not surprisingly, the cohomology groups H^i
• Topology of Cell Complexes Here we collect a number of basic topological facts about CW complexes for con-
• J. F. Adams, Algebraic Topology: a Student's Guide, Cambridge Univ. Press, 1972. J. F. Adams, Stable Homotopy and Generalised Homology, Univ. of Chicago Press, 1974.
• In Proposition 3.22 of the first edition of the book the ring structure in H was computed only for even n, but the calculation for odd n is not much harder so
• Something to add to the end of Section 1.2: Intuitively, loops are one-dimensional and homotopies between them are two-
• 236 Chapter 3 Cohomology Lemma 3.27. Let M be a manifold of dimension n and let A ⊂ M be a compact
• Simplicial CW Structures Appendix 533 CW Complexes with Simplicial Structures
• Correction to Algebraic Topology by Allen Hatcher The following corrects the last two paragraphs on page 335, Poincare duality with local
• The fundamental group 1(X) is especially useful when studying spaces of low dimension, as one would expect from its definition which involves only maps from
• Allen Hatcher Copyright c 2002 by Cambridge University Press
• Spaces of Incompressible Surfaces Allen Hatcher
• Boundary Curves of Incompressible Surfaces Allen Hatcher
• 204 Chapter 3 Cohomology Note: The following 25 pages are a revision, written in November 2001, of the published
• Version 2.1, May 2009 Allen Hatcher
• Basepoints and Homotopy Section 4.A 421 In the first part of this section we will use the action of π_1 on π_n to describe
• The Cyclic Cycle Complex of a Surface Allen Hatcher
• Allen Hatcher Copyright c 2002 by Cambridge University Press
• Isoperimetric Inequalities for Automorphism Groups of Free Groups
• CONFIGURATION SPACES OF RINGS AND WICKETS TARA E. BRENDLE AND ALLEN HATCHER
• The House with One Room The interesting feature of this 2-dimensional closed subspace of R^3
• Notes on Basic 3-Manifold Topology Allen Hatcher
• Notes on Basic 3-Manifold Topology Allen Hatcher
• ISSN numbers are printed here 1 Algebraic & Geometric Topology [Logo here]
• There are many situations in algebraic topology where the relationship between certain homotopy, homology, or cohomology groups is expressed perfectly by an exact
• A List of Recommended Books in Topology Allen Hatcher
• 56 Chapter 1 The Fundamental Group We come now to the second main topic of this chapter, covering spaces. We
• RATIONAL HOMOLOGY OF AUT(Fn) Allen Hatcher* and Karen Vogtmann*
• 102 Chapter 2 Homology The most important homology theory in algebraic topology, and the one we shall
• CERF THEORY FOR GRAPHS Allen Hatcher and Karen Vogtmann
• Homotopy theory begins with the homotopy groups π_n(X), which are the natural higher-dimensional analogs of the fundamental group. These higher homotopy
• Chapter 1 The Farey Diagram 1 1.1 Constructing the Farey Diagram
• The Classification of 3-Manifolds --A Brief Overview Allen Hatcher
• Poincare Duality Section 3.3 239 The Duality Theorem
• Diffeomorphism Groups of Reducible 3-Manifolds Allen Hatcher
• Triangulations of Surfaces Allen Hatcher
• Universal Coefficients for Homology Section 3.A 261 The main goal in this section is an algebraic formula for computing homology with
• 352 Chapter 4 Homotopy Theory CW Approximation
• Solution to Exercise 1 in Section 3.C. The CW complex hypothesis will be used only to have the homotopy extension property
• Topological Moduli Spaces of Knots Allen Hatcher
• Notes on Basic 3-Manifold Topology Allen Hatcher
• 350 Chapter 4 Homotopy Theory To fill in the missing step in this argument we will need a technical lemma about
• Bianchi Orbifolds of Small Discriminant Let OD be the ring of integers in the imaginary quadratic field Q(
• Chapter 1 The Farey Diagram 1 1.1 Constructing the Farey Diagram
• Chapter 0 Preview 1 Chapter 0: A Preview
• Math 3320 Prelim Solutions 1 1. In this problem let us call a rectangle whose length equals twice its width a domino.
• Chapter 3 Quadratic Fields 1 Quadratic Fields
• Chapter 2 Quadratic Forms 1 2.1 Topographs
• Math 3320 Take-Home Prelim 1 Rules: The only person you can communicate with about any of the problems is the | {"url":"http://www.osti.gov/eprints/topicpages/documents/starturl/01/255.html","timestamp":"2014-04-18T11:51:24Z","content_type":null,"content_length":"18653","record_id":"<urn:uuid:adf479d8-683a-4686-b0b4-0041c6d05827>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lakewood, CO SAT Math Tutor
Find a Lakewood, CO SAT Math Tutor
...I use my own methods to help students understand all kinds of rules more easily and learn how to graph a function. I usually give students all kinds of examples to practice for each concept. I
will prepare notes for the student in every session.
27 Subjects: including SAT math, calculus, physics, algebra 1
I am an expert in the LSAT, GMAT and GRE for graduate school admissions and the SAT and ACT for college admissions. This is my full-time career and I work hard to provide the best tutoring
possible. I have an honors degree from Brown University and an MBA from CU-Boulder with the number 1 rank in my class, as well as 99th percentile scores on these tests.
14 Subjects: including SAT math, geometry, GRE, algebra 1
...Throughout my high school and college careers I was constantly assisting friends, peers, and my younger sister with math and physics homework. I enjoyed helping other students to understand
the subject material they were struggling with. Through helping others, my excitement for learning was enhanced, which inspired me to pursue tutoring opportunities.
13 Subjects: including SAT math, calculus, physics, geometry
...I have been teaching at a University level for the last 8 years. I have spent the last four years studying mathematics education. I am currently taking a break from school, but do not want to
get away from teaching and tutoring mathematics.
13 Subjects: including SAT math, calculus, geometry, statistics
...I scored a 1300 on the SATs (660 Verbal, 640 Math), and over 700 on the GMAT. I took the ACT in June, 2012 and scored 32 in Math, which is in the 97th percentile. I took the ACT again in June
2013 and scored 32 overall (98th percentile) with subscores of 34 in science (99th percentile), 32 in R...
30 Subjects: including SAT math, chemistry, calculus, physics
Runge Kutta method
February 22nd 2010, 10:59 AM #1
Mar 2008
Runge Kutta method
Hi everyone
I am an engineering student and I am having a little difficulty with a bit of exam practice involving solving an ODE with Runge-Kutta. I am able to use both in most examples, but this one is a
little different.
Any help you can give me would be amazing, so thanks for anything you can help me with!
The Question:
over the interval $(0,\pi)$
and with the boundary conditions $u(0)=1, \frac{du}{dx}(0)=0$
I have solved it by using an auxiliary equation, as outlined in the first part of the question.
The second part of the question is where I have the trouble:
"For a system of first order equations on $(0,\pi]$
$\frac{du}{dx} = f(u, z, x)$
$\frac{dz}{dx} = g(u,z,x)$
with initial conditions $u(0)=u_0$ and $z(0)=z_0$, derive the second-order Runge-Kutta method based on using the trapezium rule to give approximations $u_k$ and $z_k$ to $u(x_k)$ and $z(x_k)$ for
$k = 1,\dots,N$"
I have devised two first-order ODEs:
I think these are correct, but I am unsure how to use them correctly. Every example we have from class uses a two-variable function and I cannot find anything on the internet about it (maybe I am
using the wrong names).
I look forward to what you guys have to say, cheers guys.
February 26th 2010, 01:54 PM #2
Quote: (original question quoted above)
Certainly looks like you've split the equation up into a first-order system correctly. I think the method you are trying to find is called 'the improved Euler method'. It would take too long to
derive the method here, but you'd be looking at using
$u_{k+1} = u_k + \frac{h}{2}\left(f(u_k,z_k,x_k) + f(u_k + h\,f(u_k,z_k,x_k),\; z_k + h\,g(u_k,z_k,x_k),\; x_{k+1})\right)$
$z_{k+1} = z_k + \frac{h}{2}\left(g(u_k,z_k,x_k) + g(u_k + h\,f(u_k,z_k,x_k),\; z_k + h\,g(u_k,z_k,x_k),\; x_{k+1})\right)$
If you want a bit of code to solve this, just let me have your email and I'll send one to you. Looks like you'll be doing well in that exam
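For anyone following along, here is a minimal Python sketch of that scheme (this is not the code offered in the post; the right-hand sides at the bottom are placeholder examples, not the ones from this problem):

    from math import pi

    def heun_system(f, g, u0, z0, x0, x_end, n):
        """Second-order Runge-Kutta (trapezium rule) for du/dx = f, dz/dx = g."""
        h = (x_end - x0) / n
        u, z, x = u0, z0, x0
        for _ in range(n):
            ku, kz = f(u, z, x), g(u, z, x)          # slopes at the left endpoint
            up, zp = u + h * ku, z + h * kz          # Euler predictor step
            u += 0.5 * h * (ku + f(up, zp, x + h))   # trapezium-rule corrector
            z += 0.5 * h * (kz + g(up, zp, x + h))
            x += h
        return u, z

    # Placeholder system u'' = -u rewritten as u' = z, z' = -u:
    f = lambda u, z, x: z
    g = lambda u, z, x: -u
    print(heun_system(f, g, 1.0, 0.0, 0.0, pi, 200))  # approx (-1.0, 0.0)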
Results 1 - 10 of 20
, 2000
"... KLAIM is an experimental programming language that supports a programming paradigm where both processes and data can be moved across di#erent computing environments. This paper presents the
mathematical foundations of the KLAIM type system; this system permits checking access rights violations of mo ..."
Cited by 51 (20 self)
Add to MetaCart
KLAIM is an experimental programming language that supports a programming paradigm where both processes and data can be moved across different computing environments. This paper presents the
mathematical foundations of the KLAIM type system; this system permits checking access rights violations of mobile agents. Types are used to describe the intentions (read, write, execute, ...) of
processes relative to the different localities with which they are willing to interact, or to which they want to migrate. Type checking then determines whether processes comply with the declared
intentions, and whether they have been assigned the necessary rights to perform the intended operations at the specified localities. The KLAIM type system encompasses both subtyping and recursively
defined types. The former occurs naturally when considering hierarchies of access rights, while the latter is needed to model migration of recursive processes. © 2000 Elsevier Science B.V. All rights
reserved.
"... Many polyvariant program analyses have been studied in the 1990s, including k-CFA, polymorphic splitting, and the cartesian product algorithm. The idea of polyvariance is to analyze functions
more than once and thereby obtain better precision for each call site. In this paper we present an equivalen ..."
Cited by 41 (7 self)
Add to MetaCart
Many polyvariant program analyses have been studied in the 1990s, including k-CFA, polymorphic splitting, and the cartesian product algorithm. The idea of polyvariance is to analyze functions more
than once and thereby obtain better precision for each call site. In this paper we present an equivalence theorem which relates a co-inductively defined family of polyvariant flow analyses and a
standard type system. The proof embodies a way of understanding polyvariant flow information in terms of union and intersection types, and, conversely, a way of understanding union and intersection
types in terms of polyvariant flow information. We use the theorem as basis for a new flow-type system in the spirit of the CIL λ-calculus of Wells, Dimock, Muller, and Turbak, in which types are
annotated with flow information. A flow-type system is useful as an interface between a flow-analysis algorithm and a program optimizer. Derived systematically via our equivalence theorem, our
flow-type system should be a g...
- Journal of Functional Programming , 2000
"... Algorithms for checking subtyping between recursive types lie at the core of many programming language implementations. But the fundamental theory of these algorithms and how they relate to
simpler declarative specifications is not widely understood, due in part to the difficulty of the available in ..."
Cited by 37 (4 self)
Add to MetaCart
Algorithms for checking subtyping between recursive types lie at the core of many programming language implementations. But the fundamental theory of these algorithms and how they relate to simpler
declarative specifications is not widely understood, due in part to the difficulty of the available introductions to the area. This tutorial paper offers an "end-to-end" introduction to recursive
types and subtyping algorithms, from basic theory to efficient implementation, set in the unifying mathematical framework of coinduction. 1. INTRODUCTION Recursively defined types in programming
languages and lambda-calculi come in two distinct varieties. Consider, for example, the type X described by the equation X = Nat → (Nat × X). An element of X is a function that maps a number to a
pair consisting of a number and a function of the same form. This type is often written more concisely as μX. Nat → (Nat × X). A variety of familiar recursive types such as lists and trees can be
defined analogou...
- In OOPSLA '93 Conference Proceedings , 1993
"... Over the last several years, much interesting work has been done in modelling object-oriented programming languages in terms of extensions of the bounded second-order lambda calculus, F .
Unfortunately, it has recently been shown by Pierce ([Pie92]) that type checking F is undecidable. Moreover, he ..."
Cited by 35 (2 self)
Add to MetaCart
Over the last several years, much interesting work has been done in modelling object-oriented programming languages in terms of extensions of the bounded second-order lambda calculus, F≤.
Unfortunately, it has recently been shown by Pierce ([Pie92]) that type checking F≤ is undecidable. Moreover, he showed that the undecidability arises in the seemingly simpler problem of determining
whether one type is a subtype of another. In [Bru93a, Bru93b], the first author introduced a statically-typed, functional, object-oriented programming language, TOOPL, which supports classes,
objects, methods, instance variables, subtypes, and inheritance. The semantics of TOOPL is based on F≤, so the question arises whether type checking in this language is decidable. In this paper we
show that type checking for TOOPLE, a minor variant of TOOPL (Typed Object-Oriented Programming Language), is decidable. The proof proceeds by showing that subtyping is decidable, that all terms of
TOOPLE have minimum types...
- In Proceedings of the 12th Annual IEEE Symposium on Logic in Computer Science (LICS , 1997
"... A subtyping 0 is entailed by a set of subtyping constraints C, written C j= 0 , if every valuation (mapping of type variables to ground types) that satisfies C also satisfies 0 . We study the
complexity of subtype entailment for simple types over lattices of base types. We show that: ..."
Cited by 29 (1 self)
Add to MetaCart
A subtyping τ ≤ τ′ is entailed by a set of subtyping constraints C, written C ⊨ τ ≤ τ′, if every valuation (mapping of type variables to ground types) that satisfies C also satisfies τ ≤ τ′. We
study the complexity of subtype entailment for simple types over lattices of base types. We show that:
• deciding C ⊨ τ ≤ τ′ is coNP-complete.
• deciding C ⊨ α ≤ β for consistent, atomic C and α, β atomic can be done in linear time.
The structural lower (coNP-hardness) and upper (membership in coNP) bounds as well as the optimal algorithm for atomic entailment are new. The coNP-hardness result indicates that entailment is
strictly harder than satisfiability, which is known to be in PTIME for lattices of base types. The proof of coNP-completeness gives an improved algorithm for deciding entailment and puts a precise
complexity-theoretic marker on the intuitive "exponential explosion" in the algorithm. Central to our results is a novel characterization of C ⊨ α ≤ β for atomic, co...
- ACM Transactions on Programming Languages and Systems , 1995
"... A constrained type consists of both a standard type and a constraint set. Such types enable efficient type inference for objectoriented languages with polymorphism and subtyping, as demonstrated
by Eifrig, Smith, and Trifonov. Until now, it has been unclear how expressive constrained types are. ..."
Cited by 20 (13 self)
Add to MetaCart
A constrained type consists of both a standard type and a constraint set. Such types enable efficient type inference for object-oriented languages with polymorphism and subtyping, as demonstrated
by Eifrig, Smith, and Trifonov. Until now, it has been unclear how expressive constrained types are. In this paper we prove that for a language without polymorphism, constrained types accept the
same programs as the type system of Amadio and Cardelli with subtyping and recursive types. This result gives a precise connection between constrained types and the standard notion of type. 1
Introduction A constrained type consists of both a standard type and a constraint set. For example,
λx.xx : (v → w) \ {v ≤ v → w}
Here, v and w are type variables. This typing says that the λ-term λx.xx has every type of the form v → w where v, w satisfy the constraint v ≤ v → w. Jens Palsberg, Laboratory for Computer
Science, Massachusetts Institute of Technology, NE43-340, 545 Technology Square, Cambridg...
, 2003
"... We show that the first-order theory of structural subtyping of non-recursive types is decidable. Let Σ be a language consisting of function symbols (representing type constructors) and C a
decidable structure in the relational language L containing a binary relation ≤. C represents primitive types; ..."
Cited by 18 (8 self)
Add to MetaCart
We show that the first-order theory of structural subtyping of non-recursive types is decidable. Let Σ be a language consisting of function symbols (representing type constructors) and C a decidable
structure in the relational language L containing a binary relation ≤. C represents primitive types; ≤ represents a subtype ordering. We introduce the notion of Σ-term-power of C, which generalizes
the structure arising in structural subtyping. The domain of the Σ-term-power of C is the set of Σ-terms over the set of elements of C. We show that the decidability of the first-order theory of C
implies the decidability of the first-order theory of the Σ-term-power of C. This result implies the decidability of the first-order theory of structural subtyping of non-recursive types.
- In Proceedings of the 25th International Colloquium on Automata, Languages, and Programming (ICALP , 1998
"... . We study entailment of structural and nonstructural recursive subtyping constraints. Constraints are formal inequalities between type expressions, interpreted over an ordered set of possibly
infinite labeled trees. The nonstructural ordering on trees is the one introduced by Amadio and Cardelli fo ..."
Cited by 15 (0 self)
Add to MetaCart
. We study entailment of structural and nonstructural recursive subtyping constraints. Constraints are formal inequalities between type expressions, interpreted over an ordered set of possibly
infinite labeled trees. The nonstructural ordering on trees is the one introduced by Amadio and Cardelli for subtyping with recursive types. The structural ordering compares only trees with common
shape. A constraint set entails an inequality if every assignment of meanings (trees) to type expressions that satisfies all the constraints also satisfies the inequality. In this paper we prove that
nonstructural subtype entailment is PSPACE-hard, both for finite trees (simple types) and infinite trees (recursive types). For the structural ordering we prove that subtype entailment over infinite
trees is PSPACE-complete, when the order on trees is generated from a lattice of type constants. Since structural subtype entailment over finite trees has been shown to be coNP-complete these are the
first comple...
, 2000
"... Recent work h& s h wn equivalences between various type systems and flow logics. Ideally, th translations upon wh= h such equivalences are basedshd&@ be faithful in th sense the information is
not lost in round-trip translations from flows to types and back or from types to flows and back. Building ..."
Cited by 11 (2 self)
Add to MetaCart
Recent work h& s h wn equivalences between various type systems and flow logics. Ideally, th translations upon wh= h such equivalences are basedshd&@ be faithful in th sense the information is not
lost in round-trip translations from flows to types and back or from types to flows and back. Building on t h work of Nielson Nielson and of Palsberg Pavlopoulou, we present t h firstfaithT#
translations between a class of finitary polyvariant flow analyses and a type system supporting polymorph@@ in th form of intersection and union types. Additionally, our flow/type correspondence
solves several open problems posed by Palsberg Pavlopoulou: (1) it expresses call-string based polyvariance (such as k-CFA) as well as argument based polyvariance; (2) it enjoys a subject reduction
property for flows as well as for types; and (3) it supports a flow-oriented perspectiverath# thh a type-oriented one. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=61241","timestamp":"2014-04-19T21:05:34Z","content_type":null,"content_length":"39205","record_id":"<urn:uuid:d2a9300a-8336-4723-b4e8-6beb010658a1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dow Jones Industrials: A Different Risk Perspective
The chart above shows the use of an over-looked measure for determining the risk of owning equities. The graphic depicts the Ulcer Index as applied to the Dow Jones Industrial Average (DJIA) since
October 1928 - which is when easily accessible archived records are available.
The index is sometimes applied as a metric for indicating the strength or weakness of a portfolio manager's performance. Most pertinently it addresses the issue of how the portfolio has suffered in
terms of its maximum draw-down. The notion of draw-down is useful for determining risk and volatility of the returns that one would actually experience in holding a portfolio (in this case the 30
stocks of the DJIA). It differs from the more commonly used Sharpe Ratio, which is based on the standard deviation of the returns.
The standard deviation measures the amount of variation around the average and is probably the most widely used measure of portfolio performance and investment risk. But the standard deviation
suffers from two major weaknesses for assessing the real nature of risk in holding financial assets. First, it takes the same view of variability above the average (which for a long investor is
desirable) as it does of variability below the average, which is clearly not desirable for an investor with a long portfolio.
The second and more fundamental problem with the Sharpe Ratio is that it fails to register the sequencing of returns. When calculating the Sharpe Ratio one can tabulate say the weekly or monthly
returns from a portfolio in one column, measure the standard deviation of the returns, convert it to an annualized rate, and then divide the portfolio's annualized return (strictly, its excess
return over the risk-free rate) by the annualized standard deviation. This gives rise to the Sharpe Ratio, which was popularized by Nobel laureate William Sharpe and is one of the most widely
followed benchmarks for assessing the skills (and
inherent risks) of a portfolio management strategy.
The problem with this approach can be glimpsed by thinking of randomly sorting the returns or sorting them in ascending or descending form. The actual returns experienced by an investor under many
sorting scenarios will be vastly different with sequences of losses/gains that will have entirely different characteristics to the actual series of returns. And yet the standard deviation of the
returns will be identical.
What is missing from the standard deviation metric is any sense of how much risk and discomfort is experienced by someone holding a portfolio as a result of the actual sequence of the returns. As
losses will often cluster this can only be properly reflected by reference to the notion of a draw-down and not in relation to variability or deviation from the average return of the series.
Draw downs
In essence the draw-down is the amount which a portfolio loses, tracked on a periodic basis, from its current level in relation to the high water mark of the portfolio's returns. The high-water mark
itself is a moving amount and in regard to the DJIA it can basically be seen as a continuous recording of the maximum value of the index that has been achieved as of the date of measurement. From
this high-water mark one can calculate the percentage change for each snapshot in time of the portfolio with respect to the current maximum value that the index has attained.
Sometimes this is depicted metaphorically by the notion of measuring the distance from the peak to the present valley as the time series develops. The notion of a maximum draw-down is simply to keep
track of the present valley (assuming that the index is not currently making a new high) in regard to the highest peak value.
The chart above uses the technique described by Peter Martin who developed the Ulcer Index and who describes the construction in some detail here. The Ulcer Index is described as follows:
Ulcer Index measures the depth and duration of percentage draw-downs in price from earlier highs. Technically, it is the square root of the mean of the squared percentage drops in value. The
greater a draw-down in value, and the longer it takes to recover to earlier highs, the higher the UI. The squaring effect penalizes large draw-downs proportionately more than small draw-downs.
The simple adaptation that I have introduced to the technique is to re-calculate the Ulcer Index for every trailing 52 week period from the extended DJIA time series. The high water-mark itself is
calculated from the very beginning of the series but the calculation of the squared percentage drops in value is done on the basis of summing the trailing 52 weeks only. This sum of squared
percentage drops in value is then divided by 52 and the square root is taken for the resulting value. Each of the values obtained forms a data point in the series displayed in the graph above.
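A minimal Python sketch of the trailing-52-week calculation just described (the weekly DJIA price series itself is not reproduced here, so the last lines only indicate usage):

    import numpy as np

    def rolling_ulcer_index(prices, window=52):
        """Trailing Ulcer Index: RMS of percentage drawdowns from the
        high-water mark, which is computed from the start of the series."""
        prices = np.asarray(prices, dtype=float)
        hwm = np.maximum.accumulate(prices)       # running high-water mark
        drawdown = 100.0 * (prices - hwm) / hwm   # percentage drop, <= 0
        ui = np.full(len(prices), np.nan)
        for i in range(window - 1, len(prices)):
            d = drawdown[i - window + 1 : i + 1]  # trailing 52 weekly points
            ui[i] = np.sqrt(np.mean(d ** 2))      # sqrt of mean squared drop
        return ui

    # weekly_closes stands in for the weekly DJIA series, not supplied here:
    # ui_series = rolling_ulcer_index(weekly_closes)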
The Ulcer Index is far more useful as a measurement of investor discomfort (which is why the originator decided to call it the Ulcer Index), since it calculates actual retracements in one's account
equity. There is no escaping the actual sequencing of returns by just measuring simple variability, as is implied in using the
standard deviation and the Sharpe Ratio.
Also evident on the graphic is the time it takes for the portfolio to regain its value in relation to its historic highs at the point that the measurement is made.
Just from observing the data it can be seen that the 1930s still far exceed the discomfort that would have been experienced by an investor over the past year.
However it is also significant to see that the recent values registered on the Ulcer Index are in excess of anything seen since the 1930s, and surpass the market drops of the 1970s and those seen in
the early 2000s.
Probability Problem
June 7th 2009, 08:05 AM
Probability Problem
Dear Forum , I am having trouble figuring out the following , any help would be appreciated...
A shipment of 60 inexpensive watches, including 9 that are defective, is sent to a department store. The receiving department selects 10 at random (from the 60 received) for testing, and rejects
the entire shipment if 1 or more in the sample is found defective. What is the probability that the shipment will be rejected?
thanks - AC- (Wondering)
June 7th 2009, 08:45 AM
Quote: (original question quoted above)
Let X be the number of defective units in the sample of 10.
P(X= or > 1)= P(X=1) + P(X=2) +...+ P(X=9) since 9 is the maximum number of defective units in the shipment.
P(X=1) is the probability of getting EXACTLY one defective unit, then all the other nine work.
= (9/60) (51/59)(50/58) (49/57) continue until you have 10 terms.
Doing this for all nine cases is a lot of work so we write:
P(X>=1) = 1 - P(X <1)
P(X < 1) = P(X=0) which is the probability of getting 0 defective units. This means all 10 units chosen at random will work:
51/60 is the probability of the first one working
50/59 is the probability of the second one working, given that the first one worked
... write them all out and multiply them; each factor is conditional on the previous draws (the multiplication rule for conditional probabilities).
1 - the product above is your answer.
Good luck!
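As a quick sanity check of the complement calculation above, counting with binomial coefficients gives the same product; a short Python sketch:

    from math import comb

    # P(reject) = 1 - P(no defectives among the 10 sampled)
    p_reject = 1 - comb(51, 10) / comb(60, 10)
    print(round(p_reject, 4))  # about 0.8305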
June 7th 2009, 10:21 AM
Quote: (original question quoted above)
I have a longer solution: if the receiving department finds the first defective unit, they will reject, so the probability is the sum over where the first defective appears:
(d) + (n,d) + (n,n,d) + (n,n,n,d) + (n,n,n,n,d) + (n,n,n,n,n,d) + ... + (n,n,n,n,n,n,n,n,n,d)
The probability that they find a defective on the first unit selected (and hence reject) is $\frac{9}{60}$,
or they will find the first defective on the second selection, with probability
$\left(\frac{51}{60}\right)\left(\frac{9}{59}\right)$
or on the third selection, with probability
$\left(\frac{51}{60}\right)\left(\frac{50}{59}\right)\left(\frac{9}{58}\right)$
or on the fourth selection:
$\left(\frac{51}{60}\right)\left(\frac{50}{59}\right)\left(\frac{49}{58}\right)\left(\frac{9}{57}\right)$
or on the fifth selection:
$\left(\frac{51}{60}\right)\left(\frac{50}{59}\right)\left(\frac{49}{58}\right)\left(\frac{48}{57}\right)\left(\frac{9}{56}\right)$
Find all ten of these terms, then add them up; the sum is the required probability.
June 7th 2009, 10:25 AM
Quote: (first reply quoted above)
Your solution would be correct if the question asked for the probability they will get one defective, but the question asks for the probability they will reject. If they find a defective (at any
time) they will reject, so they will not continue selecting once they find just one defective; that is how I understand the question.
June 7th 2009, 10:28 AM
Is that not easy? | {"url":"http://mathhelpforum.com/statistics/92091-probability-problem-print.html","timestamp":"2014-04-17T19:56:27Z","content_type":null,"content_length":"11873","record_id":"<urn:uuid:3f1f09f8-7893-41d5-8fd3-8a2118d3b723>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Program that takes characters from user in an array and prints them with their ASCII values.(Do not use 'atoi' function)
#include <stdio.h>

int main()
{
    char a[10];
    int i, n, a1[10];
    printf("\n Please Give The Value of N: ");
    scanf("%d", &n);
    for (i = 0; i < n; i++)
    {
        printf("\n enter value of a[%d] :", i);
        scanf(" %c", &a[i]);  /* leading space skips the pending newline */
        a1[i] = a[i];         /* char-to-int conversion yields the ASCII code */
        printf("\n value of %c = %d ", a[i], a1[i]);
    }
    return 0;
}
*********************** OUTPUT *********************************************
Please Give The Value of N: 5
enter value of a[0] :a
value of a = 97
enter value of a[1] :b
value of b = 98
enter value of a[2] :c
value of c = 99
enter value of a[3] :d
value of d = 100
enter value of a[4] :A
value of A = 65 | {"url":"http://www.dailyfreecode.com/code/characters-user-array-prints-them-ascii-1451.aspx","timestamp":"2014-04-18T23:20:34Z","content_type":null,"content_length":"38559","record_id":"<urn:uuid:b45e5794-eaa9-4e70-b435-d43d4cadb04a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
defining piecewise function in matlab
How would you write piecewise functions in matlab that can take vector inputs?
Here's a function that I'm trying to write.
function y=g(x)
if x==0
    y = 0;
else
    y = (1 - cos(x))./x;   % body reconstructed to match the outputs quoted below
end
If I call g([0,pi/2]), I want it to return [0,2/pi], but what I get instead is [NaN,2/pi]. I'm guessing when I write x==0, matlab is comparing the entire input to 0. | {"url":"http://www.physicsforums.com/showthread.php?t=442055","timestamp":"2014-04-19T07:39:39Z","content_type":null,"content_length":"22428","record_id":"<urn:uuid:0ec5303c-2561-4c2d-9963-ed818223a45e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
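A common fix is logical indexing, so the x == 0 case is handled elementwise; a sketch (using the reconstructed formula from the snippet above):

    function y = g(x)
    % Vectorized piecewise evaluation: handle the x == 0 elements separately.
    y = zeros(size(x));              % limit value of (1 - cos x)/x at x == 0
    k = (x ~= 0);                    % elements where the general formula applies
    y(k) = (1 - cos(x(k))) ./ x(k);

With this version, g([0, pi/2]) returns [0, 2/pi] as intended.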
Geometry is one of those subjects that make students wonder whether they will ever use it again; yet it has many applications in daily life.
Geometry is especially useful in home building and improvement projects. If you want to find the floor area of a house, you can use geometry; this information is useful for telling an estate agent
how big your house is when you want to put it on the market. If you want to purchase varnish for a piece of wood, you have to estimate the amount of varnish you need by calculating the surface area
of the wood.
Importance of studying geometry
Geometry is considered an important field of study because it has many applications in daily life. For example, a sports car moving along a circular path applies the concepts of geometry. Stairs
are made in homes in consideration of the angles of geometry, and stairs are designed at 90 degrees. When you throw a round ball into a round basketball hoop, that is also an application of
geometry. Moreover, geometry is widely used in many fields, such as architecture, decorating, and engineering: in architecture it is used for building design and map making. In addition,
geometrical shapes such as circles, rectangles, polygons, and squares are used by artists. The most interesting example is that nature itself speaks of geometry, and you can find geometric shapes in
all things in nature.
I recently faced a lot of problems while learning the difference of two squares formula, but thanks to online math resources, I was able to learn it myself easily on the net.
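For reference, the identity mentioned is: a^2 - b^2 = (a + b)(a - b).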
In daily life there is a lot of use of geometry by architects, decorators, engineers, and many other professionals in determining distances, volumes, angles, and areas, and it helps in
understanding the proportions of things in the universe. There is wide use of geometry in textile and fashion design and in countless other areas.
Geometry is used to help us with household tasks like putting carpet in a room: you need to know what the shape of the room is, and then you need to know the area formula for that shape; therefore
geometry is used in this way.
Geometry is at work everywhere you go. Without geometry, we would not be able to build things, manufacture things, or play sports with much success. Geometry not only makes everyday tasks possible,
it makes them easier by providing us with an exact science for calculating the measurements of shapes.
How do you calculate flowrate exiting a pipe?
Those connections aren't going to have too much of a measurable effect on your flowrate in this system. The main factors are:
-Where is the pump getting it's feed water (i.e. what elevation)
-Where is the pump discharging the water
-What does the pump curve look like
Probably the pump is rated for a specific flow rate due to the relatively constant geometry of pool pumping systems. Ask the vendor/pool store.
The pump controls the flow: if it puts out 100 gallons per minute for your system conditions, then each wye section will see approximately 50 gallons per minute in both the 2-inch sections and the
1-inch sections. The reducer is there to increase the velocity of the water so that the pool gets a bit of circulation going and the water doesn't remain stagnant. It does this to help the suspended
particulates reach the filter rather than settling to the bottom.
Your flow rate will split more or less evenly among the two sections after the wye, it wont be exact (that's life) due to the fact that your system isn't exactly a precision-built one, but it'll be
pretty close.
It is important here to note that the reducer doesn't act to reduce the flowrate (**read below), it transitions the pipe to a smaller diameter pipe where the flow velocity is increased, the effect of
the water having to change direction is measurable, but in this case probably insignificant.
**You are probably aware that pumps work on a curve. If you know your system conditions, you can calculate where on that curve your pump is sitting (a bit iterative, since the flowrate is found on
the curve from the system head, the system head is partially determined by pipe friction losses, and the pipe friction losses are determined by the flowrate!). The losses from all of your elevation
changes, fittings, elbows, and friction due to flow rate will allow you to calculate the total head of the system, and thus get a good idea of where on its curve your pump should be operating.
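To illustrate the iteration, here is a minimal Python sketch that intersects an assumed pump curve with an assumed system curve; the coefficients are made up for illustration, and real values come from the pump datasheet and a head-loss calculation for the actual piping layout:

    # Illustrative curves only; not from any real pump or pool system.
    def pump_head(q):            # head the pump produces at flow q (gpm)
        return 60.0 - 0.004 * q ** 2

    def system_head(q):          # static lift plus friction losses (~ q^2)
        return 10.0 + 0.002 * q ** 2

    def operating_point(lo=0.0, hi=200.0, tol=1e-6):
        """Bisect on pump_head(q) - system_head(q): the crossing is the
        flow the system actually settles at."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if pump_head(mid) > system_head(mid):
                lo = mid         # pump still has surplus head at this flow
            else:
                hi = mid
        return 0.5 * (lo + hi)

    q = operating_point()
    print(q, system_head(q))     # about 91.3 gpm at about 26.7 ft of head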
To answer your question about the elbows/valves/reducers effect on flow rate:
When we discuss things like this, we don't discuss how they affect flow rate, we care about how they affect losses.
Elbows will have the least effect of the three (arguably, if the valves are ball valves, the elbows and valves will result in similar losses) with the valve coming in a close second.
As I mentioned before, the reducer will have only a slight effect on the system head (similar to an elbow), however because the downstream pipe is now smaller diameter, the flow velocity within it
will be increased (since the whole system maintains an equal flow rate). This increased velocity results in additional friction due to the interaction with the viscous water and the pipe walls (and
its imperfections), resulting in added system head that the pump must overcome, driving the pump back on its curve and resulting in a flowrate that is lower than if those 1" sections were 2"
sections, but discharging at a higher velocity.
In short: You need the pump curve to determine the flowrate without actually physically measuring it. | {"url":"http://www.physicsforums.com/showthread.php?t=671289","timestamp":"2014-04-19T07:41:34Z","content_type":null,"content_length":"27854","record_id":"<urn:uuid:c579c8df-2729-4461-9365-a2060018851f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
convex hull code of "competitive programming 1"
I'm trying to understand the "convex hull" algorithm in Steven Halim and Felix Halim's book Competitive Programming. The "convex hull" problem is, given a collection P of n points in a plane, to find
a subset CH(P) that forms the vertices of a convex polygon containing all the other points.
Their algorithm starts by sorting the points based on their angle with respect to a "pivot", namely, the bottommost and rightmost point in P.
I am having some problems understanding their angle_comp function — what it does, and what its purpose is. Can anyone help me to understand it?
#include <cmath>

typedef struct {
    double x, y;
} point;

point pivot;

// Twice the signed area of triangle abc (a cross product); this helper is
// called by angle_comp below but was not shown in the excerpt.
double area2(point a, point b, point c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// A function to compute the distance between two points:
double dist2(point a, point b)
{
    double dx = a.x - b.x;
    double dy = a.y - b.y;
    return sqrt(dx * dx + dy * dy);
}

// A function to compare the angles of two points with respect to the pivot:
bool angle_comp(point a, point b)
{
    if (fabs(area2(pivot, a, b)) < 10e-9)          // collinear with the pivot:
        return dist2(pivot, a) < dist2(pivot, b);  //   order by distance instead
    double d1x = a.x - pivot.x, d1y = a.y - pivot.y;
    double d2x = b.x - pivot.x, d2y = b.y - pivot.y;
    return atan2(d1y, d1x) < atan2(d2y, d2x);      // smaller angle comes first
}
c++ convex-hull
1 Answer
If I understand your question correctly, you want to know why the sort function is important? It is because your code uses Graham's scan, a method for finding the convex hull.
In order for Graham's scan to work efficiently, the points must be sorted by their angle relative to a fixed point.
The angle_comp function compares the angles of the two points A and B relative to the pivot point. This function, when plugged into std::sort, allows us to sort all the points around the pivot
based on their angle relative to, or distance from, the pivot.
Consider two points A and B around a pivot. If points A and B have the same angle (they are collinear with the pivot), or if one or both are very near the pivot, then we need an alternative way to
sort the two points: we sort by their distance from the pivot instead.
Else, we find out where the points A and B are relative to our pivot. We do this by subtracting the pivot from our points. So if, for instance, our pivot is (4, 3) and A is (5, 7), then A is 1 unit
right and 4 units up from our pivot. If our pivot were (0, 0), then A would be (1, 4). Hopefully that's understandable.
After we have the relative point, which is D, we then calculate the angle of the point relative to our pivot, or origin. atan2 takes two parameters, the y-value of our point and the x-value of our
point, and returns the angle of our point in radians. For atan2, 0 radians corresponds to any point (N, 0) on the positive x-axis, and as the radians increase, the point goes counter-clockwise
around the pivot, or origin.
We then subtract the angle of D2 from the angle of D1. If the angle of D2 is greater than the angle of D1, the result is negative and we return true, and std::sort can use that to order the points
counter-clockwise.
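For completeness, a sketch of how the comparator is typically wired up (the wrapper function here is illustrative, not from the book):

    #include <algorithm>
    #include <vector>

    // Sort the points by angle around the pivot; the caller is assumed to
    // have already swapped the bottommost, rightmost point into pts[0].
    void sort_by_angle(std::vector<point>& pts)
    {
        pivot = pts[0];
        std::sort(pts.begin() + 1, pts.end(), angle_comp);
    }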
A mean average of 60 on seven exams is needed to pass a course. On her first six exams, Paula received grades of 51, 72, 80, 62, 57, and 69.
B) An average of 70 is needed to get a C in the course. Is it possible for Paula to get a C? If so, what grade must she receive on the seventh exam?
C) If her lowest grade of the exams already taken is to be dropped, what grade must she receive on her last exam to pass the course?
D) If her lowest grade of the exams already taken is to be dropped, what grade must she receive on her last exam to get a C in the course?
someone please help... | {"url":"http://www.mathisfunforum.com/post.php?tid=20280","timestamp":"2014-04-17T12:45:41Z","content_type":null,"content_length":"16009","record_id":"<urn:uuid:4f001832-16d8-488a-88ed-c494c7ce3aca>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
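For what it's worth, the arithmetic works out as follows. Her six grades sum to 51 + 72 + 80 + 62 + 57 + 69 = 391. (Part A is not shown above; with all seven exams counted, passing requires 7 × 60 = 420 points, so she would need 420 − 391 = 29.)
B) An average of 70 over seven exams requires 490 points, so she needs 490 − 391 = 99, which is possible only if a 99 is attainable on the exam.
C) Dropping the 51 leaves 340 points over five exams; six exams averaging 60 require 360 points, so she needs 360 − 340 = 20.
D) Six exams averaging 70 require 420 points, so she needs 420 − 340 = 80.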
A DAML Ontology of Time Jerry R. Hobbs with contributions from George Ferguson, James Allen, Richard Fikes, Pat Hayes, Drew McDermott, Ian Niles, Adam Pease, Austin Tate, Mabry Tyson, and Richard
Waldinger November 2002 1. Introduction A number of sites, DAML contractors and others, have developed ontologies of time (e.g., DAML-S, Cycorp, CMU, Kestrel, Teknowledge). A group of us have decided
to collaborate to develop a representative ontology of time for DAML, which could then be used as is or elaborated on by others needing such an ontology. It is hoped that this collaboration will
result in an ontology that will be adopted much more widely than any single site's product would be. We envision three aspects to this effort: 1. An abstract characterization of the concepts and
their properties, expressed in first-order predicate calculus. 2. A translation of the abstract ontology into DAML code, to whatever extent possible given the current state of DAML expressivity. 3.
Mappings between the DAML ontology and individual sites' ontologies. DAML is under development and is thus a moving target, and that is why separating 1 and 2 is desirable. Level 1 can stabilize
before DAML does. A mapping in 3 may be an isomorphism, or it may be something more complicated. The reason for 3 is so DAML users can exploit the wide variety of resources for temporal reasoning
that are available. Moreover, it will aid the widespread use of the ontology if it can be linked easily to, for example, the temporal portion of Teknowledge's IEEE Standard Upper Ontology effort or
to Cycorp's soon-to-be widely used knowledge base. The purposes of the temporal ontology are both for expressing temporal aspects of the contents of web resources and for expressing time-related
properties of web services. The following document outlines the principal features of a representative DAML ontology of time. It is informed by ontology efforts at a number of sites and reflects but
elaborates on a tentative consensus during discussions at the last DAML meeting. The first three areas are spelled out in significant detail. The last three are just sketches of work to be done.
There are a number of places where it is stated that the ontology is silent about some issue. This is done to avoid controversial choices in the ontology where more than one treatment would be
reasonable and consistent. Often these issues involve identifying a one-dimensional entity and a zero-dimensional entity with one another. In general, functions are used where they are total and have
a unique value; predicates are used otherwise. The order of arguments usually follows the subject-object-object of preposition order in the most natural use in an English sentence (except for "Hath",
where topicalization applies). A note on notation: Conjunction (&) takes precedence over implication(-->) and equivalence (<-->). Formulas are assumed to be universally quantified on the variables
appearing in the antecedent of the highest-level implication. Thus, p1(x) & p2(y) --> q1(x,y) & q2(y) is to be interpreted as (A x,y)[[p1(x) & p2(y)] --> [q1(x,y) & q2(y)]] At the end of each section
there is a subsection on MAPPINGS. These are sketches of the relations between some highly developed temporal ontologies and the one outlined here.

2. Topological Temporal Relations

2.1. Instants and Intervals:

There are two subclasses of temporal-entity: instant and interval. instant(t) --> temporal-entity(t) interval(T) --> temporal-entity(T) These are the only two subclasses of temporal
entities. (A T)[temporal-entity(T) --> [instant(T) v interval(T)]] As we will see, intervals are, intuitively, things with extent and instants are, intuitively, point-like in that they have no
interior points. (In what follows, lower case t is used for instants, upper case T for intervals and for temporal-entities unspecified as to subtype. This is strictly for the reader's convenience,
and has no formal significance.) _begins_ and _ends_ are relations between instants and temporal entities. begins(t,T) --> instant(t) & temporal-entity(T) ends(t,T) --> instant(t) & temporal-entity
(T) For convenience, we can say that the beginning and end of an instant is itself. The converses of these rules are also true. instant(t) <--> begins(t,t) instant(t) <--> ends(t,t) The beginnings
and ends of temporal entities, if they exist, are unique. temporal-entity(T) & begins(t1,T) & begins(t2,T) --> t1=t2 temporal-entity(T) & ends(t1,T) & ends(t2,T) --> t1=t2 As will be seen in Section
2.4, in one approach to infinite intervals, a positively infinite interval has no end, and a negatively infinite interval has no beginning. Hence, we use the relations "begins" and "ends" in the core
ontology, rather than defining functions "beginning-of" and "end-of", since the functions would not be total. They can be defined in an extension of the core ontology that posits instants at positive
and negative infinity. _inside_ is a relation between an instant and an interval. inside(t,T) --> instant(t) & interval(T) This concept of inside is not intended to include beginnings and ends of
intervals, as will be seen below. It will be useful in characterizing clock and calendar terms to have a relation between instants and intervals that says that the instant is inside or the beginning
of the interval. (A t,T)[begins-or-in(t,T) <--> [begins(t,T) v inside(t,T)]] time-between is a relation among a temporal entity and two instants. time-between(T,t1,t2) --> temporal-entity(T) &
instant(t1) & instant(t2) The two instants are the beginning and end points of the temporal entity. (A t1,t2)[t1 =/= t2 --> (A T)[time-between(T,t1,t2) <--> begins(t1,T) & ends(t2,T)]] The ontology
is silent about whether the time from t to t, if it exists, is identical to the instant t. The ontology is silent about whether intervals _consist of_ instants. The core ontology is silent about
whether intervals are uniquely determined by their beginnings and ends. This issue is dealt with in Section 2.4. We can define a proper-interval as one whose beginning and end are not identical. (A
T)[proper-interval(T) <--> interval(T) & (A t1,t2)[begins(t1,T) & ends(t2,T) --> t1 =/= t2]] A half-infinite or infinite interval, by this definition, is proper. The ontology is silent about whether
there are any intervals that are not proper intervals.

2.2. Before:

There is a before relation on temporal entities, which gives directionality to time. If temporal-entity T1 is before
temporal-entity T2, then the end of T1 is before the beginning of T2. Thus, before can be considered to be basic to instants and derived for intervals. (A T1,T2)[before(T1,T2) <--> (E t1,t2)[ends
(t1,T1) & begins(t2,T2) & before(t1,t2)]] The before relation is anti-reflexive, anti-symmetric and transitive. before(T1,T2) --> T1 =/= T2 before(T1,T2) --> ~before(T2,T1) before(T1,T2) & before
(T2,T3) --> before(T1,T3) The end of an interval is not before the beginning of the interval. interval(T) & begins(t1,T) & ends(t2,T) --> ~before(t2,t1) The beginning of a proper interval is before
the end of the interval. proper-interval(T) & begins(t1,T) & ends(t2,T) --> before(t1,t2) The converse of this is a theorem. begins(t1,T) & ends(t2,T) & before(t1,t2) --> proper-interval(T) If one
instant is before another, there is a time between them. instant(t1) & instant(t2) & before(t1,t2) --> (E T) time-between(T,t1,t2) The ontology is silent about whether there is a time from t to t. If
an instant is inside a proper interval, then the beginning of the interval is before the instant, which is before the end of the interval. This is the principal property of "inside". inside(t,T) &
begins(t1,T) & ends(t2,T) --> before(t1,t) & before(t,t2) The converse of this condition is called Convexity and is discussed in Section 2.4. The relation "after" is defined in terms of "before".
after(T1,T2) <--> before(T2,T1) The basic ontology is silent about whether time is linearly ordered. Thus it supports theories of time, such as the branching futures theory, which conflate time and
possibility or knowledge. This issue is discussed further in Section 2.4. The basic ontology is silent about whether time is dense, that is, whether between any two instants there is a third instant.
Thus it supports theories in which time consists of discrete instants. This issue is discussed further in Section 2.4.

2.3. Interval Relations:

The relations between intervals defined in Allen's
temporal interval calculus (Allen, 1984; Allen and Kautz, 1985; Allen and Hayes, 1989; Allen and Ferguson, 1997) can be defined in a relatively straightforward fashion in terms of "before" and
identity on the beginning and end points. It is a bit more complicated than the reader might at first suspect, because allowance has to be made for the possibility of infinite intervals. Where one of
the intervals could be infinite, the relation between the end points has to be conditionalized on their existence. The standard interval calculus assumes all intervals are proper, and we will do that
here too. The definitions of the interval relations in terms of "before" relations among their beginning and end points, when they exist, are given by the following axioms. In these axioms, t1 and t2
are the beginning and end of interval T1; t3 and t4 are the beginning and end of T2. (A T1,T2)[int-equals(T1,T2) <--> [proper-interval(T1) & proper-interval(T2) & (A t1)[begins(t1,T1) <--> begins
(t1,T2)] & (A t2)[ends(t2,T1) <--> ends(t2,T2)]]] int-before(T1,T2) <--> proper-interval(T1) & proper-interval(T2) & before(T1,T2) (A T1,T2)[int-meets(T1,T2) <--> [proper-interval(T1) &
proper-interval(T2) & (E t)[ends(t,T1) & begins(t,T2)]]] (A T1,T2)[int-overlaps(T1,T2) <--> [proper-interval(T1) & proper-interval(T2) & (E t2,t3)[ends(t2,T1) & begins(t3,T2) & before(t3,t2) & (A t1)
[begins(t1,T1) --> before(t1,t3)] & (A t4)[ends(t4,T2) --> before(t2,t4)]]]] (A T1,T2)[int-starts(T1,T2) <--> [proper-interval(T1) & proper-interval(T2) & (E t2)[ends(t2,T1) & (A t1)[begins(t1,T1)
<--> begins(t1,T2)] & (A t4)[ends(t4,T2) --> before(t2,t4)]]]] (A T1,T2)[int-during(T1,T2) <--> [proper-interval(T1) & proper-interval(T2) & (E t1,t2)[begins(t1,T1) & ends(t2,T1) & (A t3)[begins
(t3,T2) --> before(t3,t1)] & (A t4)[ends(t4,T2) --> before(t2,t4)]]]] (A T1,T2)[int-finishes(T1,T2) <--> [proper-interval(T1) & proper-interval(T2) & (E t1)[begins(t1,T1) & (A t3)[begins(t3,T2) -->
before(t3,t1)] & (A t4)[ends(t4,T2) <--> ends(t4,T1)]]]] The inverse interval relations can be defined in terms of these relations. int-after(T1,T2) <--> int-before(T2,T1) int-met-by(T1,T2) <-->
int-meets(T2,T1) int-overlapped-by(T1,T2) <--> int-overlaps(T2,T1) int-started-by(T1,T2) <--> int-starts(T2,T1) int-contains(T1,T2) <--> int-during(T2,T1) int-finished-by(T1,T2) <--> int-finishes
(T2,T1) In addition, it will be useful below to have a single predicate for "starts or is during". This is called "starts-or-during". starts-or-during(T1,T2) <--> [int-starts(T1,T2) v int-during
(T1,T2)] It will also be useful to have a single predicate for intervals intersecting in at most an instant. nonoverlap(T1,T2) <--> [int-before(T1,T2) v int-after(T1,T2) v int-meets(T1,T2) v
int-met-by(T1,T2)] We could have as easily defined these in terms of "before" relations on the beginnings and ends of the intervals. So far, the concepts and axioms in the ontology of time would be
appropriate for scalar phenomena in general.

2.4. Optional Extensions:

In the basic ontology we have tried to remain neutral with respect to controversial issues, while producing a consistent and
usable axiomatization. In specific applications one may want to have stronger properties and thus take a stand on some of these issues. In this section, we describe some options, with the axioms
that would implement them. These axioms and any subsequent theorems depending on them are prefaced with a 0-argument proposition that says the option is being exercised. Thus the axiom for total
ordering is prefaced by the proposition Total-Order() --> Then to adopt the option of total ordering, one merely has to assert Total-Order() Total or Linear Ordering: In many applications, if not
most, it will be useful to assume that time is linearly or totally ordered. The axiom that expresses this is as follows: Total-Order() --> (A t1,t2)[instant(t1) & instant(t2) --> [before(t1,t2) v t1
= t2 v before(t2,t1)]] This eliminates models of time with branching futures and other conflations of time and possibility or limited knowledge. Infinity: There are two common ways of allowing
infinitely long intervals. Both are common enough that it is worth a little effort to construct the time ontology in a way that accommodates both. The statements of the axioms have been complicated
modestly in order to localize the difference between the two approaches to the choice between two pairs of simple existence axioms, which are themselves conditioned on 0-argument propositions
indicating the choice of that option. In the first approach, one posits time instants at positive and negative infinity. Half-infinite intervals are then intervals that have one of these as an
endpoint. Rather than introduce constants for these in the core ontology, we will have two predicates -- "posinf" and "neginf" -- which are true of only these points. The 0-argument proposition
corresponding to the choice of this approach will be Pts-at-Inf() In the second approach, there are intervals that have no beginning and/or end. "posinf-interval(T)" says that T is a half-infinite
interval with no end. "neginf-interval(T)" says that T is a half-infinite interval with no beginning. The 0-argument proposition corresponding to this option will be No-Pts-at-Inf() In the first
approach, "posinf-interval" and "neginf-interval" will not be true of anything. In the second approach "posinf" and "neginf" will not be true of anything. The axioms that specify the properties of
posinf, neginf, posinf-interval, and neginf-interval will be conditioned on the existence of such temporal entities. Thus, if an approach does not include them, the condition will never be satisfied.
These axioms can thus be part of the core theory. Axioms in the core theory will make the two approaches mutually exclusive. Which approach one takes then amounts to a choice of which of two pairs of existence axioms one uses. This choice is further localized to the decision between asserting "Pts-at-Inf()" and asserting "No-Pts-at-Inf()". The arguments of the predicates "posinf" and "neginf", if they
exist, are instants. posinf(t) --> instant(t) neginf(t) --> instant(t) The principal property of the point at positive infinity is that every other instant is before it. (A t,t1)[instant(t1) & posinf
(t) --> [before(t1,t) v t1 = t]] The next axiom entails that there are infinitely many instants after any given instant other than the point at positive infinity. (A t1)[instant(t1) & ~posinf(t1) -->
(E t2)[instant(t2) & before(t1,t2)]] Note that these two axioms are valid in an approach that does not admit a point at positive infinity; the antecedent of Axiom 2.4-4 will never be satisfied, and
the second conjunct in the antecedent of Axiom 2.4-5 will always be satisfied, guaranteeing that after every instant there will be another instant. The principal property of the point at negative
infinity is that it is before every other instant. (A t,t1)[instant(t1) & neginf(t) --> [before(t,t1) v t1 = t]] The next axiom entails that there are infinitely many instants before any given
instant other than the point at negative infinity. (A t1)[instant(t1) & ~neginf(t1) --> (E t2)[instant(t2) & before(t2,t1)]] Likewise these axioms are valid in an approach that does not admit a point
at negative infinity. In the second approach instants at positive and negative infinity are not posited, but intervals can have the properties "posinf-interval" and "neginf-interval". Because of
Axiom 2.1-4, if an interval has an end, it is not a positive infinite interval. Thus, a positive infinite interval cannot have an end. An instant inside a positive half-infinite interval has
infinitely many instants after it. (A t1,T)[posinf-interval(T) & inside(t1,T) --> (E t2)[before(t1,t2) & inside(t2,T)]] This axiom is valid in the first approach as well, since "posinf-interval" will
never be true and the antecedent will never be satisfied. Because of Axiom 2.1-3, if an interval has a beginning, it is not a negative infinite interval. Thus, a negative infinite interval cannot
have a beginning. Corresponding to Axiom 2.4-8 is the following axiom for "neginf-interval": (A t1,T)[neginf-interval(T) & inside(t1,T) --> (E t2)[before(t2,t1) & inside(t2,T)]] It may be useful to
have two more predicates. An interval is (at least) a half-infinite interval if either "posinf-interval" or "neginf-interval" is true of it. (A T)[halfinf-interval(T) <--> [posinf-interval(T) v
neginf-interval(T)]] An interval is an infinite interval if it is both positively and negatively infinite. (A T)[inf-interval(T) <--> [posinf-interval(T) & neginf-interval(T)]] Again these axioms are
valid in the first approach because the antecedents will never be true. Finally for the core ontology, we probably want to stipulate that one either uses the "posinf" approach or the
"posinf-interval" approach. This is accomplished by the following axiom. [(E t) posinf(t)] <--> ~[(E T) posinf-interval(T)] Similarly, [(E t) neginf(t)] <--> ~[(E T) neginf-interval(T)] Note that one
could use one approach for negative infinity and the other for positive infinity, although this development does not support it. This completes the treatment of infinite time in the core ontology.
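As an illustration of how the two options differ in a concrete model, consider the following sketch. It assumes instants are real-valued coordinates, which is one model of the ontology, not the ontology itself:

    import math

    # Approach 1 (Pts-at-Inf): instants at infinity exist, so every interval
    # has both endpoints; a half-infinite interval ends at PositiveInfinity.
    half_inf_1 = (5.0, math.inf)

    # Approach 2 (No-Pts-at-Inf): a positively infinite interval simply has
    # no end; here a missing endpoint is modeled as None.
    half_inf_2 = (5.0, None)

    def posinf_interval(T):
        # True under the second approach: the interval has no end at all.
        return T[1] is None

    assert posinf_interval(half_inf_2) and not posinf_interval(half_inf_1)

The articulation axioms given below then relate the two representations.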
The following two axioms give points at infinity if we are using the first approach, indicated by the proposition "Pts-at-Inf()" in the antecedent. Pts-at-Inf() --> (E t) posinf(t) Pts-at-Inf() -->
(E t) neginf(t) That is, there are instants out at positive and negative infinity, respectively, when the Points at Infinity approach is taken. Again, to adopt this approach, simply assert Pts-at-Inf
() When one adopts this approach, one can also, for convenience, extend the language to include the two constants, PositiveInfinity and NegativeInfinity, where posinf(PositiveInfinity) neginf
(NegativeInfinity) One can also extend the language to include the functions "beginning-of" and "end-of", defined as follows: beginning-of(T) = t <--> begins(t,T) end-of(T) = t <--> ends(t,T) We
stipulated the uniqueness of "begins" and "ends" in Section 2.1, and Axioms 2.4-12 and 2.4-13 rule out intervals with no beginnings or ends, so the functions will be total. The following two axioms
guarantee the existence of half infinite intervals if one takes the "No Points at Infinity" approach. No-Pts-at-Inf() --> (A t)[instant(t) --> (E T)[posinf-interval(T) & begins(t,T)]] No-Pts-at-Inf()
--> (A t)[instant(t) --> (E T)[neginf-interval(T) & ends(t,T)]] To specify that we are using the second approach, we would assert No-Pts-at-Inf() Suppose we wish to map between the two ontologies.
Suppose the predicates and constants in the theory using the first approach are subscripted with 1 and the predicates in the theory using the second approach are subscripted with 2. The domains of
the two theories are the same. All predicates and functions of the two theories are equivalent with the exception of "begin", "ends", "beginning-of", "end-of", "posinf", "neginf", "posinf-interval",
and "neginf-interval". These are related by the following two articulation axioms. posinf1(end-of1(T)) <--> posinf-interval2(T) neginf1(beginning-of1(T)) <--> neginf-interval2(T) Density: In some
applications it is useful to have the property of density, that is, the property that between any two distinct instants there is a third distinct instant. The axiom for this is as follows, where the
0-argument predicate indicating the exercising of this option is "Dense()": Dense() --> (A t1,t2)[instant(t1) & instant(t2) & before(t1,t2) --> (E t)[instant(t) & before(t1,t) & before(t,t2)]] This
is weaker than the mathematical property of continuity, which we will not axiomatize here. Convexity: In Section 2.2 we gave the axiom 2.2-8: inside(t,T) & begins(t1,T) & ends(t2,T) --> before(t1,t)
& before(t,t2) The converse of this condition is called Convexity and may be stronger than some users will want if they are modeling time as a partial ordering. (See Esoteric Note below.) To choose
the option of Convexity, simply assert the 0-argument proposition "Convex()". Convex() --> [begins(t1,T) & ends(t2,T) & before(t1,t) & before(t,t2) --> inside(t,T)] In the rest of this development
any property that depends on Convexity will be conditioned on the proposition "Convex()". Convexity implies that intervals are contiguous with respect to the before relation, in that an instant
between two other instants inside an interval is also inside the interval. Convex() --> [before(t1,t2) & before(t2,t3) & inside(t1,T) & inside(t3,T) --> inside(t2,T)] Extensional Collapse: In the
standard development of interval calculus, it is assumed that any intervals that are int-equals are identical. That is, intervals are uniquely determined by their beginning and end points. We can
call this the property of Extensional Collapse, and indicate it by the 0-argument proposition "Ext-Collapse()". Ext-Collapse() --> (A T1,T2)[int-equals(T1,T2) --> T1 = T2] If we think of different
intervals between the end points as being different ways the beginning can lead to the end, then Extensional Collapse can be seen as collapsing all these into a single "before" relation. In the rest
of this development we will point it out whenever any concept or property depends on Extensional Collapse. We often think of time as isomorphic to the real numbers. The set of real numbers is one
model of the theory presented here, including when one assumes total ordering, no points at infinity, density, convexity, and extensional collapse. Esoteric Note: Convexity, Extensional Collapse, and
Total Ordering are independent properties. This can be seen by considering the following four models based on directed graphs, where the arcs define the before relation: 1. An interval is any subset
of the paths between two nodes. (For example, time is partially ordered and an interval is any path from one node to another.) 2. An interval is the complete set of paths between two nodes. 3. An
interval consists of the beginning and end nodes and all the arcs between the beginning and end nodes but no intermediate nodes. So inside(t,T) is never true. (This is a hard model to motivate.) 4.
The instants are a set of discrete, linearly ordered nodes. There are multiple arcs between the nodes. The intervals are paths from one node to another, including the nodes. (For example, the
instants may be the successive states in the situation calculus and the intervals sequences of actions mapping one state into the next. Different actions can have the same start and end states.)
Model 1 has none of the three properties. Model 2 has Convexity and Extensional Collapse, but is not Totally Ordered. Model 3 is Totally Ordered and has Extensional Collapse but not Convexity. Model
4 is Totally Ordered and Convex, but lacks Extensional Collapse.

2.5. Linking Time and Events:

The time ontology links to other things in the world through four predicates -- at-time, during, holds,
and time-span. We assume that another ontology provides for the description of events -- either a general ontology of event structure abstractly conceived, or specific, domain-dependent ontologies
for specific domains. The term "eventuality" will be used to cover events, states, processes, propositions, states of affairs, and anything else that can be located with respect to time. The possible
natures of eventualities would be spelled out in the event ontologies. The term "eventuality" in this document is only an expositional convenience and has no formal role in the time ontology. The
predicate at-time relates an eventuality to an instant, and is intended to say that the eventuality holds, obtains, or is taking place at that time. at-time(e,t) --> instant(t) The predicate "during"
relates an eventuality to an interval, and is intended to say that the eventuality holds, obtains, or is taking place throughout that interval. during(e,T) --> interval(T) If an eventuality obtains
during an interval, it obtains at every instant inside the interval and during every subinterval. during(e,T) & inside(t,T) --> at-time(e,t) during(e,T) & int-during(T1,T) --> during(e,T1) Note that
this means that an intermittent activity, like writing a book, does not hold "during" the interval from the beginning to the end of the activity. Rather the "convex hull" of the activity, as defined in Section 6, holds "during" the interval. Whether a particular process is viewed as instantaneous or as occurring over an interval is a granularity decision that may vary according to the context of
use, and is assumed to be provided by the event ontology. Often the eventualities in the event ontology are best thought of as propositions, and the relation between these and times is most naturally
called "holds". "holds(e,T)" would say that e holds at instant T or during interval T. The predicate "holds" would be part of the event ontology, not part of the time ontology, although its second
argument would be provided by the time ontology. The designers of the event ontology may or may not want to relate "holds" to "at-time" and "during" by axioms such as the following: holds(e,t) &
instant(t) <--> at-time(e,t) holds(e,T) & interval(T) <--> during(e,T) Similarly, the event ontology may provide other ways of linking events with times, for example, by including a time parameter in
predications. p(x,t) The time ontology provides ways of reasoning about the t's; their use as arguments of predicates from another domain would be a feature of the ontology of the other domain. The
predicate time-span relates eventualities to instants or intervals. For contiguous states and processes, it tells the entire instant or interval for which the state or process obtains or takes place.
In Section 6 we will develop a treatment of discontinuous temporal sequences, and it will be useful to remain open to having these as time spans of eventualities as well. time-span(T,e) -->
temporal-entity(T) v tseq(T) time-span(T,e) & interval(T) --> during(e,T) time-span(t,e) & instant(t) --> at-time(e,t) time-span(T,e) & interval(T) & ~inside(t,T) & ~begins(t,T) & ~ends(t,T) -->
~at-time(e,t) time-span(t,e) & instant(t) & t1 =/= t --> ~at-time(e,t1) Whether the eventuality obtains at the beginning and end points of its time span is a matter for the event ontology to specify.
The silence here on this issue is the reason "time-span" is not defined in terms of necessary and sufficient conditions. The event ontology could extend temporal functions and predicates to apply to
events in the obvious way, e.g., ev-begins(t,e) <--> time-span(T,e) & begins(t,T) This would not be part of the time ontology, but would be consistent with it. Different communities have different
ways of representing the times and durations of states and events (processes). In one approach, states and events can both have durations, and at least events can be instantaneous. In another
approach, events can only be instantaneous and only states can have durations. In the latter approach, events that one might consider as having duration (e.g., heating water) are modeled as a state
of the system that is initiated and terminated by instantaneous events. That is, there is the instantaneous event of the beginning of the heating at the beginning of an interval, that transitions the
system into a state in which the water is heating. The state continues until another instantaneous event occurs---the stopping of the heating at the end of the interval. These two perspectives on
events are straightforwardly interdefinable in terms of the ontology we have provided. This is a matter for the event ontology to specify. This time ontology is neutral with respect to the choice.
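Before turning to mappings, a small sketch may help make the interval relations of Section 2.3 concrete. It assumes finite proper intervals over real-valued instants, so the conditions on possibly missing endpoints drop out; this is one model of the theory, not the theory itself:

    def int_equals(a, b):   return a == b
    def int_before(a, b):   return a[1] < b[0]
    def int_meets(a, b):    return a[1] == b[0]
    def int_overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
    def int_starts(a, b):   return a[0] == b[0] and a[1] < b[1]
    def int_during(a, b):   return b[0] < a[0] and a[1] < b[1]
    def int_finishes(a, b): return b[0] < a[0] and a[1] == b[1]

    def nonoverlap(a, b):
        # The intervals intersect in at most an instant.
        return int_before(a, b) or int_before(b, a) \
            or int_meets(a, b) or int_meets(b, a)

    # Intervals are (begin, end) pairs with begin < end (proper intervals).
    assert int_overlaps((1, 4), (3, 6)) and int_during((2, 3), (1, 5))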
MAPPINGS: Teknowledge's SUMO has pretty much the same ontology as presented here, though the names are slightly different. An instant is a TimePoint, an interval is a TimeInterval, beginning-of is
BeginFn, and so on. SUMO implements the Allen calculus. Cyc has functions #startingPoint and #endingPoint that apply to intervals, but also to eventualities. Cyc implements the Allen calculus. Cyc
uses a holdIn predicate to relate events to times, but to other events as well. Cyc defines a very rich set of derived concepts that are not defined here, but could be. For instant, Kestrel uses Time-Point, for interval they use Time-Interval, for beginning-of they use start-time-point, and so on. PSL axiomatizes before as a total ordering.

3. Measuring Durations

3.1. Temporal Units:

This
development assumes ordinary arithmetic is available. There are at least two approaches that can be taken toward measuring intervals. The first is to consider units of time as functions from
Intervals to Reals. Because of infinite intervals, the range must also include Infinity. minutes: Intervals --> Reals U {Infinity} minutes([5:14,5:17)) = 3 The other approach is to consider temporal
units to constitute a set of entities -- call it TemporalUnits -- and have a single function _duration_ mapping Intervals x TemporalUnits into the Reals. duration: Intervals x TemporalUnits --> Reals
U {Infinity} duration([5:14,5:17), *Minute*) = 3 The two approaches are interdefinable: seconds(T) = duration(T,*Second*) minutes(T) = duration(T,*Minute*) hours(T) = duration(T,*Hour*) days(T) =
duration(T,*Day*) weeks(T) = duration(T,*Week*) months(T) = duration(T,*Month*) years(T) = duration(T,*Year*) Ordinarily, the first is more convenient for stating specific facts about particular
units. The second is more convenient for stating general facts about all units. The constraints on the arguments of duration are as follows: duration(T,u) --> proper-interval(T) & temporal-unit(u)
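As an illustration of the interdefinability of the two styles, here is a minimal sketch. Representing an interval as a (begin, end) pair of real coordinates measured in seconds is an assumption of the sketch, not of the ontology, and months and years are omitted because, as noted below, they are mediated by the calendar:

    SECONDS_PER = {"*Second*": 1, "*Minute*": 60, "*Hour*": 3600,
                   "*Day*": 86400, "*Week*": 604800}

    def duration(T, u):
        # Second style: one function over Intervals x TemporalUnits.
        begin, end = T
        return (end - begin) / SECONDS_PER[u]

    def minutes(T):
        # First style: one function per unit, defined via duration.
        return duration(T, "*Minute*")

    assert minutes((0.0, 180.0)) == 3.0   # cf. minutes([5:14,5:17)) = 3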
The temporal units are as follows: temporal-unit(*Second*) & temporal-unit(*Minute*) & temporal-unit(*Hour*) & temporal-unit(*Day*) & temporal-unit(*Week*) & temporal-unit(*Month*) & temporal-unit
(*Year*) The arithmetic relations among the various units are as follows: seconds(T) = 60 * minutes(T) minutes(T) = 60 * hours(T) hours(T) = 24 * days(T) days(T) = 7 * weeks(T) months(T) = 12 * years
(T) The relation between days and months (and, to a lesser extent, years) will be specified as part of the ontology of clock and calendar below. On their own, however, month and year are legitimate
temporal units. In this development durations are treated as functions on intervals and units, and not as first class entities on their own, as in some approaches. In the latter approach, durations
are essentially equivalence classes of intervals of the same length, and the length of the duration is the length of the members of the class. The relation between an approach of this sort (indicated
by prefix D-) and the one presented here is straightforward. (A T,u,n)[duration(T,u) = n <--> (E d)[D-duration-of(T) = d & D-duration(d,u) = n]] At the present level of development of the temporal
ontology, this extra layer of representation seems superfluous. It may be more compelling, however, when the ontology is extended to deal with the combined durations of noncontiguous aggregates of
intervals.

3.2. Concatenation and Hath:

The multiplicative relations above don't tell the whole story of the relations among temporal units. Temporal units are _composed of_ smaller temporal units. A
larger temporal unit is a concatenation of smaller temporal units. We will first define a general relation of concatenation between an interval and a set of smaller intervals. Then we will introduce
a predicate "Hath" that specifies the number of smaller unit intervals that concatenate to a larger interval. Concatenation: A proper interval x is a concatenation of a set S of proper intervals if
and only if S covers all of x, and all members of S are subintervals of x and are mutually disjoint. (The third conjunct on the right side of <--> is because begins-or-in covers only beginning-of and
inside.) concatenation(x,S) <--> proper-interval(x) & (A z)[begins-or-in(z,x) --> (E y)[member(y,S) & begins-or-in(z,y)]] & (A z)[end-of(x) = z --> (E y)[member(y,S) & end-of(y) = z]] & (A y)[member
(y,S) --> [int-starts(y,x) v int-during(y,x) v int-finishes(y,x)]] & (A y1,y2)[member(y1,S) & member(y2,S) --> [y1=y2 v nonoverlap(y1,y2)]] The following properties of "concatenation" can be proved
as theorems: There are elements in S that start and finish x: concatenation(x,S) --> (E! y1)[member(y1,S) & int-starts(y1,x)] concatenation(x,S) --> (E! y2)[member(y2,S) & int-finishes(y2,x)] Except
for the first and last elements of S, every element of S has elements that precede and follow it. These theorems depend on the property of Convexity. Convex() --> [concatenation(x,S) --> (A y1)
[member(y1,S) --> [int-finishes(y1,x) v (E! y2)[member(y2,S) & int-meets(y1,y2)]]]] Convex() --> [concatenation(x,S) --> (A y2)[member(y2,S) --> [int-starts(y2,x) v (E! y1)[member(y1,S) & int-meets
(y1,y2)]]]] The uniqueness (E!) follows from nonoverlap. Hath: The basic predicate used here for expressing the composition of larger intervals out of smaller temporal intervals of unit length is
"Hath", from statements like "30 days hath September" and "60 minutes hath an hour." Its structure is Hath(N,u,x) meaning "N proper intervals of duration one unit u hath the proper interval x." That
is, if Hath(N,u,x) holds, then x is the concatenation of N unit intervals where the unit is u. For example, if x is some month of September then "Hath(30,*Day*,x)" would be true. "Hath" is defined as
follows: Hath(N,u,x) <--> (E S)[card(S) = N & (A z)[member(z,S) --> duration(z,u) = 1] & concatenation(x,S)] That is, x is the concatenation of a set S of N proper intervals of duration one unit u.
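As a concrete, much-weakened illustration over the real-line model used in the earlier sketches (checking only total length, not the full concatenation conditions):

    def hath(N, unit_len, x):
        # A numeric stand-in for Hath(N,u,x): x's length is N units.
        begin, end = x
        return abs((end - begin) - N * unit_len) < 1e-9

    def nth_unit(n, unit_len, x):
        # The nth unit subinterval of x, numbered 1..N calendar-style.
        begin, _ = x
        return (begin + (n - 1) * unit_len, begin + n * unit_len)

    september = (0.0, 30.0)            # a hypothetical 30-day month, in days
    assert hath(30, 1.0, september)    # Hath(30,*Day*,September)
    assert nth_unit(1, 1.0, september) == (0.0, 1.0)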
The type constraints on its arguments can be proved as a theorem: N is an integer (assuming that is the constraint on the value of card), u is a temporal unit, and x is a proper interval: Hath(N,u,x)
--> integer(N) & temporal-unit(u) & proper-interval(x) This treatment of concatenation will work for scalar phenomena in general. This treatment of Hath will work for measurable quantities in
general.

3.3. The Structure of Temporal Units:

We now define predicates true of intervals that are one temporal unit long. For example, "week" is a predicate true of intervals whose duration is one
week. second(T) <--> seconds(T) = 1 minute(T) <--> minutes(T) = 1 hour(T) <--> hours(T) = 1 day(T) <--> days(T) = 1 week(T) <--> weeks(T) = 1 month(T) <--> months(T) = 1 year(T) <--> years(T) = 1 We
are now in a position to state the relations between successive temporal units. minute(T) --> Hath(60,*Second*,T) hour(T) --> Hath(60,*Minute*,T) day(T) --> Hath(24,*Hour*,T) week(T) --> Hath
(7,*Day*,T) year(T) --> Hath(12,*Month*,T) The relations between months and days are dealt with in Section 4.4. MAPPINGS: Teknowledge's SUMO has some facts about the lengths of temporal units in
terms of smaller units. Cyc reifies durations. Cyc's notion of time covering subsets aims at the same concept dealt with here with Hath. Kestrel uses temporal units to specify the granularity of the
time representation. PSL reifies and axiomatizes durations. PSL includes a treatment of delays between events. A delay is the interval between the instants at which two events occur.

4. Clock and Calendar

4.1. Time Zones:

What hour of the day an instant is in is relative to the time zone. This is also true of minutes, since there are regions in the world, e.g., central Australia, where the
hours are not aligned with GMT hours, but are, e.g., offset half an hour. Probably seconds are not relative to the time zone. Days, weeks, months and years are also relative to the time zone, since,
e.g., 2002 began in the Eastern Standard time zone three hours before it began in the Pacific Standard time zone. Thus, predications about all clock and calendar intervals except seconds are relative
to a time zone. This can be carried to what seems like a ridiculous extreme, but turns out to yield a very concise treatment. The Common Era (C.E. or A.D.) is also relative to a time zone, since 2002
years ago, it began three hours earlier in what is now the Eastern Standard time zone than in what is now the Pacific Standard time zone. What we think of as the Common Era is in fact 24 (or more)
slightly displaced half-infinite intervals. (We leave B.C.E. to specialized ontologies.) The principal functions and predicates will specify a clock or calendar unit interval to be the nth such unit
in a larger interval. The time zone need not be specified in this predication if it is already built into the nature of the larger interval. That means that the time zone only needs to be specified
in the largest interval, that is, the Common Era; that time zone will be inherited by all smaller intervals. Thus, the Common Era can be considered as a function from time zones (or "time standards",
see below) to intervals. CE(z) = T Fortunately, this counterintuitive conceptualization will usually be invisible and, for example, will not be evident in the most useful expressions for time, in
Section 4.5 below. In fact, the CE predication functions as a good place to hide considerations of time zone when they are not relevant. (The BCE era is similarly time zone dependent, although this
will almost never be relevant.) Esoteric Aside: Strictly speaking, the use of CE as a function depends on Extensional Collapse. If we don't want to assume that, then we can use a corresponding
predicate -- CEPred(e,z) -- to mean era e is the Common Era in time zone z. We have been referring to time _zones_, but in fact it is more convenient to work in terms of what we might call the "time
standard" that is used in a time zone. That is, it is better to work with *PST* as a legal entity than with the *PST* zone as a geographical region. A time standard is a way of computing the time,
relative to a world-wide system of computing time. For each time standard, there is a zone, or geographical region, and a time of the year in which it is used for describing local times. Where and
when a time standard is used have to be axiomatized, and this involves interrelating a time ontology and a geographical ontology. These relations can be quite complex. Only the entities like *PST*
and *EDT*, the time standards, are part of the _time_ ontology. If we were to conflate time zones (i.e., geographical regions) and time standards, it would likely result in problems in several
situations. For example, the Eastern Standard zone and the Eastern Daylight zone are not identical, since most of Indiana is on Eastern Standard time all year. The state of Arizona and the Navajo
Indian Reservation, two overlapping geopolitical regions, have different time standards -- one is Pacific and one is Mountain. Time standards that seem equivalent, like Eastern Standard and Central
Daylight, should be thought of as separate entities. Whereas they function the same in the time ontology, they do not function the same in the ontology that articulates time and geography. For
example, it would be false to say those parts of Indiana shift in April from Eastern Standard to Central Daylight time. In this treatment it will be assumed there is a set of entities called time
standards. Some relations among time standards are discussed in Section 4.5.

4.2. Clock and Calendar Units:

The aim of this section is to explicate the various standard clock and calendar intervals.
A day as a calendar interval begins at and includes midnight and goes until but does not include the next midnight. By contrast, a day as a duration is any interval that is 24 hours in length. The
day as a duration was dealt with in Section 3. This section deals with the day as a calendar interval. Including the beginning but not the end of a calendar interval in the interval may strike some
as arbitrary. But we get a cleaner treatment if, for example, all times of the form 12:xx a.m., including 12:00 a.m. are part of the same hour and day, and all times of the form 10:15:xx, including
10:15:00, are part of the same minute. It is useful to have three ways of saying the same thing: the clock or calendar interval y is the nth clock or calendar interval of type u in a larger interval
x. This can be expressed as follows for minutes: minit(y,n,x) If the property of Extensional Collapse holds, then y is uniquely determined by n and x, and it can also be expressed as follows: minitFn
(n,x) = y For stating general properties about clock intervals, it is useful also to have the following way to express the same thing: clock-int(y,n,u,x) This expression says that y is the nth clock
interval of type u in x. For example, the proposition "clock-int(10:03,3,*Minute*,[10:00,11:00))" holds. Here u can be a member of the set of clock units, that is, one of *Second*, *Minute*, or
*Hour*. In addition, there is a calendar unit function with similar structure: cal-int(y,n,u,x) This says that y is the nth calendar interval of type u in x. For example, the proposition "cal-int
(12Mar2002,12,*Day*,Mar2002)" holds. Here u can be one of the calendar units *Day*, *Week*, *Month*, and *Year*. The unit *DayOfWeek* will be introduced below in Section 4.3. The relations among
these modes of expression are as follows: sec(y,n,x) <--> secFn(n,x) = y sec(y,n,x) <--> clock-int(y,n,*Second*,x) minit(y,n,x) <--> minitFn(n,x) = y minit(y,n,x) <--> clock-int(y,n,*Minute*,x) hr
(y,n,x) <--> hrFn(n,x) = y hr(y,n,x) <--> clock-int(y,n,*Hour*,x) da(y,n,x) <--> daFn(n,x) = y da(y,n,x) <--> cal-int(y,n,*Day*,x) mon(y,n,x) <--> monFn(n,x) = y mon(y,n,x) <--> cal-int
(y,n,*Month*,x) yr(y,n,x) <--> yrFn(n,x) = y yr(y,n,x) <--> cal-int(y,n,*Year*,x) Weeks and months are dealt with separately below. The am/pm designation of hours is represented by the function hr12.
hr12(y,n,*am*,x) <--> hr(y,n,x) hr12(y,n,*pm*,x) <--> hr(y,n+12,x) A distinction is made above between clocks and calendars because they differ in how they number their unit intervals. The first
minute of an hour is labelled with 0; for example, the first minute of the hour [10:00,11:00) is 10:00. The first day of a month is labelled with 1; the first day of March is March 1. We number
minutes for the number just completed; we number days for the day we are working on. Thus, if the larger unit has N smaller units, the argument n in clock-int runs from 0 to N-1, whereas in cal-int n
runs from 1 to N. To state properties true of both clock and calendar intervals, we can use the predicate cal-int and relate the two notions with the axiom cal-int(y,n,u,x) <--> clock-int(y,n-1,u,x)
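A small sketch of the numbering convention (indices only; no real calendar arithmetic is attempted here):

    def cal_to_clock(n):
        # cal-int(y,n,u,x) <--> clock-int(y,n-1,u,x)
        return n - 1

    def clock_to_cal(n):
        return n + 1

    # The minute beginning at 10:03 is clock minute 3 of the hour
    # [10:00,11:00), i.e., the 4th calendar minute of that hour.
    assert clock_to_cal(3) == 4 and cal_to_clock(1) == 0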
Note that the Common Era is a calendar interval in this sense, since it begins with 1 C.E. and not 0 C.E. The type constraints on the arguments of cal-int are as follows: cal-int(y,n,u,x) -->
interval(y) & integer(n) & temporal-unit(u) & interval(x) We allow x to be any interval, not just a calendar interval. When x does not begin at the beginning of a calendar unit of type u, we take y
to be the nth _full_ interval of type u in x. Thus, the first year of World War II, in this sense, is 1940, the first full year, and not 1939, the year it began. The first week of the year will be
the first full week. We can express this constraint as follows: cal-int(y,n,u,x) --> starts-or-during(y,x) Each of the calendar intervals is that unit long; for example, a calendar year is a year
long. cal-int(y,n,u,x) --> duration(y,u) = 1 There are properties relating to the labelling of clock and calendar intervals. If N u's hath x and y is the nth u in x, then n is between 1 and N.
cal-int(y,n,u,x) & Hath(N,u,x) --> 0 < n <= N There is a 1st small interval, and it starts the large interval. Hath(N,u,x) --> (E! y) cal-int(y,1,u,x) Hath(N,u,x) & cal-int(y,1,u,x) --> int-starts
(y,x) There is an Nth small interval, and it finishes the large interval. Hath(N,u,x) --> (E! y) cal-int(y,N,u,x) Hath(N,u,x) & cal-int(y,N,u,x) --> int-finishes(y,x) All but the last small interval
have a small interval that succeeds and is met by it. cal-int(y1,n,u,x) & Hath(N,u,x) & n < N --> (E! y2)[cal-int(y2,n+1,u,x) & int-meets(y1,y2)] All but the first small interval have a small
interval that precedes and meets it. cal-int(y2,n,u,x) & Hath(N,u,x) & 1 < n --> (E! y1)[cal-int(y1,n - 1,u,x) & int-meets(y1,y2)]

4.3. Weeks

A week is any seven consecutive days. A calendar week, by
contrast, according to a commonly adopted convention, starts at midnight, Saturday night, and goes to the next midnight, Saturday night. There are 52 weeks in a year, but there are not usually 52
calendar weeks in a year. Weeks are independent of months and years. However, we can still talk about the nth week in some larger period of time, e.g., the third week of the month or the fifth week
of the semester. So the same three modes of representation are appropriate for weeks as well. wk(y,n,x) <--> wkFn(n,x) = y wk(y,n,x) <--> cal-int(y,n,*Week*,x) As it happens, the n and x arguments
will often be irrelevant, when we only want to say that some period is a calendar week. The day of the week is a calendar interval of type *Day*. The nth day-of-the-week in a week is the nth day in
that interval. dayofweek(y,n,x) <--> da(y,n,x) & (E n1,x1) wk(x,n1,x1) The days of the week have special names in English. dayofweek(y,1,x) <--> Sunday(y,x) dayofweek(y,2,x) <--> Monday(y,x)
dayofweek(y,3,x) <--> Tuesday(y,x) dayofweek(y,4,x) <--> Wednesday(y,x) dayofweek(y,5,x) <--> Thursday(y,x) dayofweek(y,6,x) <--> Friday(y,x) dayofweek(y,7,x) <--> Saturday(y,x) For example, Sunday
(y,x) says that y is the Sunday of week x. Since a day of the week is also a calendar day, it is a theorem that it is a day long. dayofweek(y,n,x) --> day(y) One correspondence will anchor the cycle of weeks to the rest of the calendar, for example, saying that January 1, 2002 was the Tuesday of some week x. (A z)(E x) Tuesday(daFn(1,monFn(1,yrFn(2002,CE(z)))),x) We can define weekdays and
weekend days as follows: weekday(y,x) <--> [Monday(y,x) v Tuesday(y,x) v Wednesday(y,x) v Thursday(y,x) v Friday(y,x)] weekendday(y,x) <--> [Saturday(y,x) v Sunday(y,x)] As before, the use of the functions wkFn and dayofweekFn depends on Extensional Collapse.

4.4. Months and Years

The months have special names in English. mon(y,1,x) <--> January(y,x) mon(y,2,x) <--> February(y,x) mon(y,3,x)
<--> March(y,x) mon(y,4,x) <--> April(y,x) mon(y,5,x) <--> May(y,x) mon(y,6,x) <--> June(y,x) mon(y,7,x) <--> July(y,x) mon(y,8,x) <--> August(y,x) mon(y,9,x) <--> September(y,x) mon(y,10,x) <-->
October(y,x) mon(y,11,x) <--> November(y,x) mon(y,12,x) <--> December(y,x) The number of days in a month have to be spelled out for individual months. January(m,y) --> Hath(31,*Day*,m) March(m,y) -->
Hath(31,*Day*,m) April(m,y) --> Hath(30,*Day*,m) May(m,y) --> Hath(31,*Day*,m) June(m,y) --> Hath(30,*Day*,m) July(m,y) --> Hath(31,*Day*,m) August(m,y) --> Hath(31,*Day*,m) September(m,y) --> Hath
(30,*Day*,m) October(m,y) --> Hath(31,*Day*,m) November(m,y) --> Hath(30,*Day*,m) December(m,y) --> Hath(31,*Day*,m) The definition of a leap year is as follows: (A z)[leap-year(y) <--> (E n)[yr(y,n,CE(z)) & [divides(400,n) v [divides(4,n) & ~divides(100,n)]]]] We leave leap seconds to specialized ontologies. Now the number of days in February can be specified. February(m,y) & leap-year(y)
--> Hath(29,*Day*,m) February(m,y) & ~leap-year(y) --> Hath(28,*Day*,m) A reasonable approach to defining month as a unit of temporal measure would be to specify that the beginning and end points
have to be on the same days of successive months. The following rather ugly axiom captures this. month(T) <--> (E t1,t2,d1,d2,n,m1,m2,n1,y1,n2,e) [begins(t1,T) & ends(t2,T) & [begins-or-in(t1,d1) & begins-or-in(t2,d2) & da(d1,n,m1) & mon(m1,n1,y1) & yr(y1,n2,e) & da(d2,n,m2) & [mon(m2,n1+1,y1) v (E y2)[n1=12 & mon(m2,1,y2) & yr(y2,n2+1,e)]]]] The last disjunct takes care of months spanning
December and January. So the month as a measure of duration would be related to days as a measure of duration only indirectly, mediated by the calendar. It is possible to prove that months are
between 28 and 31 days. To say that July 4 is a holiday in the United States one could write (A d,m,y)[da(d,4,m) & July(m,y) --> holiday(d,USA)] Holidays like Easter can be defined in terms of this
ontology coupled with an ontology of the phases of the moon. Other calendar systems could be axiomatized similarly, and the BCE era could also be axiomatized in this framework. These are left as exercises for interested developers.

4.5. Time Stamps:

Standard notation for times lists the year, month, day, hour, minute, and second. It is useful to define a predication for this. time-of
(t,y,m,d,h,n,s,z) <--> begins-or-in(t,secFn(s,minitFn(n,hrFn(h,daFn(d, monFn(m,yrFn(y,CE(z)))))))) Alternatively (and not assuming Extensional Collapse), time-of(t,y,m,d,h,n,s,z) <--> (E s1,n1,h1,d1,m1,y1,e) [begins-or-in(t,s1) & sec(s1,s,n1) & minit(n1,n,h1) & hr(h1,h,d1) & da(d1,d,m1) & mon(m1,m,y1) & yr(y1,y,e) & CEPred(e,z)] For example, an instant t has the time 5:14:35pm PST,
Wednesday, February 6, 2002 if the following properties hold for t: time-of(t,2002,2,6,17,14,35,*PST*) (E w,x)[begins-or-in(t,w) & Wednesday(w,x)] The second line says that t is in the Wednesday w of
some week x. The relations among time zones can be expressed in terms of the "time-of" predicate. Two examples are as follows: h < 8 --> [time-of(t,y,m,d,h,n,s,*GMT*) <--> time-of
(t,y,m,d-1,h+16,n,s,*PST*)] h >= 8 --> [time-of(t,y,m,d,h,n,s,*GMT*) <--> time-of(t,y,m,d,h-8,n,s,*PST*)] time-of(t,y,m,d,h,n,s,*EST*) <--> time-of(t,y,m,d,h,n,s,*CDT*) The "time-of" predicate will
be convenient for doing temporal arithmetic. The predicate "time-of" has 8 arguments. It will be convenient in cases where exact times are not known or don't need to be specified to have functions
that identify each of the slots of a time description. These functions are also useful for partial implementations of the time ontology in versions of DAML based on description logic, such as
DAML+OIL. We introduce functions that allow "time-of" to be expressed as a collection of values of the functions: "second-of", "minute-of", etc. However, these functions cannot be applied to instants
directly, since an instant can have many "time-of" predications, one for each time zone, and alternate equivalent descriptions involving, for example, 90 minutes versus 1 hour and 30 minutes. Thus, we need an intervening "temporal description". An instant can have many temporal descriptions, and each temporal description has a unique value for "second-of", "minute-of", etc. Thus, "time-of
(t,2002,2,6,17,14,35,PST)", or 5:14:35pm PST, February 6, 2002, would be expressed by asserting of the instant t the property "temporal-description(d,t)", meaning that d is a temporal description of
t, and asserting for d the properties "year-of(d) = 2002", "month-of(d) = 2", etc. Coarser granularities on times can be expressed by leaving the finer-grained units unspecified. These functions can
be defined by the following axiom: (A t,y,m,d,h,n,s,z)[time-of(t,y,m,d,h,n,s,z) <--> (E d1)[temporal-description(d1,t) & year-of(d1) = y & month-of(d1) = m & day-of(d1) = d & hour-of(d1) = h &
minute-of(d1) = n & second-of(d1) = s & time-zone-of(d1) = z]] The domain of the functions is an entity of type "temporal description". The range of the functions is inherited from the constraints on
the arguments of "time of". (A d1,y)[year-of(d1) = y --> (E t)[temporal-description(d1,t)]] (A d1,m)[month-of(d1) = m --> (E t)[temporal-description(d1,t)]] (A d1,d)[day-of(d1) = d --> (E t)
[temporal-description(d1,t)]] (A d1,h)[hour-of(d1) = h --> (E t)[temporal-description(d1,t)]] (A d1,n)[minute-of(d1) = n --> (E t)[temporal-description(d1,t)]] (A d1,s)[second-of(d1) = s --> (E t)
[temporal-description(d1,t)]] (A d1,z)[time-zone-of(d1) = z --> (E t)[temporal-description(d1,t)]] MAPPINGS: Teknowledge's SUMO distinguishes between durations (e.g., HourFn) and clock and calendar
intervals (e.g., Hour). Time zones are treated as geographical regions. The treatment of dates and times via functions follows Cyc's treatment. Kestrel's roundabout attempts to state rather
straightforward facts about the clock and calendar are an excellent illustration of the lack of expressivity in DAML+OIL. The ISO standard for dates and times can be represented straightforwardly
with the time-of predicate or the unitFn functions.

5. Temporal Granularity

Useful background reading for this note includes Bettini et al. (2002), Fikes and Zhou, and Hobbs (1985). Very often in
reasoning about the world, we would like to treat an event that has extent as instantaneous, and we would like to express its time only down to a certain level of granularity. For example, we might
want to say that the election occurs on November 5, 2002, without specifying the hours, minutes, or seconds. We might want to say that the Thirty Years' War ended in 1648, without specifying the
month and day. For the most part, this can be done simply by being silent about the more detailed temporal properties. In Section 2.5 we introduced the predication "time-span(T,e)" relating events to
temporal entities, the relation "temporal-description(d,t)" relating a temporal entity to a description of the clock and calendar intervals it is included in, and the functions "second-of(d)",
"minute-of(d)", "hour-of(d)", "day-of(d)", "month-of(d)", and "year-of(d)". Suppose we know that an event occurs on a specific day, but we don't know the hour, or it is inappropriate to specify the
hour. Then we can specify the day-of, month-of, and year-of properties, but not the hour-of, minute-of, or second-of properties. For example, for the election e, we can say time-span(t,e),
temporal-description(d,t), day-of(d) = 5, month-of(d) = 11, year-of(d) = 2002 and no more. We can even remain silent about whether t is an instant or an interval. Sometimes it may be necessary to
talk explicitly about the granularity at which we are viewing the world. For that we need to become clear about what a granularity is, and how it functions in a reasoning system. A granularity G on a
set of entities S is defined by an indistinguishability relation, or equivalently, a cover of S, i.e. a set of sets of elements of S such that every element of S is an element of at least one element
of the cover. We will identify the granularity G with the cover. (A G,S)[cover(G,S) <--> (A x)[member(x,S) <--> (E s)[member(s,G) & member(x,s)]]] Two elements of S are indistinguishable with respect
to G if they are in the same element of G. (A x1,x2,G)[indisting(x1,x2,G) <--> (E s)[member(s,G) & member(x1,s) & member(x2,s)]] A granularity can be a partition of S, in which case every element of
G is an equivalence class. The indistinguishability relation is transitive in this case. A common case of this is where the classes are defined by the values of some given function f. (A G,S)[G =
f-gran(S,f) <--> [cover(G,S) & (A x1,x2)[indisting(x1,x2,G) <--> f(x1) = f(x2)]]] For example, if S is the set of descriptions of instants and f is the function "year-of", then G will be a
granularity on the time line that does not distinguish between two instants within the same calendar year. The granularities defined by Bettini et al. (2002) are essentially of this nature. They will
be discussed further after we have introduced temporal aggregates in Section 6 below. A granularity can also consist of overlapping sets, in which case the indistinguishability relation is not
transitive. A common example of this is in domains where there is some distance function d, and any two elements that are closer than a given distance a to each other are indistinguishable. We will
suppose d takes two entities and a unit u as its arguments and returns a real number. (A G,S)[G = d-gran(S,u,a) <--> [cover(G,S) & (A x1,x2)[indisting(x1,x2,G) <--> d(x1,x2,u) < a]]] For example,
suppose S is the set of instants, d is duration of the interval between the two instants, the unit u is *Minute*, and a is 1. Then G will be the granularity on the time line that does not distinguish
between instants that are less than a minute apart. Note that this is not transitive, because 9:34:10 is indistinguishable from 9:34:50, which is indistinguishable from 9:35:30, but the first and
last are more than a minute apart and are thus distinguishable. Both of these granularities are uniform over the set, but we can imagine wanting variable granularities. Suppose we are planning a
robbery. Before the week preceding the robbery, we may not care what time any events occur. All times are indistinguishable. The week preceding the robbery we may care only what day events take
place on. On the day of the robbery we may care about the hour in which an event occurs, and during the robbery itself we may want to time the events down to ten-second intervals. Such a granularity
could be defined as above; the formula would only be more complex. The utility of viewing the world under some granularity is that the task at hand becomes easier to reason about, because
distinctions that are possible in the world at large can be ignored in the task. One way of cashing this out in a theorem-proving framework is to treat the relevant indistinguishability relation as
equality. This in effect reduces the number of entities in the universe of discourse and makes available rapid theorem-proving techniques for equality such as paramodulation. We can express this
assumption with the axiom (5.1) (A x1,x2)[indisting(x1,x2,G) --> x1 = x2] for the relevant G. For a temporal ontology, if 0-length intervals are instants, this axiom has the effect of collapsing some
intervals into instants. There are several nearly equivalent ways of viewing the addition of such an axiom -- as a context shift, as a theory mapping, or as an extra antecedent condition. Context
shift: In some formalisms, contexts are explicitly represented. A context can be viewed as a set of sentences that are true in that context. Adding axiom (5.1) to that set of sentences shifts us to a
new context. Theory mapping: We can view each granularity as coinciding with a theory. Within each theory, entities that are indistinguishable with respect to that granularity are viewed as equal, so
that, for example, paramodulation can replace equals with equals. To reason about different granularities, there would be a "mediator theory" in which all the constant, function and predicate symbols
of the granular theories are subscripted with their granularities. So equality in a granular theory G would appear as the predicate "=_G" in the mediator theory. In the mediator theory paramodulation
is allowed with "true" equality, but not with the granular equality relations =_G. However, invariances such as if x =_G y, then [p_G(x) implies p_G(y)] hold in the mediator theory. Extra antecedent
condition: Suppose we have a predicate "under-granularity" that takes a granularity as its one argument and is defined as follows: (A g)[under-granularity(g) <--> (A x1,x2)[indisting(x1,x2,g) --> x1
= x2]] Then we can remain in the theory of the world at large, rather than moving to a subtheory. If we are using a granularity G, rather than proving a theorem P, we prove the theorem
under-granularity(G) --> P If the granularity G is transitive, and thus partitions S, adding axiom (5.1) should not get us into any trouble. However, if G is not transitive and consists of
overlapping sets, such as the epsilon neighborhood granularity, then contradictions can result. When we use (5.1) with such a granularity, we are risking contradiction in the hopes of efficiency
gains. Such a tradeoff must be judged on a case by case basis, depending on the task and on the reasoning engine used. 6. Aggregates of Temporal Entities 6.1. Describing Aggregates of Temporal
Entities In annotating temporal expressions in newspapers, Laurie Gerber encountered a number of problematic examples of temporal aggregates, including expressions like "every 3rd Monday in 2001",
"every morning for the last 4 years", "4 consecutive Sundays", "the 1st 9 months of 1997", "3 weekdays after today", "the 1st full day of competition", and "the 4th of 6 days of voting". We have
taken these as challenge problems for the representation of temporal aggregates, and attempted to develop convenient means for expressing the possible referents of these expressions. In this section,
we will assume the notation of set theory. Sets and elements of sets will be ordinary individuals, and relations such as "member" will be relations between such individuals. In particular, we will
use the relation "member" between an element of a set and the set, and the relation "subset" between two sets. We will use "Phi" to refer to the empty set. We will use the notation "{x}" for the
singleton set containing the element x. We will use the symbol "U" to refer to the union operation between two sets. The function "card" will map a set into its cardinality. In addition, for
convenience, we will make moderate use of second-order formulations, and quantify over predicate symbols. This could be eliminated with the use of an "apply" predicate and axiom schemas
systematically relating predicate symbols to corresponding individuals, e.g., the axiom schema for unary predicates p, (A x)[apply(*p*,x) <--> p(x)] It will be convenient to have a relation "ibefore"
that generalizes over several interval and instant relations, covering both "int-before" and "int-meets" for proper intervals. (A T1,T2)[ibefore(T1,T2) <--> [before(T1,T2) v [proper-interval(T1) &
proper-interval(T2) & int-meets(T1,T2)]]] It will also be useful to have a relation "iinside" that generalizes over all temporal entities and aggregates. We first define a predicate "iinside-1" that
generalizes over instants and intervals and covers "int-starts", "int-finishes" and "int-equals" as well as "int-during" for intervals. We break the definition into several cases. (A T1,T2)[iinside-1
(T1,T2) <--> [T1=T2 v [instant(T1) & proper-interval(T2) & inside(T1,T2)] v (E t)[begins(t,T1) & ends(t,T1) & proper-interval(T2) & inside(t,T2)] v [proper-interval(T1) & proper-interval(T2) &
[int-starts(T1,T2) v int-during(T1,T2) v int-finishes(T1,T2) v int-equals(T1,T2)]]]] The third disjunct in the definition is for the case of 0-length intervals, should they be allowed and distinct
from the corresponding instants. A temporal aggregate is first of all a set of temporal entities, but it has further structure. The relation "ibefore" imposes a natural order on some sets of temporal
entities, and we will use the predicate "tseq" to describe those sets. (A s)[tseq(s) <--> (A t)[member(t,s) --> temporal-entity(t)] & (A t1,t2)[member(t1,s) & member(t2,s) --> [t1 = t2 v ibefore
(t1,t2) v ibefore(t2,t1)]]] That is, a temporal sequence is a set of temporal entities totally ordered by the "ibefore" relation. A temporal sequence has no overlapping temporal entities. It will be
useful to have the notion of a temporal sequence whose elements all have a property p. (A s,p)[tseqp(s,p) <--> tseq(s) & (A t)[member(t,s) --> p(t)]] A uniform temporal sequence is one all of whose
members are of equal duration. (A s)[uniform-tseq(s) <--> (A t1,t2,u)[[member(t1,s) & member(t2,s) & temporal-unit(u)] --> duration(t1,u) = duration(t2,u)]] The same temporal aggregate can be broken up
into a set of intervals in many different ways. Thus it is useful to be able to talk about temporal sequences that are equivalent in the sense that they cover the same regions of time. (A s1,s2)
[equiv-tseq(s1,s2) <--> tseq(s1) & tseq(s2) & (A t)[temporal-entity(t) --> [(E t1)[member(t1,s1) & iinside-1(t,t1)] <--> (E t2)[member(t2,s2) & iinside-1(t,t2)]]]] That is, s1 and s2 are equivalent
temporal sequences when any temporal entity inside one is also inside the other. A minimal temporal sequence is one that is minimal in that its intervals are maximal, so that the number of intervals
is minimal. We can view a week as a week or as 7 individual successive days; the first would be minimal. We can go from a nonminimal to a minimal temporal sequence by concatenating intervals that
meet. (A s)[min-tseq(s) <--> (A t1,t2)[member(t1,s) & member(t2,s) --> [t1 = t2 v (E t)[ibefore(t1,t) & ibefore(t,t2) & ~member(t,s)]]]] That is, s is a minimal temporal sequence when any two
distinct intervals in s have a temporal entity not in s between them. A temporal sequence s1 is a minimal equivalent temporal sequence to temporal sequence s if s1 is minimal and equivalent to s. (A
s1,s)[min-equiv-tseq(s1,s) <--> min-tseq(s1) & equiv-tseq(s1,s)] We can now generalize "iinside-1" to the predicate "iinside", which covers both temporal entities and temporal sequences. A temporal
entity is "iinside" a temporal sequence if it is "iinside-1" one of the elements of its minimal equivalent temporal sequence. (A t,s)[iinside(t,s) <--> [temporal-entity(t) & temporal-entity(s) &
iinside-1(t,s)] v [temporal-entity(t) & tseq(s) & (E s1,t1)[min-equiv-tseq(s1,s) & member(t1,s1) & iinside-1(t,t1)]]] We can define a notion of "isubset" on the basis of "iinside". (A s,s0)[isubset
(s,s0) <--> [tseq(s) & tseq(s0) & (A t)[member(t,s) --> iinside(t,s0)]]] That is, every element of temporal sequence s is inside some element of the minimal equivalent temporal sequence of s0. We can
also define a relation of "idisjoint" between two temporal sequences. (A s1,s2)[idisjoint(s1,s2) <--> [tseq(s1) & tseq(s2) & ~(E t,t1,t2)[member(t1,s1) & member(t2,s2) & iinside(t,t1) & iinside
(t,t2)]]] That is, temporal sequences s1 and s2 are disjoint if there is no overlap between the elements of one and the elements of the other. The first temporal entity in a temporal sequence is the
one that is "ibefore" any of the others. (A t,s)[first(t,s) <--> [tseq(s) & member(t,s) & (A t1)[member(t1,s) --> [t1 = t v ibefore(t,t1)]]]] The predicate "last" is defined similarly. (A t,s)[last
(t,s) <--> [tseq(s) & member(t,s) & (A t1)[member(t1,s) --> [t1 = t v ibefore(t1,t)]]]] More generally, we can talk about the nth element of temporal sequence. (A t,s)[nth(t,n,s) <--> [tseq(s) &
member(t,s) & natnum(n) & (E s1)[(A t1)[member(t1,s1) <--> [member(t1,s) & ibefore(t1,t)]] & card(s1) = n-1]]] That is, the nth element of a temporal sequence has n-1 elements before it. It is a
theorem that the first is the nth when n is 1, and that the last is the nth when n is the cardinality of s. (A t,s)[first(t,s) <--> nth(t,1,s)] (A t,s)[last(t,s) <--> nth(t,card(s),s)] Later in this
development it will be convenient to have a predicate "nbetw" that says there are n elements in a sequence between two given elements. (A t1,t2,s,n)[nbetw(t1,t2,s,n) <--> (E s1)[card(s1) = n & (A t)
[member(t,s1) <--> ibefore(t1,t) & ibefore(t,t2) & member(t,s)]]] It may sometimes be of use to talk about the convex hull of a temporal sequence. (A t,s)[convex-hull(t,s) <--> [tseq(s) & interval(t)
& (A t1)[first(t1,s) --> int-starts(t1,t)] & (A t2)[last(t2,s) --> int-finishes(t2,t)]]] Note, however, that we cannot simply dispense with temporal sequences and talk only about their convex hulls.
"Every Monday in 2003" has as its convex hull the interval from January 6 to December 29, 2003, but if we use that interval to represent the phrase, we lose all the important information in the
notice "The group will meet every Monday in 2003." The predicate "ngap" will enable us to define "everynthp" below. Essentially, we are after the idea of a temporal sequence s containing every nth
element of s0 for which p is true. The predicate "ngap" holds between two elements of s and says that there are n-1 elements between them that are in s0 and not in s for which p is true. (A
t1,t2,s,s0,p,n) [ngap(t1,t2,s,s0,p,n) <--> [member(t1,s) & member(t2,s) & tseqp(s,p) & tseq(s0) & isubset(s,s0) & natnum(n) & (E s1)[card(s1) = n-1 & idisjoint(s,s1) & (A t)[member(t,s1) <-->
[iinside(t,s0) & p(t) & ibefore(t1,t) & ibefore(t,t2)]]]]] The predicate "everynthp" says that a temporal sequence s consists of every nth element of the temporal sequence s0 for which property p is
true. It will be useful in describing temporal aggregates like "every third Monday in 2001". (A s,s0,p,n)[everynthp(s,s0,p,n) <--> [tseqp(s,p) & tseq(s0) & natnum(n) & (E t1)[first(t1,s) & ~(E t)
[iinside(t,s0) & ngap(t,t1,s,s0,p,n)]] & (E t2)[last(t2,s) & ~(E t)[iinside(t,s0) & ngap(t2,t,s,s0,p,n)]] & (A t1)[member(t1,s) --> [last(t1,s) v (E t2) ngap(t1,t2,s,s0,p,n)]]]] That is, the first element in s has no p
element n elements before it in s0, the last element in s has no p element n elements after it, and every element but the last has a p element n elements after it. The variable for the temporal
sequence s0 is, in a sense, a context parameter. When we say "every other Monday", we are unlikely to mean every other Monday in the history of the Universe. The parameter s0 constrains us to some
particular segment of time. (Of course, that segment could in principle be the entire time line.) The definition of "everyp" is simpler. (A s,s0,p)[everyp(s,s0,p) <--> (A t)[member(t,s) <--> [iinside
(t,s0) & p(t)]]] It is a theorem that every p is equivalent to every first p. (A s,s0,p)[everyp(s,s0,p) <--> everynthp(s,s0,p,1)] We could similarly define "every-other-p", but the resulting simplification from "everynthp(s,s0,p,2)" would not be sufficient to justify it.
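Before turning to the examples below, an executable gloss may help. The following Python sketch (the names and the list-based reading of temporal sequences are ours) realizes one natural interpretation of everynthp over calendar days, including the offset ambiguity noted for "every third Monday in 2001":

    from datetime import date, timedelta

    def everynthp(s0, p, n, offset=0):
        # Every nth member of s0 satisfying p; offset selects which of the
        # n possible sequences is meant.
        hits = [t for t in s0 if p(t)]
        return hits[offset::n]

    year_2001 = [date(2001, 1, 1) + timedelta(days=i) for i in range(365)]
    mondays_by_threes = everynthp(year_2001, lambda d: d.weekday() == 0, 3)
    print(len(mondays_by_threes), mondays_by_threes[0])   # 18 2001-01-01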
Now we will consider a number of English expressions for temporal aggregates and show how they would be represented with the machinery we have built up. "Every third Monday in 2001": In Section 4.3, "Monday" is a predicate with two arguments, the second being for the week it is in. Let us define "Monday1" as
describing a Monday in any week. (A d)[Monday1(d) <--> (E w) Monday(d,w)] Then the phrase "every third Monday in 2001" describes a temporal sequence S for which the following is true. (E y,z)[yr
(y,2001,CE(z)) & everynthp(S,{y},Monday1,3)] Note that this could describe any of three temporal sequences, depending on the offset determining the first element of the set. "Every morning for the
last four years": Suppose "nowfn" maps a document d into the instant or interval viewed as "now" from the point of view of that document, and suppose D is the document this phrase occurs in. Suppose
also the predicate "morning" describes that part of each day that is designated a "morning". Then the phrase describes a temporal sequence S for which the following is true. (E T,t1)[duration
(T,*Year*) = 4 & ends(t1,T) & iinside(t1,nowfn(D)) & everyp(S,{T},morning)] "Four consecutive Mondays": This describes a temporal sequence S for which the following is true. (E s0)[everyp
(S,s0,Monday1) & card(S) = 4] "The first nine months of 1997": This describes the temporal sequence S for which the following is true. (E z)(A m)[member(m,S) <--> (E n)[month(m,n,yrFn(1997,CE(z))) & 1 =< n =< 9]] Note that this expression is ambiguous between the set of nine individual months, and the interval spanning the nine months. This is a harmless ambiguity because the minimal equivalent temporal
sequence of the first is the singleton set consisting of the second. "The first full day of competition": For the convenience of this example, let us assume an ontology where "competition" is a
substance or activity type and "full" relates intervals to such types. Then the phrase describes an interval D for which the following is true. (E s,c)[(A d)[member(d,s) <--> (E n,T)[da(d,n,T) & competition(c) & full(d,c)]] & first(D,s)] "Three weekdays after January 10": Suppose the predicate "weekday1" describes any weekday of any week, similar to "Monday1". Then this phrase describes the
temporal aggregate S for which the following is true. (E d,y,T)[da(d,11,moFn(1,y)) & everyp(S,{T},weekday1) & int-starts(d,T) & card(S) = 3] That is, January 11, the day after January 10, starts the
interval from which the three successive weekdays are to be taken. The last of these weekdays is the day D for which "last(D,S)" is true. If we know that January 10 is a Friday, we can deduce that
the end of S is Wednesday, January 15. "The fourth of six days of voting": Let us, for the sake of the example, say that voting is a substance/activity type and can appear as the first argument of
the predicate "during". Then a voting day can be defined as follows: (A d)[voting-day(d) <--> (E v,n,T)[da(d,n,T) & voting(v) & during(v,d)]] Then the phrase describes an interval D for which the
following is true. (E s,s0)[everyp(s,s0,voting-day) & card(s) = 6 & nth(D,4,s)] Bettini et al.'s (2002) concept of granularity is simply a temporal sequence in our terminology. All of the examples they
give are uniform temporal sequences. For example, their "hour" granularity within an interval T is the set S such that "everyp(S,{T},hr1)", where "hr1" is to "hr" as "Monday1" is to "Monday". Their
notion of one granularity "grouping into" another can be defined for temporal sequences. (A s1,s2)[groups-into(s1,s2) <--> tseq(s1) & tseq(s2) & isubset(s1,s2) & (A t)[member(t,s2) --> (E s)[subset(s,s1) & min-equiv-tseq({t},s)]]] That is, temporal sequence s1 groups into temporal sequence s2 if every element of s2 is made up of a concatenation of elements of s1 and nothing else is in s1.
Bettini et al. also define a notion of "groups-periodically-into", relative to a period characterized by an integer r. Essentially, every r instances of a granule in the coarser granularity groups a subset of the same number of granules in the finer granularity. (A s1,s2,r)[groups-periodically-into(s1,s2,r) <--> groups-into(s1,s2) & natnum(r) & (A t1,t2,s3)[[member(t1,s2) & member(t2,s2) & nbetw(t1,t2,s2,r-1) & subset(s3,s1) & groups-into(s3,{t1})] --> (E s4)[subset(s4,s1) & groups-into(s4,{t2}) & card(s3) = card(s4)]]] To know the time of an event down to a granularity of one clock hour (in
the context of S0) is to know which element it occurred during in the set S such that "everyp(S,S0,hr1)". A transitive granularity, as defined in Section 5, is a temporal sequence. 6.2. Durations of
Temporal Aggregates The concept of "duration", defined in Section 3, can be extended to temporal sequences in a straightforward manner. If a temporal sequence is the empty set, Phi, its duration is
zero. (A u)[temporal-unit(u) --> duration(Phi,u) = 0] The duration of a singleton set is the duration of the temporal entity in it. (A t,u)[temporal-entity(t) & temporal-unit(u) --> duration({t},u) =
duration(t,u)] The duration of the union of two disjoint temporal sequences is the sum of their durations. (A s1,s2,u)[tseq(s1) & tseq(s2) & idisjoint(s1,s2) & temporal-unit(u) --> duration(s1 U
s2,u) = duration(s1,u) + duration(s2,u)] We need to use the predicate "idisjoint" to ensure that there is no overlap between intervals in s1 and intervals in s2. The duration of the convex hull of a
temporal sequence is of course not the same as the duration of the temporal sequence. Sometimes one notion is appropriate, sometimes the other. For determining what workers hired on an hourly basis
should be paid, we want to know the duration of the temporal sequence of the hours that they worked, whereas for someone on an annual salary, the appropriate measure is the duration of its convex
hull. It is a theorem that the duration of the convex hull of a temporal sequence is at least as great as that of the temporal sequence. (A t,s,u)[convex-hull(t,s) --> duration(t,u) >= duration(s,u)]
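As an illustration, representing intervals as (begin, end) pairs in a common unit (our encoding, not the ontology's), the two duration notions and the theorem just stated look like this in Python:

    def tseq_duration(s):
        # Sum of member durations; members assumed pairwise disjoint.
        return sum(e - b for b, e in s)

    def hull_duration(s):
        # Duration of the convex hull: last end minus first begin.
        return max(e for _, e in s) - min(b for b, _ in s)

    hours_worked = [(9, 12), (13, 17)]          # an hour's lunch break
    assert tseq_duration(hours_worked) == 7     # what the hourly worker is paid for
    assert hull_duration(hours_worked) == 8     # always >= the sequence's duration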
6.3. Duration Arithmetic (An example to be treated here: "five business days after January 8, 2003.") 6.4. Rates 7. Deictic Time Deictic temporal concepts, such as ``now'', ``today'', ``tomorrow night'', and ``last year'', are more
common in natural language texts than they will be in descriptions of Web resources, and for that reason we are postponing a development of this domain until the first three are in place. But since
most of the content on the Web is in natural language, ultimately it will be necessary for this ontology to be developed. It should, as well, mesh well with the annotation standards used in automatic
tagging of text. We expect that the key concept in this area will be a relation _now_ between an instant or interval and an utterance or document. now(t,d) The concept of "today" would also be
relative to a document, and would be defined as follows: today(T,d) <--> (E t,n,x)[now(t,d) & begins-or-in(t,T) & da(T,n,x)] That is, T is today with respect to document d if and only if there is an
instant t in T that is now with respect to the document and T is a calendar day (and thus the nth calendar day in some interval x). Present, past and future can be defined in the obvious way in terms
of now and before. Another feature of a treatment of deictic time would be an axiomatization of the concepts of last, this, and next on anchored sequences of temporal entities. 8. Vague Temporal
Concepts In natural language a very important class of temporal expressions are inherently vague. Included in this category are such terms as "soon", "recently", and "a little while". These require
an underlying theory of vagueness, and in any case are probably not immediately critical for the Semantic Web. This area will be postponed for a little while.
References
Allen, J.F. (1984). Towards a general theory of action and time. Artificial Intelligence 23, pp. 123-154.
Allen, J.F., and H.A. Kautz (1985). A model of naive temporal reasoning. In J.R. Hobbs and R.C. Moore (eds.), Formal Theories of the Commonsense World, Ablex Publishing Corp., pp. 251-268.
Allen, J.F., and P.J. Hayes (1989). Moments and points in an interval-based temporal logic. Computational Intelligence 5, pp. 225-238.
Allen, J.F., and G. Ferguson (1997). Actions and events in interval temporal logic. In O. Stock (ed.), Spatial and Temporal Reasoning, Kluwer Academic Publishers, pp. 205-245.
Bettini, C., X.S. Wang, and S. Jajodia (2002). Solving multi-granularity temporal constraint networks. Artificial Intelligence 140, pp. 107-152.
Fikes, R., and Q. Zhou. A reusable time ontology.
Hobbs, J. (1985). Granularity. IJCAI-85, or http://www.ai.sri.com/~hobbs/granularity-web.pdf | {"url":"http://www.cs.rochester.edu/~ferguson/daml/daml-time-nov2002.txt","timestamp":"2014-04-21T07:04:25Z","content_type":null,"content_length":"90864","record_id":"<urn:uuid:6dec351c-3bfb-46ce-978d-4ac2438b5097>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of integer combinations x_1 < ... < x_n ?
I asked this question earlier on math.stackexchange.com but didn't get an answer:
Let $0 < a_1 < ... < a_n$ be integers. Is there a closed formula (or some other result) for the number $N(a_1,...,a_n)$ of integer combinations $0 < x_1 < ... < x_n$ such that $x_i \le a_i$ $(i=
1,...,n)$ ?
Of course, if $a_i = a_n - n + i$ for all $i$, then $N = \binom{a_n}{n}$.
I considered the following model: Let $B_1 \subseteq ... \subseteq B_n$ be nested boxes. $B_i$ contains $a_i$ balls that are labeled by $1,...,a_i$. Choose one ball from each box (without repetition)
and afterwards sort the balls. Then $N$ equals the number of different combinations that can be obtained in this way.
Example: $n=3$ For the chosen balls $b_i \in B_i$ there are the following possibilities:
1) $b_3 \in B_3 \setminus B_2$, $b_2 \in B_2 \setminus B_1$, $b_1 \in B_1$. The balls are already sorted and there are $(a_3-a_2)(a_2-a_1)a_1$ possibilities.
2) $b_3,b_2 \in B_2 \setminus B_1$, $b_1 \in B_1$. After sorting $b_2,b_3$ there are $\frac{(a_2-a_1)(a_2-a_1-1)}{2!}\cdot a_1$ possibilities.
3) $b_3,b_2,b_1 \in B_1$. After sorting there are $\frac{a_1 (a_1-1)(a_1-2)}{3!}$ possibilities.
4) $b_3 \in B_3 \setminus B_2$, $b_2,b_1 \in B_1$. After sorting $b_1, b_2$ there are $(a_3-a_2) \cdot \frac{a_1 (a_1-1)}{2!}$ possibilities.
5) $b_3 \in B_2 \setminus B_1$, $b_1,b_2 \in B_1$. After sorting $b_1, b_2$ there are $(a_2-a_1) \cdot \frac{a_1 (a_1-1)}{2!}$ possibilities.
Generalizing this pattern yields the formula
$$N(a_1,...,a_n) = \sum_{\nu} \prod_{i=1}^n \binom{a_i-a_{i-1}}{\nu_i}$$
$(a_0 := 0)$ where $\nu_i$ is the number of balls chosen from $B_i \setminus B_{i-1}$. The sum is taken over all $\nu=(\nu_1,...,\nu_n)$ such that $0 \le \nu_i \le n-i+1$ and $\nu_1 + ... + \nu_n = n$.
But this is far from a closed formula. I do not even know the exact number of summands.
Also note that there is a recursion formula
$$N(a_1,...,a_n) = N(a_1,...,a_{n-1}) + N(a_1,...,a_{n-1},a_n -1)$$
but I wasn't able to guess a closed form thereof.
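For what it is worth, the recursion is easy to run and check against brute force; here is a short Python sketch (the tightening loop is mine, to keep the bounds strictly increasing after the decrement):

    from functools import lru_cache
    from itertools import combinations

    @lru_cache(maxsize=None)
    def N(a):
        # Number of integer tuples 0 < x_1 < ... < x_n with x_i <= a_i.
        if not a:
            return 1
        b = list(a)
        for i in range(len(b) - 1, 0, -1):   # tighten: x_{i-1} < x_i <= b_i
            b[i - 1] = min(b[i - 1], b[i] - 1)
        if b[0] < 1:
            return 0
        if len(b) == 1:
            return b[0]
        # split on x_n = b_n versus x_n <= b_n - 1
        return N(tuple(b[:-1])) + N(tuple(b[:-1]) + (b[-1] - 1,))

    def brute(a):
        return sum(1 for xs in combinations(range(1, a[-1] + 1), len(a))
                   if all(x <= ai for x, ai in zip(xs, a)))

    assert all(N(a) == brute(a) for a in [(2, 3, 5), (1, 4, 6), (3, 4, 5, 9)])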
Edit: Thank you all very much for your answers. Each one deserves to be accepted. Unfortunately this isn't possible in MO. I therefore accepted William's since Proctor's formula in the linear case
seems to be most helpful in the application I have in mind.
co.combinatorics pr.probability
Possible duplicate mathoverflow.net/questions/83714/combination-and-probability . Gjergji's suggestion there may be worth researching. Gerhard "Ask Me About System Design" Paseman, 2012.01.17 –
Gerhard Paseman Jan 17 '12 at 21:55
Gerhard, do you know about the "determinantal formula" mentioned by Gjergji ? In the wikipedia article (and some other) I only read something about a determinant that is related to a graph. How do
I construct the graph from my data ? – Ralph Jan 17 '12 at 22:30
I'm sorry, I don't know. If you edit this question so as to provide a reference request (after doing some searches yourself), you may get some more knowledgable answers, and likely the other
question (not this one) will be closed as a duplicate. Gerhard "Ask Me About System Design" Paseman, 2012.01.17 – Gerhard Paseman Jan 17 '12 at 22:35
3 Answers
Robin Pemantle and Herb Wilf give a short recurrence as an answer to this question, and a more compact formula when the sequence $a_n$ is linear, in a freely available paper from the
EJC in 2009: vol. 16 (2009), #R60, "Counting Nondecreasing Integer Sequences that Lie Below a Barrier." Link: http://www.combinatorics.org/Volume_16/PDF/v16i1r60.pdf .
Thanks for the link. Proctor's formula in the linear case is really helpful. – Ralph Jan 18 '12 at 21:39
The formula that Pemantle and Wilf attribute to Proctor is equivalent to a much older formula: the number of paths in the plane from the origin to the point $(a,b)$, where $a > pb$,
that stay strictly below the line $x=py$, with steps $(1,0)$ and $(0,1)$, is $$\frac{a-pb}{a+b}\binom{a+b}{a}.$$ This formula was apparently first stated by E. Barbier in 1887. A
reference is Marc Renault, Four Proofs of the Ballot Theorem, Mathematics Magazine 80 (2007), 345--352; available online at webspace.ship.edu/msrenault/ballotproblem/…. – Ira Gessel
Jan 19 '12 at 0:32
The simplest formula is the determinant $$ \left| \binom{a_i+j-i}{ j-i+1}\right|_{i,j=1,\dots,n}. $$ For example, when $n=3$ this is $$ \begin{vmatrix} \displaystyle \binom{a_1}{1}&\
displaystyle \binom{a_1+1}{2}&\displaystyle \binom{a_1+2}{3}\\ 1 &\displaystyle \binom{a_2}{1}&\displaystyle \binom{a_2+1}{2}\\ 0 & 1 &\displaystyle \binom{a_3}{1} \end{vmatrix} $$ In
general there are 1's in the diagonal below the main diagonal and 0's below that, so when expanded there are $2^{n-1}$ terms. It is unlikely that there is a simpler formula.
This formula is most easily proved by inclusion-exclusion. We start with the set of positive integer sequences $(x_1,\dots, x_n)$ satisfying $1\le x_i\le a_i$ for each $i$ and use
inclusion-exclusion to count those sequences satisfying none of the conditions $x_i\ge x_{i+1}$, using the fact that it's easy to count the sequences satisfying any subset of them. For
example, with $n=3$ the determinant expands to $$a_1a_2a_3 - a_1\binom{a_2+1}{2} -\binom{a_1+2}{2} a_3 + \binom{a_1+2}{3}.$$ Here the term $a_1\binom{a_2+1}{2}$, for example, counts
sequences $(x_1,x_2,x_3)$ satisfying $a_1\ge x_1\ge 1$ and $a_2\ge x_2\ge x_3\ge 1$. The crucial fact that makes this work is that since $a_2 < a_3$, the condition $a_2\ge x_2\ge x_3\ge 1$
implies that $a_3\ge x_3$.
As Vladimir noted, an equivalent formula replaces strong with weak inequalities. This formula (in a more general form) was given by B. R. Handa and S. G. Mohanty, ``On $q$-binomial
coefficients and some statistical applications," SIAM J. Math. Anal. 11 (1980), 1027--1035. A recent proof of their formula, with further references, is in my paper with Nicholas Loehr, Note
on enumeration of partitions contained in a given shape, Linear Algebra Appl. 432 (2010), 583--585. A preprint version can be found on my home page, http://people.brandeis.edu/~gessel/
While I'm plugging my own papers, I'll note a paper of mine closely related to the paper of Pemantle and Wilf that William mentioned: A probabilistic method for lattice path enumeration, J.
Statist. Plann. Inference 14 (1986), 49--58, also available on my home page.
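As a quick numerical check, the determinant formula agrees with brute-force enumeration for small cases; a Python sketch (Leibniz expansion is plenty at these sizes, and math.comb needs Python 3.8+):

    from itertools import permutations, combinations
    from math import comb

    def det(M):
        n = len(M)
        total = 0
        for p in permutations(range(n)):
            sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
            prod = 1
            for i in range(n):
                prod *= M[i][p[i]]
            total += sign * prod
        return total

    def N_det(a):
        # |C(a_i + j - i, j - i + 1)| with 0's below the subdiagonal.
        n = len(a)
        M = [[comb(a[i] + j - i, j - i + 1) if j - i + 1 >= 0 else 0
              for j in range(n)] for i in range(n)]
        return det(M)

    def brute(a):
        return sum(1 for xs in combinations(range(1, a[-1] + 1), len(a))
                   if all(x <= ai for x, ai in zip(xs, a)))

    assert all(N_det(a) == brute(a) for a in [(2, 3, 5), (1, 4, 6), (2, 5, 7, 8)])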
Dear Ira, I was wondering, - is there any representation-theoretic interpretation of this number (as a dimension of a naturally defined subspace in some construction over the Hilbert
scheme etc. - in the spirit of the (Fuss-)Catalan case)? – Vladimir Dotsenko Jan 18 '12 at 20:26
Thank you very much for your determinantal formula and the enlightening explanation in case $n=3$. It's interesting to see how this approach (that is in some sense opposite to the
"intuitive way" of trying to count $x_1 < x_2 \le a_2$ for given $x_1$) solves the problem. – Ralph Jan 18 '12 at 21:40
BTW: Do you know if there is a sign-free formula for the number the combinations ? In particular, do you know if there is a formula similar to my sum formula above in the literature ?
Thanks in advance! – Ralph Jan 18 '12 at 21:42
Vladimir: I don't know of any representation-theoretic interpretation of this number. This formula is a special case of a determinant formula for counting plane partitions of a given
shape with upper and lower bounds for the parts in each row (it's the case of one column). This more general formula is related to "flagged Schur functions", which are related to Schubert
polynomials, but that's as far as my knowledge goes in this direction. Ralph: I have not seen a formula like your sum in the literature, and I don't know of a sign-free formula. – Ira
Gessel Jan 18 '12 at 23:37
Thanks! This nice determinantal formula also happens to be very useful for computations towards another recent Mathoverflow question (mathoverflow.net/questions/85409). – Noam D. Elkies
Jan 20 '12 at 3:13
(This is a bit too long for a comment, though not exhaustive at all.)
This number, especially if you make appropriate changes of your notation to replace $< $ by $\le$ (replace $a_i$ by $a_i-i$, and $x_i$ by $x_i-i$, that is), admits an interpretation in
terms of Young lattice (inclusion partial order on Young diagrams), or, equivalently, in terms of lattice paths below the graph of $i\to a_i$).
In addition to the binomial coefficient example (in these updated terms it is the number of Young diagrams inside a rectangle, or lattice paths inside a rectangle, so manifestly a binomial coefficient), the answer which is very well known applies to the diagram $(n,n-1,\ldots,1)$, when it is the $n$th Catalan number, and more generally, for $(kn,k(n-1),\ldots,k)$, when it is
the $n$th Fuss–Catalan number. This altogether suggests that there might be some hook-length-kind formula which I am missing, and maybe this incomplete answer will prompt someone who knows that formula to explain it...
Thanks for drawing my attention to Young lattices. Also the transformation from $<$ to $\le$ as you explained will be helpful in translating Proctor's formula. – Ralph Jan 18 '12 at 21:38
| {"url":"http://mathoverflow.net/questions/85927/number-of-integer-combinations-x-1-x-n/86007","timestamp":"2014-04-21T09:58:53Z","content_type":null,"content_length":"76278","record_id":"<urn:uuid:c14025b9-61ea-4f40-976b-522933141418>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of Applied Mathematics
Volume 2013 (2013), Article ID 274931, 5 pages
Research Article
Reconsiderations on the Equivalence of Convergence between Mann and Ishikawa Iterations for Asymptotically Pseudocontractive Mappings
Department of Mathematics and Physics, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
Received 28 December 2012; Accepted 11 June 2013
Academic Editor: D. R. Sahu
Copyright © 2013 Haizhen Sun and Zhiqun Xue. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
Our aim in this paper is to illustrate that the proof of the main theorem of Rhoades and Şoltuz (2003) concerning the equivalence between the convergences of Ishikawa and Mann iterations for uniformly
-Lipschitzian asymptotically pseudocontractive maps is incorrect and to provide its correct version.
1. Introduction and Preliminary
In 2003, Rhoades and Şoltuz [1] proved the equivalence between convergences of Ishikawa and Mann iterations for an asymptotically pseudocontractive map. This result provided significant improvements
of recent some important results. Their result is as follows.
Theorem R-S (see [1, Theorem 8]). Let be a closed convex subset of an arbitrary Banach space and and defined by (3) and (4) with and satisfying (5). Let be an asymptotically pseudocontractive and
Lipschitzian map with selfmap of . Let be the fixed point of . If , the following two assertions are equivalent:(i)Mann type iteration (3) converges to ,(ii)Ishikawa iteration (4) converges to .
However, after careful reading of the paper of Rhoades and Şoltuz [1], we find that there exists a serious gap in the proof of Theorem 8 of [1], which happens to be main theorem of the paper. Note:
in the proof of Theorem 8 of [1] the following mistakes occurred. “Using (6) with ” in line 19 of page 684 cannot obtain
The reason is that the following conditions are not equivalent:(a1) is asymptotically pseudocontractive map,(a2), where and are from (1).
The aim of this paper is for us to provide its correct version. For this, we need the following definitions and lemmas.
Throughout this paper, suppose that is an arbitrary real Banach space and is a nonempty closed convex subset of . Let denote the normalized duality mapping from to defined by where , , and denote the
dual space of , the generalized duality pairing, and the single-valued normalized duality mapping, respectively.
Definition 1 (see [1]). Let be a mapping.
is called uniformly -Lipschitz if there is a constant such that, for all ,
is called asymptotically nonexpansive with a sequence and if for each such that
is called asymptotically pseudocontractive map with a sequence and if, for each , there exists such that
Obviously, an asymptotically nonexpansive mapping is both asymptotically pseudocontractive and uniformly -Lipschitz. Conversely, it is not true in general.
Definition 2 (see [2]). For arbitrary given , the sequences defined by are called modified Mann and Ishikawa iterations, respectively, where , are two real sequences of and satisfy some conditions.
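The displayed equations were lost when this page was converted to text; for orientation, the schemes conventionally written as (6) and (7) are the modified Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T^n x_n and the two-step modified Ishikawa iteration. A minimal numerical sketch under that reading (our notation; T = cos is merely a convenient map with a unique fixed point, not one of the paper's mapping classes):

    import math

    def power(T, n, x):
        for _ in range(n):
            x = T(x)
        return x

    def modified_mann(T, x, alphas):
        # x_{n+1} = (1 - a_n) x_n + a_n T^n x_n
        for n, a in enumerate(alphas, start=1):
            x = (1 - a) * x + a * power(T, n, x)
        return x

    def modified_ishikawa(T, x, alphas, betas):
        # y_n = (1 - b_n) x_n + b_n T^n x_n;  x_{n+1} = (1 - a_n) x_n + a_n T^n y_n
        for n, (a, b) in enumerate(zip(alphas, betas), start=1):
            y = (1 - b) * x + b * power(T, n, x)
            x = (1 - a) * x + a * power(T, n, y)
        return x

    alphas = [1.0 / (n + 1) for n in range(1, 60)]
    print(modified_mann(math.cos, 1.0, alphas))              # both tend toward
    print(modified_ishikawa(math.cos, 1.0, alphas, alphas))  # cos's fixed point ~0.739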
Lemma 3 (see [2]). Let be a real Banach space and be a normalized duality mapping. Then for all and .
Lemma 4 (see [3]). Let be a strictly increasing and continuous function with , and let , , and be three nonnegative real sequences satisfying the following inequality: where with , . Then as .
2. Main Results
Now we prove the following theorem which is the main result of this paper.
Theorem 5. Let be a real Banach space, be a nonempty closed convex subset of , and be a uniformly -Lipschitz asymptotically pseudocontractive mapping with a sequence such that . Let be two real
numbers sequences in and satisfy the conditions (i) as ; (ii). For some , let and be modified Mann and Ishikawa iterative sequences defined by (6) and (7), respectively. If , , and there exists a
strictly increasing continuous function with such that
where , then the following two assertions are equivalent:(1-1) the modified Mann iteration (6) converges strongly to the fixed point of;(1-2) the modified Ishikawa iteration (7) converges strongly
to the fixed point of.
Proof. We only need to prove (1-1) (1-2), that is, as as . Without loss of generality, . Since is a uniformly -Lipschitz, then .
Step 1. For any , is bounded.
Set , for all , , then :
And there exists and such that . Indeed, if as , then, ; if with , then, for , there exists a sequence such that as with . Hence there exists a natural number such that for , and then we redefine and
Set , and then, from , we obtain that
Denote , . Next, we want to prove that . If , then . Now assume that it holds for some ; that is, . We prove that . Suppose that it is not the case, and then . Now denote
Because as , without loss of generality, let for any . So we have so
Using Lemma 3 and the above formula, we obtain Since as , without loss of generality, let . Then (17) implies that
and this is a contradiction. Hence ; that is, is a bounded sequence.
Step 2. We show that as .
By Step 1, we obtain that is a bounded sequence, and denote . Applying (6), (7), and Lemma 3, we have Observe that
where as .
Substituting (20) and (21) into (19), we obtain Since as , without loss of generality, we may assume that for any . Then, (22) implies that Let , , . Then (24) leads to By Lemma 4, we obtain . That
is, as . From the inequality , we get as . This completes the proof.
Remark 6. The error in the proof of Theorem 8 of [1] has been pointed out and corrected, but at present it is not easy to obtain what the authors really wanted in the proof of Theorem 8 of [1].
Remark 7. The proof method of Theorem 5 is quite different from that of [1] and others.
This work was supported by Hebei Provincial Natural Science Foundation (Grant no. A2011210033). The authors thank the reviewers for their good suggestions and valuable comments on the paper.
1. B. E. Rhoades and S. M. Şoltuz, "The equivalence between the convergences of Ishikawa and Mann iterations for an asymptotically pseudocontractive map," Journal of Mathematical Analysis and Applications, vol. 283, no. 2, pp. 681–688, 2003.
2. S. S. Zhang, "Iterative approximation problem of fixed points for asymptotically nonexpansive mappings in Banach spaces," Acta Mathematicae Applicatae Sinica, vol. 24, no. 2, pp. 236–241, 2001.
3. C. Moore and B. V. C. Nnoli, "Iterative solution of nonlinear equations involving set-valued uniformly accretive operators," Computers & Mathematics with Applications, vol. 42, no. 1-2, pp. 131–140, 2001. | {"url":"http://www.hindawi.com/journals/jam/2013/274931/","timestamp":"2014-04-16T16:55:45Z","content_type":null,"content_length":"420550","record_id":"<urn:uuid:03f584c8-7eef-4b3a-9c59-0076e0ef6e1d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
D. J. Bernstein
Authenticators and signatures
A state-of-the-art public-key signature system
The design of this signature system evolved as follows:
• The general idea of signature systems: 1976 Diffie Hellman.
• Roots modulo n as signatures: 1977 Rivest Shamir Adleman; independently Rabin, unpublished.
• Hashing: 1979 Rabin. Finally a system that was hard to break.
• Small exponent: 1979 Rabin. Fast verification.
• Exponent 2: 1979 Rabin. Faster verification.
• Message prefix r in signature: 1979 Rabin. In retrospect, allows tight security proofs.
• Extra factors e and f, so all r's work: 1980 Williams. Faster signing.
• Choosing r as a function of z and m: 1997 Barwood; independently 1997 Wigley. Deterministic signing.
• Signature expansion: 1997 Bernstein. Even faster verification. (Shamir stated in 2003 that he had come up with the idea earlier, and had announced it in a talk, but had not published it. So I
originally said ``1997 Bernstein; independently Shamir, unpublished.'' But other people remember Shamir announcing a different idea. In response, Shamir promised to send me slides from his talk,
and said that those slides actually included both ideas. I haven't received those slides. Is this another example of Shamir's well-known habit of misrepresenting his results? If not, why hasn't
Shamir sent me those slides?)
• Signing without Euclid: 2000 Bernstein. Simpler signing.
• Signature compression to 1/2 size: 2003? Bleichenbacher.
• Key compression to 1/3 size: 2003 Coppersmith.
• Small r: 2003 Katz Wang. Shorter signatures.
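For readers who want the shape of the scheme these bullets trace, here is a deliberately toy Python sketch of the Rabin-style core: a hash of a prefix r plus the message, square roots modulo n as signatures, and exponent 2 for fast verification. It is not the system above: the parameters are absurdly small, SHA-256 is a stand-in hash, and the Williams factors e and f that make every r work are omitted, so this sketch simply retries r instead.

    import hashlib

    p, q = 7027, 7043            # toy primes, both 3 mod 4; real moduli are ~1537 bits
    n = p * q

    def H(r, m):                 # stand-in for the system's hash
        return int.from_bytes(hashlib.sha256(r + m).digest(), "big") % n

    def sign(m):
        r = 0
        while True:              # retry r until H(r,m) is a square mod p and mod q
            rb = r.to_bytes(4, "big")
            h = H(rb, m)
            if pow(h, (p - 1) // 2, p) == 1 and pow(h, (q - 1) // 2, q) == 1:
                sp = pow(h, (p + 1) // 4, p)   # square roots, since p, q are 3 mod 4
                sq = pow(h, (q + 1) // 4, q)
                s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n   # CRT
                return rb, s
            r += 1

    def verify(m, rb, s):        # exponent 2: verification is one modular squaring
        return s * s % n == H(rb, m)

    rb, s = sign(b"hello")
    assert verify(b"hello", rb, s)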
I'll take the blame for any problems with the current parameters (1537-bit moduli; H0; H1). | {"url":"http://cr.yp.to/sigs/credits.html","timestamp":"2014-04-21T10:10:11Z","content_type":null,"content_length":"2020","record_id":"<urn:uuid:98159b3f-de7c-4350-a153-1b7c9b7c09da>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Folding a piece of paper in half
A few weeks ago, I asked the following question:
“If you fold a (humungous) piece of paper in half 50 times, how thick would the pile be? 1cm, 1m, 1km, 1,000km, 1,000,000km…?”
Here I discuss the possibilities of folding paper in half 50, 25, 15 and then 9 times.
50 folds
How thick?
With each fold, the thickness doubles, from 1 to 2 to 4 to 8 sheets thick and so on. After 50 folds, the pile would be 2^50 sheets thick. Now, knowing 2^10 = 1024 ≈ 1000 = 10^3, and using a year 9
index rule (a^b)^c = a^bc, we get
2^50 = (2^10)^5 ≈ (10^3)^5 = 10^15
Typical printer paper is 0.1mm thick, so the thickness of the pile would be
10^15 x 0.1mm
= 10^14mm
= 10^11 x 10^3mm
= 10^11m
= 10^8 x 10^3km
= 10^8km
= 100,000,000km
This is about two thirds of the distance from the earth to the sun!
How heavy?
Then I started wondering how heavy such a piece of paper would be.
Knowing the thickness of the paper, we then need its length, width and density.
I revisited a link to a report that in 2002, Britney Gallivan, a high school student in the US, had folded a single piece of paper in half 12 times. (Note added 21 March 2014: The record is now 13
folds. See here for details.)
She also derived equations for the minimum length of paper necessary to fold a piece of paper n times, either in alternate directions (N-S, then E-W, then N-S etc) or in a single direction (N-S, N-S
etc) – as she did herself, using a (very long) toilet roll.
Let’s work out the weight of paper needed for single direction folding a piece of paper 50 times…
If t is the thickness of the paper, and n the number of folds, one of Britney’s formulas tells us that the length is
L = π t 4^n/6
(I have simplified the latter formula a little, in a way that makes no difference for large n. See the above article for details.)
For 50 folds, these equations mean that
L ≈ 7×10^25m
(This is more than 10% of the width of the universe!)
The pile should not be too narrow, otherwise it would buckle and crumple. Let’s say the width of the paper should be at least ¼ of the height, that is 2.5×10^10m.
So the volume of paper is
V = length x width x thickness
≈ 7×10^25m × 2.5×10^10m x 10^-4m
≈ 2×10^32 m^3
Paper varies in density, but printer paper is typically 80gsm and (as noted above) is about 0.1mm thick, meaning a density of about 800 kg/m^3. So
Mass = volume x density
≈ 1.6×10^35 kg
This is about 800 times the mass of our sun. The upper limit for stellar size is believed to be about 600 times the mass of the sun, and then only for one formed without elements heavier than helium.
Paper is just under 50% carbon, so the mass of carbon would be about 0.5x1.6×10^35 kg = 8×10^34 kg, still 300 times the mass of the sun. This piece of paper would be the heaviest known
concentration of carbon in the entire universe. So this pile of paper would indeed crumple – it would collapse gravitationally under its own weight and become a supergiant sun, but be surrounded by a
gas nebula uniquely rich in carbon.
So, I should have asked whether it is possible, even in theory, to fold a piece of paper in half 50 times – clearly not.
Incidentally, the term ‘weight’ turns out to be a wildly inappropriate term here, as weight is normally used to mean the pull of the earth on the object. This object would have a mass about 25
billion times that of the earth. I should have said ‘mass’.
25 folds
Let’s step back a little, and make our calculation only totally unrealistic – rather than completely fantastical.
Repeating the above calculations for 25 folds produces
length = 5.9×10^10 m = 59 million km
pile thickness = 3.4 km
width = 840 m (i.e. ¼ of the thickness)
mass = 4×10^12 kg
In 2009, the global production of paper was 377 million tonnes = 3.77×10^11kg.
If all paper produced globally for a decade was made into one piece of paper for this magnificent obsession, then we could fold this piece of paper 25 times. Of course there would be some non-trivial
issues still to deal with:
• The earth is 40075 km around at the equator. The paper would be so long it would wrap around the equator 1471 times.
“I want all 840 of you to stand a metre apart from each other. Now, move towards me, then keep going round the earth 736 times, and stop when you get back to me.”
“Great. Now, I’ll stay here, where the ends meet. You make sure the fold is neat, then pick the paper up there and go 378 times around the earth.”
• The last fold would mean taking a pile of paper 1.7 km thick and about 3.4 km long and lifting it up 1.7 km over the other half of the paper.
• And it must not fall in the water.
I think it’s fair to say it’s not going to happen.
15 folds
Let’s try for something which just might be feasible.
Repeating the above calculations for 15 folds produces
length = 56 km
pile thickness = 3.3 m
width = 82 cm (i.e. ¼ of the thickness)
mass = 3.7 tonnes
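(If you want to check the arithmetic yourself, all three tables drop out of a few lines of Python; the variable names are mine, and the rounding differs slightly from the figures above:)

    from math import pi

    t, rho = 1e-4, 800.0              # 0.1 mm paper, 80 gsm => ~800 kg/m^3

    def fold_stats(n):
        length = pi * t * 4**n / 6    # Britney's single-direction formula, large-n form
        thickness = t * 2**n
        width = thickness / 4         # the "at least a quarter of the height" rule
        mass = length * width * t * rho
        return length, thickness, width, mass

    for n in (50, 25, 15):
        L, th, w, m = fold_stats(n)
        print(f"n={n}: length {L:.2e} m, pile {th:.3g} m, width {w:.3g} m, mass {m:.2e} kg")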
A jumbo roll, as produced in a paper mill, weighs 15 tonnes, and is about 6 metres in width and over 30 km in length. So we would
• cut off a quarter of the roll to make a 1.64m strip 30 km in length, weighing 3.7 tonnes.
• fold it lengthways in half once
• fold it in half 14 more times end over end as normal
There would also be significant issues with allowing for suitable amounts of slack so the paper can fold without scrunching up or, worse, tearing.
It might be possible, but would be an industrial size undertaking.
The humble toilet roll
So what could we do ourselves? Even we can debunk the supposed limit of seven folds.
A toilet roll may be 20 to 30 metres long, so you’ll need 10 to 15 meters to lay out the roll, assuming you immediately fold it double.
I was able to fold a normal toilet roll 9 times, in a scaled down version of what Britney did. Her formula says we need about 14 metres.
Do try this at home. | {"url":"http://mathsevangelist.wordpress.com/2012/10/11/folding-a-piece-of-paper-in-half/","timestamp":"2014-04-19T06:54:23Z","content_type":null,"content_length":"59302","record_id":"<urn:uuid:15ec003e-eedb-47a0-aa63-48f273cb12b8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intel Core 2 Quad Q9550S: A New 65W Quad-Core
Adobe Photoshop CS4 Performance
To measure performance under Photoshop CS4 we turn to the Retouch Artists’ Speed Test. The test does basic photo editing; there are a couple of color space conversions, many layer creations, color
curve adjustment, image and canvas size adjustment, unsharp mask, and finally a gaussian blur performed on the entire image.
The whole process is timed and thanks to the use of Intel's X25-M SSD as our test bed hard drive, performance is far more predictable than back when we used to test on mechanical disks.
Time is reported in seconds and the lower numbers mean better performance. The test is multithreaded and can hit all four cores in a quad-core machine.
Before we get to performance let's look at idle power consumption:
Now let’s look at performance:
The best performers in this benchmark, by far, are the Core i7 processors. If you’re building the ultimate Photoshop machine, Core i7 is what you want. The entry level Core i7-920 is faster than the
Core 2 Extreme QX9770, despite the latter being priced at over $1000.
Note that the Core 2 Quad Q9550S performs identically to a Core 2 Quad Q9500; right in between a Q9650 and a Q9450, just as you’d expect.
Next we measured the average power consumption of the entire machine during the Photoshop benchmark, the results are reported in watts. Lower numbers are better here:
Now we see the benefit of the new -S parts; the Core 2 Quad Q9550S draws less power than any other chip we tested, including the much smaller, cooler running Q9400. While the Q9550S still uses the
original 820M transistor Penryn core, the Q9400 is a smaller 456M transistor part.
While we don’t have a real Q9550 to compare to, if you look at the power consumption of the Q9650 and the Q9450 you can estimate that a Q9550 would be somewhere in between - perhaps around 165W. That
would put the average energy savings of the 9550S at 10W.
We can also look at the maximum power consumed during the course of the test:
The Q9550S’ advantage amounts to around 15W under peak power draw.
Efficiency is equally important, here we’re looking at total energy consumed by the system over the life of the test. Energy consumed takes into account how long the test takes to complete, which
will be shorter on faster machines.
Here the Q9550S is marginally better than its Penryn siblings. There’s about a 5% drop in total energy consumed compared to a Q9650. And this is where the argument for energy efficiency falls short
with the Q9550S; look at the total energy consumed on the Core i7, it blows the Q9550S out of the water.
While the Core i7-920 draws as much as 6% more power than the Q9550S, it also completes the benchmark in nearly 14% less time. If you want the best performance per watt, skip the Core 2 Quad Q9550S
and buy the Core i7-920.
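The arithmetic behind that last point is worth spelling out: energy is average power multiplied by time, so a chip that draws a bit more but finishes much sooner consumes less overall. A quick sketch with illustrative round numbers (not our measured data):

    def energy_kj(avg_watts, seconds):
        return avg_watts * seconds / 1000.0

    baseline = energy_kj(150.0, 24.0)                 # a hypothetical Q9550S-class run
    faster = energy_kj(150.0 * 1.06, 24.0 * 0.86)     # ~6% more power, ~14% less time
    print(faster / baseline)                          # ~0.91: about 9% less energy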
62 Comments
• JPForums - Wednesday, January 28, 2009 - link
In general when people say average, they are talking about the Arithmetic mean.
Arithmetic mean = 1/n*(X1 + ... + Xn)
or in English a list of numbers divided by the number of items in the list.
In your case this would mean summing the list of power measurements taken at one second intervals and dividing by the number of measurements which would be the integer number of seconds. You
could then calculate joules by multiplying that average by the total time.
The only way the sum of power measurements and calculation made by multiplying the average power by the time would be different is if the number of measurements for the sum and the average are
different. In this scenario, the calculation made with more data points would be more accurate (think integration).
So the question becomes: How did you calculate your average? It appears that your average has more data points given that the total test time is measured to 1/10 seconds and your summation was
only one second intervals.
That said, the difference between the summation and the result calculated from the average should be small as you stated. The Q9550S results, for instance, only differs by 113(3244-3131) joules
and the Core i7-920 differs by a mere 86(2818-2732) joules. Even the Phenom X4 9950 only has a delta of 69(5474-5405) joules. However, the Phenom II 940 has a delta of 898(4697-3799) joules.
This massive difference leads me to suspect that either the average power, the total time, or the total energy for this processor was reported incorrectly. If we assume the average power and the
maximum power are the same, then the delta shrinks to 220(4697-4490) joules. Alternately, if we assume that the Phenom II 940 is the same speed as the Phenom 9950, the delta shrinks to 207
(4697-4477) joules. Both of the assumptions seem unreasonable to me, and neither get the delta as small as it should be. So I ask, now that I've presented a reasonable case, please recheck your
total energy numbers as Ryun suggested. Reply
• harijan - Tuesday, January 27, 2009 - link
It still it doesn't make sense. How can it use 4700 joules yet average 157 watts over 24 seconds? Or have a max of 188 Watts?
4697 Joules / 24.2 seconds = 195 Watts average Reply
• Woops, you're completely right :) The issue wasn't with the power measurement but with the performance. The performance data for the run that I measured power under was incorrect. A re-run fixes
that problem. The Q9450 was also impacted slightly.
It's worth mentioning that the performance and power data are taken at two different times. First the performance data, then the power data. The performance during the power run is close but not
always identical to the performance during the performance run. There's going to be some variation depending on the test.
Take care,
Anand Reply
• GourdFreeMan - Wednesday, January 28, 2009 - link
I have to agree. There is something wrong with Anand's methodology. Also, look at his specious reasoning for the difference in processor ranking between his "average" power and the energy
consumed in the Fallout 3 section, where the tests are run for the same time interval. He is measuring total system power, so the improved idle efficiency of the Nehalems should already be
incorporated in those numbers. Average power draw is by definition total energy consumed divided by time interval over which it is consumed. Either taking instantaneous measurements and treating
them as averages for each second or simple human error could be responsible for the discrepancy. Reply
• JPForums - Wednesday, January 28, 2009 - link
I wouldn't call it suspicious, just a flaw in the procedure. If you have a sine wave and a cosine wave at the same frequency, amplitude, and offset measured once per period, one will look much
larger than the other even though they average out to be exactly the same. Likewise, if you have two computers drawing the same average power, but you happen to record one during mostly high
fluctuations and the other during mostly low fluctuations, you'll get two very different results.
You need more samples to get accurate results. The best method would be to record a power graph using the smallest period possible. Then, integrate the power under the curve. Convert the units to
seconds to get energy in joules. Divide by the number of samples to get the average power. Reply
• GourdFreeMan - Wednesday, January 28, 2009 - link
JPForums, I appreciate your efforts to elucidate my remarks to Anand, but I must comment on two things. First, the word I used was "specious" not "suspicious". There is a difference, just as
there is a difference between "average power draw" and "an average of periodically sampled power draws". These two are only guaranteed to coincide if the samples themselves are average powers or
in the limit as the period they are sampled over approaches 0. (The latter remark is directed at your definition of average in the first paragraph of your other post). Reply
• Ryun - Tuesday, January 27, 2009 - link
I was expecting a much bigger delta compared to the 95W quads in wattage. Reply
• Ryun - Tuesday, January 27, 2009 - link
Meant to end with, "Thanks for the review."
So, thanks. =) Reply
• harijan - Tuesday, January 27, 2009 - link
no idle power usage numbers? Reply
• michael2k - Tuesday, January 27, 2009 - link
Unfortunately 65W is still too hot for a 17" MacBook Pro. | {"url":"http://www.anandtech.com/Show/Index/2714?cPage=6&all=False&sort=0&page=2&slug=","timestamp":"2014-04-20T21:58:08Z","content_type":null,"content_length":"75967","record_id":"<urn:uuid:e2a09028-e90d-4547-8434-4c43e37ec3b8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
This module defines Typeable indexes and convenience functions. Should probably be considered private to Data.IxSet.
data Ix a Source
Ix is a Map from some Typeable key to a Set of values for that key. Ix carries type information inside.
forall key . (Typeable key, Ord key) => Ix (Map key (Set a)) (a -> [key])
Typeable1 Ix
(Data ctx a, Sat (ctx (Ix a))) => Data ctx (Ix a)
Data a => Data (Ix a)
insert :: (Ord a, Ord k) => k -> a -> Map k (Set a) -> Map k (Set a)Source
Convenience function for inserting into Maps of Sets as in the case of an Ix. If the key did not already exist in the Map, then a new Set is added transparently.
delete :: (Ord a, Ord k) => k -> a -> Map k (Set a) -> Map k (Set a)Source
Convenience function for deleting from Maps of Sets. If the resulting Set is empty, then the entry is removed from the Map. | {"url":"http://hackage.haskell.org/package/ixset-1.0.3/docs/Data-IxSet-Ix.html","timestamp":"2014-04-17T10:27:17Z","content_type":null,"content_length":"15032","record_id":"<urn:uuid:dfac6a87-a67b-4ce8-b828-19776952197b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra 1B--Check Please!
Posted by Mackenzie on Thursday, June 7, 2012 at 8:40am.
1.) Determine the equation of the axis of symmetry.
y = -5x2 - x + 9
Answer 1.) x=3/2
2.) Determine the equation of the axis of symmetry.
y = -6x2 + 3x - 4
Answer 2.) x=-2/10
• Algebra 1B--Check Please! - MathMate, Thursday, June 7, 2012 at 10:08am
I assume you have not done calculus before.
To find the axis of symmetry of a polynomial of second degree (quadratic), we only have to complete the squares, which will then give the coordinates of the vertex in the form V(h,k), where y = a(x-h)²+k and a is another constant.
Starting with
y = -5x2 - x + 9
we write
=-5(x+1/10)²+5/100 +9
=-5(x+1/10)² +9.05
Therefore h=-1/10, k=9.05, and the vertex is V(-1/10, 9.05), or the axis of symmetry is x=-1/10
I will leave #2 for you as practice.
• Algebra 1B--Check Please! - Mackenzie, Thursday, June 7, 2012 at 10:17am
Thanks! so number two is wrong?
• Algebra 1B--Check Please! - MathMate, Thursday, June 7, 2012 at 10:33am
#2 is not correct.
You can follow the steps above and try again.
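One quick way to double-check answers like these is the formula x = -b/(2a) for y = ax^2 + bx + c. A small sketch using Python's sympy library (assuming it is available):

import sympy as sp

x = sp.symbols('x')

def axis_of_symmetry(expr):
    a, b, _ = sp.Poly(expr, x).all_coeffs()   # coefficients of ax^2 + bx + c
    return -b / (2 * a)

print(axis_of_symmetry(-5*x**2 - x + 9))      # -1/10, matching the worked answer
# The same call on -6*x**2 + 3*x - 4 checks problem 2.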
Energy (from the Greek ἐνέργεια - energeia, "activity, operation", from ἐνεργός - energos, "active, working") is a scalar physical quantity, an attribute of objects and systems that is conserved in nature. In physics textbooks energy is often defined as the ability to do work.
Several different forms of energy, including, but not limited to, kinetic, potential, thermal, gravitational, sound energy, light energy, elastic, electromagnetic, chemical, nuclear, and mass have
been defined to explain all known natural phenomena.
While one form of energy may be transformed to another, the total energy remains the same. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to
any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.
Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a seated passenger in a moving airplane has zero kinetic energy relative
to the airplane, but non-zero kinetic energy relative to the earth.
The word "energy" derives from the Greek ἐνέργεια (energeia), which appears for the first time in the work Nicomachean Ethics of Aristotle in the 4th century BC. In 1021 AD, the Arabian physicist Alhazen, in the Book of Optics, held light rays to be streams of minute energy particles, stating that "the smallest parts of light" retain "only properties that can be treated by geometry and verified by experiment" and that "they lack all sensible qualities except energy". In 1121, al-Khazini, in The Book of the Balance of Wisdom, proposed that the gravitational potential energy of a body varies depending on its distance from the centre of the Earth.
The concept of energy emerged out of the idea of vis viva, which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To
account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter — a view shared by Isaac Newton, although it would be more than a
century until this was generally accepted. In 1807, Thomas Young was the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy"
in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity,
such as momentum.
He [who?] amalgamated all of these laws into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius,
Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan.
During a 1961 lecture for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy:
There is a fact, or if you wish, a law, governing natural phenomena that are known to date. There is no known exception to this law; it is exact so far as we know. The law is called the conservation of energy; it states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number, and when we finish watching nature go through her tricks and calculate the number again, it is the same.
— The Feynman Lectures on Physics
Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is,
energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem).
Energy in various contexts since the beginning of the universe
The concept of energy and its transformations is useful in explaining and predicting most natural phenomena. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often described by entropy (equal energy spread among all available degrees of freedom) considerations, since in practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
The concept of energy is widespread in all sciences.
• In biology, energy is an attribute of the biological structures that is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy is thus often
said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars) and lipids, which release energy when reacted with oxygen.
• In chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of
these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved.
• In geology and meteorology, continental drift, mountain ranges, volcanos, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the planet Earth.
• In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena
(including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into
various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
Energy transformations in the universe over time are characterized by various kinds of potential energy which has been available since the Big Bang, later being "released" (transformed to more active
types of energy such as kinetic or radiant energy), when a triggering mechanism is available.
Familiar examples of such processes include nuclear decay, in which energy is released which was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process
which ultimately uses the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated
into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs. In a slower process, heat from nuclear decay of these atoms in the core of the Earth releases
heat, which in turn may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the heat energy, which may be released to active kinetic
energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store which has been produced ultimately from the same radioactive heat sources.
Thus, according to present understanding, familiar events such as landslides and earthquakes release energy which has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks; but prior to this, it represents energy that has been stored in heavy atoms since the collapse of long-destroyed stars created these atoms.
In another similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of
the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store
of potential energy which can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some
of the fusion energy is then transformed into sunlight. Such sunlight from our Sun may again be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates
from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity). Sunlight also drives many weather
phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some
of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy, when carbon dioxide and water are converted into a
combustible combination of carbohydrates, lipids, and oxygen. Release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for
animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action. Through all of these transformation chains, potential energy stored at the time of the Big
Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy. In all these events, one kind of energy is converted to
other types of energy, including heat.
Regarding applications of the concept of energy
Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
• The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only)
from kinetic energy (which is a function of coordinate time derivatives only). It may also be convenient to distinguish gravitational energy, electric energy, thermal energy, and other forms.
These classifications overlap; for instance thermal energy usually consists partly of kinetic and partly of potential energy.
• The transfer of energy can take various forms; familiar examples include work, heat flow, and advection, as discussed below.
• The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For
example, the important public-service announcement, "Please conserve energy" uses vernacular notions of "conservation" and "energy" which make sense in their own context but are utterly
incompatible with the technical notions of "conservation" and "energy" (such as are used in the law of conservation of energy).
In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the
energy-momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).
Energy transfer
Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that by definition of energy the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as:
$\Delta E = W$ (1)
if there are no other energy-transfer processes involved. Here $\Delta E$ is the amount of energy transferred, and $W$ represents the work done on the system.
More generally, the energy transfer can be split into two categories:
$\Delta E = W + Q$ (2)
where $Q$ represents the heat flow into the system.
There are other ways in which an open system can gain or lose energy. In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which
potentials are then extracted (both of these process are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Winding a clock would be
adding energy to a mechanical system. These terms may be added to the above equation, or they can generally be subsumed into a quantity called "energy addition term $E$" which refers to any type of
energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a
system, or energy from a laser beam adds to system energy, without being either work done or heat added, in the classic senses).
$\Delta E = W + Q + E$ (3)
where E in this general equation represents other additional advected energy terms not covered by work done on a system, or heat added to it.
Energy is also transferred from potential energy ($E_p$) to kinetic energy ($E_k$) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed, so the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
$E_{pi} + E_{ki} = E_{pF} + E_{kF}$
The equation can then be simplified further since $E_p = mgh$ (mass times the acceleration due to gravity times the height) and $E_k = \frac{1}{2}mv^2$ (half times mass times velocity squared). Then the total amount of energy can be found by adding $E_p + E_k = E_{total}$.
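A minimal numerical sketch of this bookkeeping (hypothetical values, with g taken as 9.81 m/s²) for an object in free fall, written in Python:

g = 9.81    # m/s^2, gravitational acceleration
m = 2.0     # kg, hypothetical mass
h0 = 10.0   # m, starting height, released from rest

for t in (0.0, 0.5, 1.0, 1.4):
    v = g * t                  # speed after time t
    h = h0 - 0.5 * g * t**2    # height after time t
    E_p = m * g * h
    E_k = 0.5 * m * v**2
    print(t, round(E_p + E_k, 6))   # the same total m*g*h0 prints each time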
Energy and the laws of motion
In classical mechanics, energy is a conceptually and mathematically useful property since it is a conserved quantity.
The Hamiltonian
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.
The Lagrangian
Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. In non-relativistic physics, the Lagrangian is the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (like systems with friction).
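For concreteness, for a single particle in one dimension with kinetic energy T and potential energy V (standard textbook definitions, stated here for illustration):
$H(p,q) = T + V = \frac{p^2}{2m} + V(q)$, while $L(q,\dot{q}) = T - V = \frac{1}{2}m\dot{q}^2 - V(q)$
Hamilton's equations, $\dot{q} = \partial H/\partial p$ and $\dot{p} = -\partial H/\partial q$, then reproduce Newton's second law for this system.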
Energy and thermodynamics
Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is related to the molecular structure and the degree of molecular activity and may be viewed as the sum of the kinetic and potential energies of the molecules; it comprises the following types of energy:
Sensible energy – the portion of the internal energy of a system associated with the kinetic energies (molecular translation, rotation, and vibration; electron translation and spin; and nuclear spin) of the molecules.
Latent energy – the internal energy associated with the phase of a system.
Chemical energy – the internal energy associated with the different kinds of aggregation of atoms in matter.
Nuclear energy – the tremendous amount of energy associated with the strong bonds within the nucleus of the atom itself.
Energy interactions – those types of energies not stored in the system (e.g. heat transfer, mass transfer, and work), but which are recognized at the system boundary as they cross it, and which represent gains or losses by a system during a process.
Thermal energy – the sum of sensible and latent forms of internal energy.
The laws of thermodynamics
According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved, and that heat is included as a form of energy transfer. A commonly-used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g. a cylinder-full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given by:
$\mathrm{d}E = T\,\mathrm{d}S - P\,\mathrm{d}V$,
where the first term on the right is the heat transfer into the system, defined in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is
heated); and the last term on the right hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work
to be done on it and so the volume change, dV, is negative when work is done on the system). Although this equation is the standard textbook example of energy conservation in classical thermodynamics, it is highly specific: it ignores all chemical, electric, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat, and it contains a term that depends on temperature. The most general statement of the first law (i.e., conservation of energy) is valid even in situations in which temperature is undefinable.
Energy is sometimes expressed as:
$\mathrm{d}E = \delta Q + \delta W$,
which is unsatisfactory because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.
Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and alternatively at two other points it is entirely potential. Over the whole cycle, or over many cycles, net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom.
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts
of a system. When an isolated system is given more degrees of freedom (= is given new available energy states which are the same as existing states), then total energy spreads over all available
degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.
Oscillators, phonons, and photons
In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types.
In a solid, thermal energy (often referred to loosely as heat content) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy
is equally kinetic and potential.
In an ideal gas, the interaction potential between particles is essentially the delta function which stores no energy: thus, all of the thermal energy is kinetic.
Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic
energy is considered kinetic and the electric energy considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.
1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that radiation energy can be considered equally
potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.
2. On the other hand, in the key equation $m^2c^4 = E^2 - p^2c^2$, the contribution $mc^2$ is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is $p^2/2m$ at speeds much smaller than c, as can be proved by writing $E = mc^2\sqrt{1 + p^2m^{-2}c^{-2}}$ and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.
The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion.
For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.
Work and virtual work
Work is roughly force times distance. But more precisely, it is
$W = \int \mathbf{F} \cdot \mathrm{d}\mathbf{s}$
This says that the work ($W$) is equal to the integral (along a certain path) of the force; for details see the mechanical work article.
Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the
person swinging the bat, considerable work is done on the ball.
Quantum mechanics
In quantum mechanics energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. It thus can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for an electromagnetic wave in a vacuum, the resulting energy states are related to the frequency by Planck's relation
$E = h\nu$
(where $h$ is Planck's constant and $\nu$ the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
Relativity
When calculating kinetic energy (the work to accelerate a massive body from zero speed to some finite speed) relativistically - using Lorentz transformations instead of Newtonian mechanics - Einstein discovered an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest mass energy - energy which every massive body must possess even when at rest. The amount of energy is directly proportional to the mass of the body:
$E = mc^2$,
where
m is the mass,
c is the speed of light in vacuum,
E is the rest mass energy.
For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass)
remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons which individually are massless, but as a system retain their mass. This is a
reversible process - the inverse process is called pair creation - in which the rest mass of particles is created from energy of two (or more) annihilating photons.
In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian
It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has inertia and gravity equivalent, and because mass is a form of energy, then
mass too has inertia and gravity associated with it.
Measurement
There is no absolute measure of energy, because energy is defined as the work that one system does (or can do) on another. Thus, only the transition of a system from one state into another can be defined and thus measured.
The methods for the measurement of energy often deploy methods for the measurement of still more fundamental concepts of science, namely mass, distance, radiation, temperature, time, electric charge and electric current. Conventionally the technique most often employed is calorimetry, a thermodynamic technique that relies on the measurement of temperature using a thermometer or of intensity of radiation using a bolometer.
Throughout the history of science, energy has been expressed in several different units such as ergs and calories. At present, the accepted unit of measurement for energy is the SI unit of energy, the joule.
Forms of energy
Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement. Both position and movement are
relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference. It has been
attempted to categorize all forms of energy as either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, as Feynman points out.
Potential energy
Potential energy, symbol $E_p$, is defined as the work done against a given force (= the work of the given force with a minus sign) in changing the position of an object with respect to a reference position (often taken to be infinite separation). If $\mathbf{F}$ is the force and $\mathbf{s}$ is the displacement,
$E_{\mathrm{p}} = -\int \mathbf{F}\cdot\mathrm{d}\mathbf{s}$
with the dot representing the scalar product of the two vectors.
The name "potential" energy originally signified the idea that the energy could readily be transferred as work—at least in an idealized system (reversible process, see below). This is not completely
true for any real system, but is often a reasonable first approximation in classical mechanics.
The general equation above can be simplified in a number of common cases, notably when dealing with gravity or with elastic forces.
Gravitational potential energy
The gravitational force near the Earth's surface varies very little with the height, h, and is equal to the mass, m, multiplied by the gravitational acceleration g = 9.81 m/s². In these cases, the gravitational potential energy is given by
$E_{\mathrm{p,g}} = mgh$
A more general expression for the potential energy due to Newtonian gravitation between two bodies of masses m1 and m2, useful in astronomy, is
$E_{\mathrm{p,g}} = -G\frac{m_1 m_2}{r}$,
where r is the separation between the two bodies and G is the gravitational constant, 6.6742(10)×10⁻¹¹ m³kg⁻¹s⁻². In this case, the reference point is the infinite separation of the two bodies.
Elastic potential energy
Elastic potential energy is defined as a work needed to compress (or expand) a spring. The force, F, in a spring or any other system which obeys Hooke's law is proportional to the extension or
compression, x,
$F = -kx$
where k is the force constant of the particular spring (or system). In this case, the calculated work becomes
$E_{\mathrm{p,e}} = \frac{1}{2}kx^2$.
Hooke's law is a good approximation for the behaviour of chemical bonds under normal conditions, i.e. when they are not being broken or formed.
Kinetic energy
Kinetic energy, symbol $E_k$, is the work required to accelerate an object to a given speed. Indeed, calculating this work one easily obtains the following:
$E_{\mathrm{k}} = \int \mathbf{F} \cdot \mathrm{d}\mathbf{x} = \int \mathbf{v} \cdot \mathrm{d}\mathbf{p} = \frac{1}{2}mv^2$
At speeds approaching the speed of light, this work must be calculated using Lorentz transformations, which results in the following:
$E_{\mathrm{k}} = mc^2\left(\frac{1}{\sqrt{1-(v/c)^2}} - 1\right)$
This equation reduces to the one above it at small (compared to c) speeds. A mathematical by-product of this work (which is immediately seen in the last equation) is that even at rest a mass has the amount of energy equal to:
$E_{\mathrm{rest}} = mc^2$
This energy is thus called rest mass energy.
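A small numerical sketch of this reduction (Python; the mass value is hypothetical):

import math

c = 299792458.0   # m/s, speed of light in vacuum
m = 1.0           # kg, hypothetical mass

def ek_classical(v):
    return 0.5 * m * v**2

def ek_relativistic(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return m * c**2 * (gamma - 1.0)

for v in (3e3, 3e6, 0.9 * c):
    print(v, ek_classical(v), ek_relativistic(v))
# At small v the two formulas agree closely; at 0.9c the classical one badly undershoots.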
Thermal energy
Thermal energy (of some medium - gas, plasma, solid, etc.) is the energy associated with the microscopic random motion of the particles constituting the medium. For example, in the case of a monoatomic gas it is just the kinetic energy of motion of the atoms of the gas as measured in the reference frame of the center of mass of the gas. In the case of a many-atomic gas, rotational and vibrational energy is involved. In the case of liquids and solids there is also potential energy (of interaction of atoms) involved, and so on.
Heat is defined as a transfer (flow) of thermal energy across a certain boundary (for example, from a hot body to a cold one via the area of their contact). A practical definition for small transfers of heat is
$\Delta q = \int C_{\mathrm{v}}\,\mathrm{d}T$
where $C_{\mathrm{v}}$ is the heat capacity of the system. This definition will fail if the system undergoes a phase transition—e.g. if ice is melting to water—as in these cases the system can absorb heat without increasing its temperature. In more complex systems, it is preferable to use the concept of internal energy rather than that of thermal energy (see Chemical energy below).
Despite the theoretical problems, the above definition is useful in the experimental measurement of energy changes. In a wide variety of situations, it is possible to use the energy released by a
system to raise the temperature of another object, e.g. a bath of water. It is also possible to measure the amount of electric energy required to raise the temperature of the object by the same
amount. The calorie was originally defined as the amount of energy required to raise the temperature of one gram of water by 1 °C (approximately 4.1855 J, although the definition later changed), and
the British thermal unit was defined as the energy required to heat one pound of water by 1 °F (later fixed as 1055.06 J).
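As a sketch of how the calorie definition above translates into a calculation (hypothetical numbers; the specific heat of water is taken as a constant 4.1855 J/(g·°C) over the range, which is an approximation):

c_water = 4.1855   # J/(g*degC), roughly one calorie per gram per degree
m = 250.0          # grams of water, hypothetical
dT = 80.0          # degC temperature rise

Q = m * c_water * dT
print(Q, "J =", Q / 4.1855, "cal")   # about 83.7 kJ, i.e. 20,000 cal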
Electric energy
The electric potential energy of given configuration of charges is defined as the work which must be done against the Coulomb force to rearrange charges from infinite separation to this configuration
(or the work done by the Coulomb force separating the charges from this configuration to infinity). For two point-like charges Q[1] and Q[2] at a distance r this work, and hence electric potential
energy is equal to:
$E_{\mathrm{p,e}} = \frac{1}{4\pi\epsilon_0}\frac{Q_1 Q_2}{r}$
where ε₀ is the electric constant of a vacuum, 8.854188…×10⁻¹² F/m. If the charge is accumulated in a capacitor (of capacitance C), the reference configuration is usually selected not to be infinite separation of charges, but vice versa - charges at an extremely close proximity to each other (so there is zero net charge on each plate of the capacitor). The justification for this choice is purely practical - it is easier to measure both the voltage difference and the magnitude of charges on the capacitor plates not versus infinite separation of charges but rather versus the discharged capacitor, where charges return to close proximity to each other (electrons and ions recombine making the plates neutral). In this case the work and thus the electric potential energy becomes
$E_{\mathrm{p,e}} = \frac{Q^2}{2C}$
If an electric current passes through a resistor, electric energy is converted to heat; if the current passes through an electric appliance, some of the electric energy will be converted into other
forms of energy (although some will always be lost as heat). The amount of electric energy due to an electric current can be expressed in a number of different ways:
$E = UQ = UIt = Pt = U^2t/R = I^2Rt$
where U is the electric potential difference (in volts), Q is the charge (in coulombs), I is the current (in amperes), t is the time for which the current flows (in seconds), P is the power (in watts) and R is the electric resistance (in ohms). The last of these expressions is important in the practical measurement of energy, as potential difference, resistance and time can all be measured with considerable accuracy.
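A sketch with hypothetical values (Python), checking that the equivalent expressions above agree for an ohmic load:

U = 12.0   # volts
R = 6.0    # ohms
t = 60.0   # seconds

I = U / R                # Ohm's law gives the current
E1 = U * I * t           # E = UIt
E2 = U**2 * t / R        # E = U^2 t / R
E3 = I**2 * R * t        # E = I^2 R t
print(E1, E2, E3)        # all three print 1440.0 J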
Magnetic energy
There is no fundamental difference between magnetic energy and electric energy: the two phenomena are related by Maxwell's equations. The potential energy of a magnetic moment m in a magnetic field B is defined as the work of the magnetic force (actually of the magnetic torque) on re-alignment of the vector of the magnetic dipole moment, and is equal to:
$E_{\mathrm{p,m}} = -\mathbf{m}\cdot\mathbf{B}$
while the energy stored in an inductor (of inductance L) when a current I is passing through it is
$E_{\mathrm{p,m}} = \frac{1}{2}LI^2$.
This second expression forms the basis for superconducting magnetic energy storage.
Electromagnetic fields
Calculating work needed to create an electric or magnetic field in unit volume (say, in a capacitor or an inductor) results in the electric and magnetic fields energy densities:
$u_e = \frac{\epsilon_0}{2}E^2$
$u_m = \frac{1}{2\mu_0}B^2$,
in SI units.
Electromagnetic radiation, such as microwaves, visible light or gamma rays, represents a flow of electromagnetic energy. Applying the above expressions to the magnetic and electric components of the electromagnetic field, both the volumetric density and the flow of energy in the e/m field can be calculated. The resulting Poynting vector, which is expressed as
$\mathbf{S} = \frac{1}{\mu}\,\mathbf{E}\times\mathbf{B}$,
in SI units, gives the density of the flow of energy and its direction.
The energy of electromagnetic radiation is quantized (has discrete energy levels). The spacing between these levels is equal to
$E = h\nu$
where h is the Planck constant, 6.6260693(11)×10^−34 Js, and ν is the frequency of the radiation. This quantity of electromagnetic energy is usually called a photon. The photons which make up visible light have energies of 270–520 zJ, equivalent to 160–310 kJ/mol, the strength of weaker chemical bonds.
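A short numerical check of the quoted range (Python; the wavelengths are chosen as the approximate red and violet ends of the visible spectrum):

h = 6.6260693e-34   # J*s, Planck constant, as quoted above
c = 2.99792458e8    # m/s, speed of light

for wavelength_nm in (700.0, 400.0):
    nu = c / (wavelength_nm * 1e-9)   # frequency in Hz
    E = h * nu                        # photon energy E = h*nu, in joules
    print(wavelength_nm, "nm:", E / 1e-21, "zJ")
# roughly 284 zJ to 497 zJ, consistent with the range quoted above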
Chemical energy
Chemical energy is the energy due to associations of atoms in molecules and various other kinds of aggregates of matter. It may be defined as a work done by electric forces during re-arrangement of
electric charges, electrons and protons, in the process of aggregation. If the chemical energy of a system decreases during a chemical reaction, the difference is transferred to the surroundings in
some form (often heat or light); on the other hand if the chemical energy of a system increases as a result of a chemical reaction - the difference then is supplied by the surroundings (usually again
in form of heat or light). For example,
when two hydrogen atoms react to form a dihydrogen molecule, the chemical energy decreases by 724 zJ (the bond energy of the H–H bond);
when the electron is completely removed from a hydrogen atom, forming a hydrogen ion (in the gas phase), the chemical energy increases by 2.18 aJ (the ionization energy of hydrogen).
It is common to quote the changes in chemical energy for one mole of the substance in question: typical values for the change in molar chemical energy during a chemical reaction range from tens to hundreds of kJ/mol.
The chemical energy as defined above is also referred to by chemists as the internal energy, U: technically, this is measured by keeping the volume of the system constant. However, most practical
chemistry is performed at constant pressure and, if the volume changes during the reaction (e.g. a gas is given off), a correction must be applied to take account of the work done by or on the
atmosphere to obtain the enthalpy, H:
ΔH = ΔU + pΔV
A second correction, for the change in entropy, S, must also be performed to determine whether a chemical reaction will take place or not, giving the Gibbs free energy, G:
ΔG = ΔH − TΔS
These corrections are sometimes negligible, but often not (especially in reactions involving gases).
Since the industrial revolution, the burning of coal, oil, natural gas or products derived from them has been a socially significant transformation of chemical energy into other forms of energy. The energy "consumption" (one should really speak of "energy transformation") of a society or country is often quoted in reference to the average energy released by the combustion of these fossil fuels:
1 tonne of coal equivalent (TCE) = 29 GJ
1 tonne of oil equivalent (TOE) = 41.87 GJ
On the same basis, a tank-full of petrol (45 litres, 12 gallons) is equivalent to about 1.6 GJ of chemical energy. Another chemically-based unit of measurement for energy is the "tonne of TNT", taken as 4.184 GJ. Hence, burning a tonne of oil releases about ten times as much energy as the explosion of one tonne of TNT: fortunately, the energy is usually released in a slower, more controlled manner.
Simple examples of chemical energy are batteries and food. When you eat, the food is digested and turned into chemical energy, which can be transformed into kinetic energy.
Nuclear energy
Nuclear potential energy, along with electric potential energy, provides the energy released from nuclear fission and nuclear fusion processes. The result of both these processes are nuclei in which
strong nuclear forces bind nuclear particles more strongly and closely. Weak nuclear forces (different from strong forces) provide the potential energy for certain kinds of radioactive decay, such as
beta decay. The energy released in nuclear processes is so large that the relativistic change in mass (after the energy has been removed) can be as much as several parts per thousand.
Nuclear particles (nucleons) like protons and neutrons are not destroyed (law of conservation of baryon number) in fission and fusion processes. A few lighter particles may be created or destroyed
(example: beta minus and beta plus decay, or electron capture decay), but these minor processes are not important to the immediate energy release in fission and fusion. Rather, fission and fusion
release energy when collections of baryons become more tightly bound, and it is the energy associated with a fraction of the mass of the nucleons (but not the whole particles) which appears as the
heat and electromagnetic radiation generated by nuclear reactions. This heat and radiation retains the "missing" mass, but the mass is missing only because it escapes in the form of heat and light,
which retain the mass and conduct it out of the system where it is not measured. The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the
process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space, but during this process, the number of total protons and neutrons
in the sun does not change. In this system, the light itself retains the inertial equivalent of this mass, and indeed the mass itself (as a system), which represents 4 million tons per second of
electromagnetic radiation, moving into space. Each of the helium nuclei formed in the process is less massive than the four protons from which it was formed, but (to a good approximation), no
particles or atoms are destroyed in the process of turning the sun's nuclear potential energy into light.
Surface energy
If there is any kind of tension in a surface, such as a stretched sheet of rubber or material interfaces, it is possible to define surface energy. In particular, any meeting of dissimilar materials that don't mix will result in some kind of surface tension; if there is freedom for the surfaces to move then, as seen in capillary surfaces for example, the minimum energy will as usual be sought.
A minimal surface, for example, represents the smallest possible energy that a surface can have if its energy is proportional to the area of the surface. For this reason, (open) soap films of small size are minimal surfaces (small size reduces gravity effects, and openness prevents pressure from building up). Note that a bubble is a minimum energy surface but not a minimal surface by definition.
Transformations of energy
One form of energy can often be readily transformed into another with the help of a device - for instance, a battery, from chemical energy to electric energy; a dam: gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these processes is perfect, and the pendulum will continue swinging forever.
Energy can be converted into matter and vice versa. The mass-energy equivalence formula E = mc², derived by several authors (Olinto de Pretto, Albert Einstein, Friedrich Hasenöhrl, Max Planck and Henri Poincaré), quantifies the relationship between mass and rest energy. Since $c^2$ is extremely large relative to ordinary human scales, the conversion of an ordinary amount of mass (say, 1 kg) to other forms of energy can liberate tremendous amounts of energy (~$9\times10^{16}$ joules), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (particles) are found in high energy nuclear physics.
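A one-line numerical check of the figure above (Python):

c = 299792458.0    # m/s, speed of light in vacuum
m = 1.0            # kg
print(m * c**2)    # about 8.99e16 J, the ~9×10^16 J quoted above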
In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. A reversible process in thermodynamics is one in which no energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, however, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to produce work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
Law of conservation of energy
Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed by itself. It can only be transformed.
Most kinds of energy (with gravitational energy being a notable exception) are also subject to strict local conservation laws, as well. In this case, energy can only be exchanged between adjacent
regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the
universe cannot change; this is a corollary of the local law, but not vice versa. Conservation of energy is the mathematical consequence of translational symmetry of time (that is, the
indistinguishability of time intervals taken at different time) - see Noether's theorem.
According to energy conservation law the total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system.
This law is a fundamental principle of physics. It follows from the translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations
on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable.
This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle - it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation - rather it provides mathematical limits to which energy can in principle be defined and measured.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
$\Delta E\,\Delta t \ge \frac{\hbar}{2}$
which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in
quantum mechanics).
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum; their exchange with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.
Energy and life
Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants; chemical energy in some form in the case of animals—to be able to grow and reproduce.
The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C[6]H[12]O[6]) and stearin (C[57]H[110]O[6]) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria:
C[6]H[12]O[6] + 6O[2] → 6CO[2] + 6H[2]O
C[57]H[110]O[6] + 81.5O[2] → 57CO[2] + 55H[2]O
and some of the energy is used to convert ADP into ATP:
ADP + HPO[4]^2− → ATP + H[2]O
The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains, when split and reacted with water, is used for other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines
manage higher efficiencies. However, in growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the
molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one
specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies
than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each
step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is
fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
Erik D. Demaine
Paper by Erik D. Demaine
Erik D. Demaine, Uriel Feige, MohammadTaghi Hajiaghayi, and Mohammad R. Salavatipour, “Combination Can Be Hard: Approximability of the Unique Coverage Problem”, in Proceedings of the 17th Annual
ACM-SIAM Symposium on Discrete Algorithms (SODA 2006), Miami, Florida, January 22–24, 2006, pages 162–171.
We prove semi-logarithmic inapproximability for a maximization problem called unique coverage: given a collection of sets, find a subcollection that maximizes the number of elements covered
exactly once. Specifically, we prove O(1/log^σ(ε) n) inapproximability assuming that NP ⊈ BPTIME(2^n^ε) for some ε > 0. We also prove O(1/log^1/3−ε n) inapproximability, for any ε > 0, assuming
that refuting random instances of 3SAT is hard on average; and prove O(1/log n) inapproximability under a plausible hypothesis concerning the hardness of another problem, balanced bipartite
independent set. We establish matching upper bounds up to exponents, even for a more general (budgeted) setting, giving an Ω(1/log n)-approximation algorithm as well as an Ω(1/log B)
-approximation algorithm when every set has at most B elements. We also show that our inapproximability results extend to envy-free pricing, an important problem in computational economics. We
describe how the (budgeted) unique coverage problem, motivated by real-world applications, has close connections to other theoretical problems including max cut, maximum coverage, and radio broadcasting.
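To make the objective concrete, here is a small Python sketch (illustrative only - it is not taken from the paper) that scores a candidate subcollection by the number of elements covered exactly once, plus an exponential brute force for tiny instances:

from itertools import combinations

def unique_coverage(subcollection):
    counts = {}
    for s in subcollection:
        for e in s:
            counts[e] = counts.get(e, 0) + 1
    return sum(1 for c in counts.values() if c == 1)

sets = [{1, 2}, {2, 3}, {3, 4}]   # toy instance
best = max(
    (list(sub) for r in range(len(sets) + 1) for sub in combinations(sets, r)),
    key=unique_coverage,
)
print(best, unique_coverage(best))   # [{1, 2}, {3, 4}] covers 4 elements uniquely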
The paper is available in PostScript (365k), gzipped PostScript (139k), and PDF (203k).
Related papers:
UniqueCoverage_SICOMP (Combination Can Be Hard: Approximability of the Unique Coverage Problem)
I think this is hard....
April 25th 2009, 10:55 AM
I think this is hard....
On the arc $OA$ of the curve in the plane with equation $f(x)=x^3$, where $O(0,0)$ and $A(a,a^3)$ for $a>0$, find the coordinates of the point $M(x_0,y_0)$ such that $\triangle OAM$ has the largest area; then find the equations of the tangent and normal lines at the point $M$.
April 25th 2009, 12:01 PM
The area of the triangle is
$S=\frac{1}{2}|\Delta|$ where $\Delta=\begin{vmatrix}x_O & y_O & 1\\x_A & y_A & 1\\x_M & y_M & 1\end{vmatrix}$
$\Delta=\begin{vmatrix}0 & 0 & 1\\a & a^3 & 1\\x & x^3 & 1\end{vmatrix}=ax^3-a^3x=a(x^3-a^2x)$
Then $S(x)=\frac{a}{2}|x^3-a^2x|=\left\{\begin{array}{ll}\frac{a}{2}(a^2x-x^3), & x\in(-\infty,-a]\cup[0,a]\\ \frac{a}{2}(x^3-a^2x), & x\in(-a,0)\cup(a,\infty)\end{array}\right.$
$S'(x)=\left\{\begin{array}{ll}\frac{a}{2}(a^2-3x^2), & x\in(-\infty,-a)\cup(0,a)\\ \frac{a}{2}(3x^2-a^2), & x\in(-a,0)\cup(a,\infty)\end{array}\right.$
$S'(x)=0\Rightarrow x=\pm\frac{a\sqrt{3}}{3}$ and both are maximum points.
Then, the maximum area is $S\left(\pm\frac{a\sqrt{3}}{3}\right)$
April 26th 2009, 03:23 AM
Then $S(x)=\frac{a}{2}|x^3-a^2x|=\left\{\begin{array}{ll}\frac{a}{2}(a^2x-x^3), & x\in(-\infty,-a]\cup[0,a]\\ \frac{a}{2}(x^3-a^2x), & x\in(-a,0)\cup(a,\infty)\end{array}\right.$
$S'(x)=\left\{\begin{array}{ll}\frac{a}{2}(a^2-3x^2), & x\in(-\infty,-a)\cup(0,a)\\ \frac{a}{2}(3x^2-a^2), & x\in(-a,0)\cup(a,\infty)\end{array}\right.$
$S'(x)=0\Rightarrow x=\pm\frac{a\sqrt{3}}{3}$ and both are maximum points.
Then, the maximum area is $S\left(\pm\frac{a\sqrt{3}}{3}\right)$
Can you explain it a little bit please, because I can't understand it.
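One way to verify the critical point on the arc OA is a short symbolic computation (a sketch using Python's sympy, assuming it is installed):

import sympy as sp

a, x = sp.symbols('a x', positive=True)
# On the arc OA we have 0 < x < a, where x^3 - a^2*x < 0, so:
S_arc = sp.Rational(1, 2) * a * (a**2 * x - x**3)   # area of triangle OAM
crit = sp.solve(sp.diff(S_arc, x), x)
print(crit)   # [sqrt(3)*a/3], i.e. x = a*sqrt(3)/3 as in the solution above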
8.1: Integration by Substitution
Created by: CK-12
This activity is intended to supplement Calculus, Chapter 7, Lesson 1.
In this activity, you will explore:
• Integration of standard forms
• Substitution methods of integration
Use this document to record your answers. Check your answers with the Integrate command.
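If you also have Python available, one extra way to check an answer (a sketch using the sympy library - this is separate from the activity's own Integrate command) looks like this:

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sqrt(2*x + 3), x))   # (2*x + 3)**(3/2)/3, plus a constant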
Problem 1 – Introduction
1. Consider the integral $\int \sqrt{2x+3}\,dx$. Let $u = 2x + 3$.
Use the table below to guide you.
$f(x) =$ $\sqrt{2x+3}$
$u =$ $2x+3$
$du =$
$g (u) =$
$\int\limits g(u)du =$
$\int\limits f(x)dx =$
2. Try using substitution to integrate $\int \sin(x)\cos(x)\,dx$. Let $u = \sin(x)$.
3. Now integrate the same integral, but let $u = \cos(x)$.
4. $\sin (x) \cos (x) dx$$\frac{1}{2} \ \sin(2x)$
What is the result when you integrate $\int\limits \frac{1}{2} \ \sin(2x)$
Problem 2 – Common Feature
Find the result of the following integrals using substitution.
5. $\int\limits \frac{x+1}{x^2+2x+3} dx$
6. $\int\limits \sin(x) \ e^{\cos(x)} dx$
7. $\int\limits \frac{x}{4x^2+1}dx$
8. What do these integrals have in common that makes them suitable for the substitution method?
Use trigonometric identities to rearrange the following integrals and then use the substitution method to integrate.
9. $\int\limits \tan(x) dx$
10. $\int\limits \cos^3 (x)$
You can only attach files to None which belong to you
If you would like to associate files with this None, please make a copy first. | {"url":"http://www.ck12.org/book/Texas-Instruments-Calculus-Student-Edition/r1/section/8.1/SE-Integration-Techniques---TI-%253A%253Aof%253A%253A-Texas-Instruments-Calculus-Student-Edition/","timestamp":"2014-04-19T22:59:28Z","content_type":null,"content_length":"104985","record_id":"<urn:uuid:41adb261-566f-4072-a863-ed03daaaeae8>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Waldo "Slices Salami"
I’ve noticed that even Hansen’s rounding seems to give a little edge to modern values relative to older values. It’s not a big effect, but Hansen doesn’t seem to like to leave any scraps on the
table. Cara, Siberia has 3 versions – all of which appear to be the same station; all 3 versions cover more or less the same period (Data here ) and, in this particular case, the values only go to
As a first step, it seemed to make sense to take a simple average of the data (first 3 columns of data at link) and compare it to the Hansen combined version (4th column). The Hansen combined more or
less matched the rounded average up to 0.1 deg C, but the differences were not random: Hansen was always lower as shown in the first panel. In the second panel, I’ve pretty much matched Hansen by
doing the following:
1) calculate the average rounding to 2 digits; 2) multiply by 10, take the floor and then divide by 10.
Typically, this sort of situation only affects pre-1990 data. GHCN made a major collection effort around 1990. In this particular case, the data hasn’t been updated after 1990. IT’s not that this
station necessarily ceased measuring temperatures (as often thought); it’s may be that GHCN (and thus NASA and CRU) merely haven’t updated it since 1990. GHCN says that it updates rural stations only
“irregularly” and, as it happens, the “irregular” schedule apparently hasn’t included any updates since 1990 for large areas of the world – so NASA are CRU are “forced” to rely for more recent
readings primarily on the WMO airport system, which presumably is more urban-biased than the historical network. For these later readings, multiple scribal versions do not arise and “Hansen rounding”
doesn’t come into play. A small effect to be sure, but Waldo is voracious and seemingly no scrap is too small.
97 Comments
1. id0=”22230372000″; title0=”CARA, SIBERIA ROUNDING”
combine=read.table(file.path(“http://data.climateaudit.org/data/giss”,paste(id0,”dat”,sep=”.”)),header=TRUE, sep=”\t”)
par(mar=c(0,3,2,1));ylim0=c(-.13,.13); M1=min(c(time(combine)))
text(M1,.12,paste(“NASA minus Rounded”),font=2,pos=4)
text(M1,.12,paste(“NASA minus Floored”),font=2,pos=4)
In the second panel, I ve pretty much matched Hansen by doing the following:
1) calculate the average rounding to 2 digits; 2) multiply by 10, take the floor and then divide by 10.
THAT IS JUST STUPID!
Sorry for shouting but this is an intolerable mistake. If you have seen Office Space, you will recognize this method as the scam the characters pulled off: salami slicing.
– Sinan
3. From THE SPECIAL STATUS OF COMPUTER ETHICS:
Legendary or not, there are at least three factors that make this type of scheme unusual. First, individual computer computations are now so cheap that the cost of moving a half-cent from one
account to another is vastly less than half a cent. For all practical purposes, the calculation is free. So there can be tangible profit in moving amounts that are vanishingly small if the
volume of such transactions is sufficiently high. Second, once the plan has been implemented, it requires no further attention. It is fully automatic. Money in the bank. Finally, from a
practical standpoint, no one is ever deprived of anything in which they have a significant interest. In short, we seem to have invented a kind of stealing that requires no taking — or at
least no taking of anything that would be of significant value or concern. It is theft by diminishing return.
– Sinan
4. Forgive me but I need a translator. Are you saying that the result has been shaved to reduce it so that future results look even more impressive in comparison? btw as a Software Engineer I’m not
sure what the purpose of floor(10*average1)/10 actually is. Why not use round (or ceil!)?
5. Re: 4
ceil is just as inappropriate as floor.
– Sinan
6. #4. Ay, there’s the rub. Of course, any normal calculation would use a rounded average. The floor calculation is an attempt to replicate Hansen’s actual results. In this case, the floor function
reduces data values slightly (0.1 deg C for some percentage of the values) when there are multiple versions (i.e. before 1990). It’s a little bias increasing post-1990 results relative to
pre-1990 results. Not a big bias, but it seems to be at work here. It may be embedded a little differently than I’ve replicated it here, but, however HAnsen implemented it, it has the effect of
giving an edge to the house.
7. Sinan, I’ve adopted your title for the post.
8. Here’s some information on the effects of different rounding methods: http://support.microsoft.com/kb/196652
9. Re: 7
Very nice, although I can’t really take credit for the phrase :-)
– Sinan
10. If you inspect the Kalakan data http://data.climateaudit.org/data/giss22230469000.dat , in the early portions, the Hansen combined total is simply 0.1 deg C lower even when there’s no rounding
involved. It looks like rounding’s involved but maybe it’s elsewhere. I wonder what the hell their source code looks like. I think that it looks like FOI time.
11. Re: 10
I still suspect an implicit conversion from float to int somewhere in the code.
– Sinan
12. So walk us through this — if for any given rural site, there are significant chance that recent collection has *not* been done, then a significant proportion of sites will be left out of the
data. It seems that it would then follow that the more recent the date to the current time, the greater the proportion of remote/rural sites left out of the available data.
Without adjustments, it would seem a near guarantee of a hockey stick for any given time’s available data, no?
So this is “Office Space” NASA edition. Are we full circle? The original purpose of the Office Space SW engineers was to fix Y2K bugs! :)
n this particular case, the values only go to 1990.
I must have missed something: don’t these guys use the same number of stations throughout the years of observation? If not, Waldo may be hiding in the averages.
14. Re: 12
It seems that it would then follow that the more recent the date to the current time, the greater the proportion of remote/rural sites left out of the available data.
That seems to be the case. I noticed earlier that there were only 11 stations in Canada with data for 2006 in GHCN. All eleven seem to be airports (some on islands):
In a follow-up comment, I gave links to the graphs of those stations (on my web site). The graph pages also have links to Google maps and GISS graphs for the stations with data for 2006.
– Sinan
15. #5
Indeed, so unless it’s just a plain error, you would be consciously choosing floor for a reason other than the integrity of the result ;).
16. #6 – does this mean the 1930s inch up another fraction in the world rankings?
17. This looks like an old programmer trick to truncate a number to a given number of digits after the decimal point. To three digits you’d do
double num = 1.153395; // original number which would print as written
num = num * 1000.0; // shift it three places
int val = (int)num; // cast the result to integer truncating the shifted number
num = (double)val; // cast it back to double
num = num/1000.0; // shift it back three places
// num will print as 1.153
To round it you’d have to add .0005 to this number before the shift. Bet they forgot to do that step.
18. Floored division:
In order to achieve maximum performance, each version of Forth implements most arithmetic primitives to use the internal behavior of that particular processor’s hardware multiply and divide
instructions. Therefore, to find out at the bit level what these primitives do, you should consult either the manufacturer’s hardware description or the implementation’s detailed description
of these functions.
In particular, signed integer division where only one operand (either dividend or divisor) is negative and there is a remainder may produce different, but equally valid, results on
different implementations. The two possibilities are floored and symmetric division. In floored division, the remainder carries the sign of the divisor and the quotient is rounded to its
arithmetic floor (towards negative infinity). In symmetric division, the remainder carries the sign of the dividend and the quotient is rounded towards zero, or truncated. For example,
dividing -10 by 7 can give a quotient of -2 and remainder of 4 (floored), or a quotient of -1 and remainder of -3 (symmetric).
Most hardware multiply and divide instructions are symmetric, so floored division operations are likely to be slower. However, some applications (such as graphics) require floored division in
order to get a continuous function through zero. Consult your system’s documentation to learn its behavior.
19. Re: 17
That would yield incorrect results in all cases, though, would it?
– Sinan
20. Re: 19
Sorry, I meant:
That would not yield incorrect results in all cases, though, would it?
– Sinan
21. FOI ‘R US
22. 1.153395 can not be represented exactly in floating point binary.
23. Re 20
True, but it It biases the answers down. Actually I didn’t give a good example because that one (1.153395) would have given the answer with correct rounding (rounded down). If I’d used 1.153695,
it would have given 1.153 the same as 1.153395, not rounding up to 1.154. Without adding in the .0005 it winds up truncating the answer not rounding.
24. Well, I’ve tried the various patches proposed for Bagdarin to Kalakna and Cara and nothing works. I’ve sent the following letter to Hansen and Ruedy:
Dear Drs Hansen and Ruedy,
I am unable to replicate how you combined data sets at Kalakan, Siberia (as an example, but this applies to other stations,)
Prior to 1987, there are two versions of this series, the values of which are identical for most periods. Nevertheless, the combined version is 0.1 deg C lower than these values.
Can you provide an explanation for this? Thank you for your attention.
Yours truly, Stephen McIntyre
PS. I would appreciate a copy of the source code that you use for station adjustments as the verbal descriptions in Hansen et al 1999, 2001 are insufficient to replicate results such as the
one cited above or the change in versions previously noted,
25. Add 0.01 to the first column.
Subtract 0.2 from the second.
Add 0.01 to the third.
If you now average the resultant values you get the combined figure.
As to where the +0.01, -0.2, +0.01 figures come from I have no idea.
26. We coders recognize that trick. To truncate to two decimal places: multiply by 100.0, cast to an integer, then divide by 100.0 (converting the number back to a real). This does not round. One
would need to add 0.005 first in order to round.
I thought scientists desire precision though. Why would one purposefully get rid of it?
27. My guess is that the reason that they don’t want to show their code is because of exactly whatever’s going on here.
28. It does not matter what method they use as long as they were consistant. While most people would assume that a rounding function was used, if a floor function was used, then that is fine as long
as it is used for all data sets. Of course, if you use a method other than rounding, then that should be documented. — John M Reynolds
29. Please excuse what may be a stupid question but, wouldn’t any failure to update rural
stations with the same frequency as non-rural stations in the post-1990s period have
the potential to produce a spurious trend if a different “station mix” existed prior
to the 1990s?
30. I just looked at some stations in India and the averaging there seems to be OK. Maybe there are patches all over the place with some calculations being done separately for Russia; it seems
inconceivable, but so do all the other explanations.
31. In some programming languages, including any of Microsoft’s .Net languages, the round function by default rounds towards zero (ie floor when +ve, ceil when -ve). This has caused problems in my
own code when I assumed that rounding would work properly.
32. #28. In theory, but you only have multiple versioning before 1990. After 1990, you have a single series. So the bias, slight as it is and assuming that we’ve diagnosed it, applies only to
pre-1990 series and imparts a slight warming bias to recent years.
33. RE: #3 – the film “Entrapment” comes to mind.
34. Re #26
Sometimes reals get you in trouble because you don’t have infinite precision.
Here’s a trivial example.
float piFloat = (float)PI;
double piDoub = (double)PI;
double should_be_zero = piDoub/piFloat – 1.0; // the divide gives you a small residual since the two values aren’t really equal (the float has less precision)
If you then multiply the result by a very large number you a non zero answer. As Simon mentions in #18 depending on the algorithms implemented can also vary from machine to machine or language to
35. Re #24
I can just imagine the faces of those in the receiving end when they read this email. “What now?!”
Then everybody will start running around with much flapping of hands, just like headless chickens …
36. Start running around like headless chickens???
I’ve been getting the impression that they have been performing that act for several years now.
37. I wonder if this truncating is not due to Fortran variable declaration where you can (must ?) give the number of decimals for a float variable (I’m no specialist in Fortran, just vague memories
from my Fortran courses years ago).
38. In looking at the Bagdarin data I noticed that in some instances they constructed values where they had missing data in order to come up with annual temperatures. A number of these were bizarre
to say the least in that there was no way to find a simple rule as to what they had done. In the analysis I did for Bagdarin I only reported the rules for month by month adjustements not for
seasonal or annual data points. In many instances this is where additional adjustments can enter into
the data series.
For example, The March 1990 data is missing in this data series . The Apr and May 1990 monthly average are -2.4 and 9.1 and the seasonal (M-A-M) average is -2.1. This means that they have
inserted an average temperature for March, 1990 of (3*-2.1)- (-2.4) – (9.1) or -13.0. It is totally unclear where this number came from but it is needed to generate the annual temperature. The
March temperature in the other Bagdarin series that overlap this period and the ones used in the final compilation are -5.3! Now the final reconciled series simply includes a -0.3 adjustment, but
one has to wonder how these numbers are being derived. Remember these are the numbers that are supposedly being charted on the raw data graphs.
39. With the number of “adjustments” Steve and others are finding (and their seemingly arbitrary nature), I’m almost convinced that these researchers use the “Adjustment Dartboard” (available at all
fine scientific supply stores).
40. As someone who routinely handles and post-processes large quantities of data for a living, I know first-hand mistakes are made without any ulterior motive (except maybe to get out of the office
on time). What is very disturbing is that the noted inconsistancies all seem to go in the same direction. The hockey stick, the NYC Central Park UHI correction, the GISS Y2K error, rounding
issues, homogenity corrections that spread around microsite/UHI biases instead of removing them – all either cool the past or warm the present. The only example I can recall that went the other
direction was the UAH orbital decay error, which was promptly acknowledged and corrected.
Perhaps this is the result of internal auditing that only checks data that doesn’t meet preconceived notion (i.e. they know it getting warmer so processed data that says otherwise must be in
error) and isn’t intentional. With all the funding at stake it’s getting harder and harder to believe that.
I wonder if this truncating is not due to Fortran variable declaration where you can (must ?) give the number of decimals for a float variable (I m no specialist in Fortran, just vague
memories from my Fortran courses years ago).
I would think that this would perform an implicit round, though I cannot guarantee that. All modern CPUs have built-in round, ceil, and floor functions and typically their behaviors are
configurable (though you may have to operate at the assembly level). The standard C math library, libc, checks to see if the primitive exists, and uses that instead of the IEEE754 variant that is
written into the library. I believe it is not much different with Fortran, though again, I cannot guarantee this since I only did a little work with Fortran during my optimization phase a year or
so ago.
42. #41. I agree that there’s a bias in what’s detected. Had the Y2K or NYC errors gone the other way, I think that they’d have more likely to notice it.
43. Re #32,
There are some multiple versions after 1990
mostly prior to 1994. However, those that continue into the late 1990s, and
the 2000s, would be actual multiple stations, not multiple versions of single
44. #39 The “data series” link in the post doesn’t appear to work.
I do hope Waldo hasn’t taken his data home so no-one else can play.
45. #45. GISS links expire.
46. The data in the GHCN are expressed as integers corresponding to temperature in degrees Celsius in tenths of a degree. That is, 212 is actually 21.2 degrees Celsius.
If one leaves the data that way and carries all operations using integers, there is no risk of loss of precision or overflow or underflow (assuming 32-bit signed integers).
Then, one can do a one of conversion to a decimal and round the result to one decimal digit. Works fine.
I suspect, whatever the NASA guys has written is similar to the following C program:
#include <stdio.h>
#define ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
int celsius_i[] = {
432, 121, 212, -13, -141, 222, 67, 99, 101, 12, 1, -169
int main(void) {
size_t z;
float avg_f;
int avg_i;
float celsius_f[ ARRAY_SIZE( celsius_i ) ];
float sum = 0.0;
for ( z = 0; z < ARRAY_SIZE( celsius_i ) ; ++z ) {
celsius_f[ z ] = celsius_i[ z ] / 10.0;
for ( z = 0; z < ARRAY_SIZE( celsius_i ) ; ++z ) {
sum += celsius_f[ z ];
avg_f = sum / ARRAY_SIZE( celsius_i );
avg_i = (int) ( avg_f * 10.0 );
avg_f = avg_i / 10.0;
printf( "%.1f\n", avg_f );
return 0;
C:\Temp> gcc -Wall n.c -o n.exe
C:\Temp> n
whereas the correct result should be 7.9.
– Sinan
47. Your text tells me you divide X by Y.
I want to see how you do that in code.
I believe I have mentioned this before.
48. RE 43.
Best example is Crater lake NPS HQ. The entire series is deleted by hansen ( 2001) for no
discernable reason. Its homogeneus with surrounding stations. Its just colder. ( altitude of
station and massive snowfall I suspect)
I’ve asked Gavin for the stations used to kick Crater Lake off the Island and I got SQUADUSCH.
( ok, I don’t ask nice like you do)
49. I find it difficult to believe that NASA scientists would be using C, but I suppose anything is possible.
One would think that scientists, particularly NASA scientists, would have the whole precision thing “down pat” by now. I can understand the desire create datasets with the same precision as the
original datasets, however one has to take into account that errors propagate and get amplified. (Then again, we are talking about an organization that sends multi-million dollar satellites into
space with code that unwittingly and unsuccessfully mixes metric and English measurement.)
50. Re 50
We should be so lucky that they’re using C. DIdn’t someone say the codes in FORTRAN? I wonder what version?
51. 50 & 51
They are using Fortran but Fortran makes me want to puke. My example is an illustration. I never made the claim that they were using C.
52. MarkR
Sorry about that. I didn’t know the links expired – that explains why I have to use the basic link and click
Which reminds me that if anyone has a macro or function that turns Excel arrays/matrices into a vector that would help. I am sure there is a faster way to do it but I simply can’t find it.
53. ModelE is F90..i think
I’ve never seen a line of NASA C code. ( shudders) the fortran with
giant common blocks was bad enough. Imagine if you passed gavin a pointer?
He’d poke his eye out, execute garbage and publish the output.
54. Re #53 bernie
a macro or function that turns Excel arrays/matrices into a vector that would help
What do you mean exactly ?
55. #53 bernie No probs.
56. Sorry I don’t have more time to completely fill everyone in on the details, but my method for Bagdarin does work for this station IF you use the overlap period from 1960 to 1989.917 to get your
average differences.
Average of dates 1960 to 1989.917 (inclusive)
Column 1: -7.8656
Column 2: -7.6662
Column 3: -7.8917
Once the averages for that period are calculated, subtract the average difference between the first and third columns (-7.8917 – -7.8656)/2 from the third column for the entire series.
Then for the second column, subtract the averaged difference between the first and second column (-7.6662 – -7.8656)/2 from the second column for the entire series. Also, ADD the averaged
difference between the second and third column (-7.8917 – -7.6662)/2 to the second column for the entire series.
After averaging these adjusted series and rounding to 1 decimal place, there are only 4 data points that differ from the ‘combined’ column and they are only off by 0.1.
I’m running out of time to play with these, so if I made a mistake post it but I might not get back to it until Tuesday.
57. Some places used to use Ada and Fortran, but everyone probably uses Fortran now.
What version, who knows.
58. Re: 50
I can understand the desire create datasets with the same precision as the original datasets,
Here is the thing: GHCN temperatures are integers in tenths Celsius. So, in calculating a sum, I would only use integers. Then, calculating the average involves a single floating point operation
(as opposed to 24 – 12 for converting measurements from tenths Celsius to Celsius, 11 for adding them up, one for division).
The calculated average would also be in tenths Celsius. I would carry out all operations in tenths Celsius. When it came to the point where I wanted to display results, I would just divide the
result by ten and print with with one decimal digit precision.
Looking at ftp://data.giss.nasa.gov/pub/gistemp/download/SBBX_to_1x1.f they seem to have a preference for storing and processing temperatures as floating point values.
I have a feeling they have not read Goldberg
– Sinan
59. I think even a lot of computer folks don’t know that you can have binary numbers like .0110 for 1/6
60. Re: 59
they seem to have a preference for storing and processing temperatures as floating point values.
I forgot: In their gridded data sets, temperatures are stored in the internal binary representation of floating point values as I found out when I was trying to produce the one-and-only known
animation of month-to-month temperature anomalies using their data (sadly, the animation is now incorrect due to Steve’s discovery of errors in GISS processing).
– Sinan
61. fFreddy
GISS data comes in a matrix with essentially the months as the columns and the years in the rows. To line up and compare different multiple series for the same site, I needed to create a single
column for each series with all the months for all the years in a single column. I found code to do it MATLAB but that is not my poison and I am just starting R.
I can do it relatively quickly using simple copy and paste commands but I think something more elegant
probably exists.
I think even a lot of computer folks don t know that you can have binary numbers like .0110 for 1/6
Uh, actually, in binary each point on the right side of the decimal is a power of 2 just like on the left, i.e. 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, etc. 1/6 would then be the infinite series
63. #62. This sort of thing is very easy in R and I do it ALL the time.
Let’s say that x is one time series and y is another, both annual.
Then do something like this
x=ts(x, start= 1860)
y= ts(y,start=1880)
It joins things without you having to keep track of the exact number of rows and columns.
For GISS data, if you start with a matrix X with year,jan,feb,,,, then you do this:
giss=ts( c(t(X[,2:13])),start=c(X[1,1],1),freq=12)
It transposes the matrix; c makes it into a vector columnwise which is why you need to transform it first; freq=12 shows the ts that it’s monthly.
Collation of the GISS global series from their archive is annoying because they have a variety of oddball formats and clutter. HOwever here’s a way of doing it:
fred< -readLines(url);N<-length(fred)
fred1= gsub("[****]"," ",fred)
giss.glb=ts( c(t(test[,2:13])),start=c(1880,1),freq=12) /100
If you want to use in Excel, you can export it from R by:
This gives a tab-separated file that Excel recognizes.
You have to watch quotation-marks when you copy out of WordPress.
64. Steve:
Thanks, I will try it out.
65. #57 Damek – Do you have thoughts on what the rule is that would have us select the period of 1960 to 1989.917 for this station (when the overlap period is longer) and a different set of overlap
periods for Bagdarin? The rule for Bagdarin is somewhat clear, but the one for Cara seems arbitrary.
66. You caught it too quick. :) I was hoping it would take longer. :D
No, seriously, I buggered that one up. I forgot about 0 and 1 and hadn’t had seen that to correct it. (as in it goes 4/1 2/1 1/1 0 1/2 1/4 etc)
Silly me, just doing the inverse of 6.
I prefer to use the base in the number, so it’s not confusing, and not mix the fractions and binereals:
.12 = .510
.012 = .2510
.0012 = .12510
.00012 = .062510
.000012 = .0312510
But that’s cool, I never realized 1/6 is a repeating alternating 012 after the first .02 I only thought it out to .1562510 but it seems correct.
Dang I haven’t done this in a while.
(Oh, don’t try to do this binereals to decimals or vice versa in Windows sci calc kiddies, it won’t work btw) (Yes I made that word up and don’t know if it’s a real one.)
Quick, what’s 210!!! And 220!!! :)
67. Dang, preview showed all my sub and /sub tags as working, and stripped them out when it posted it. Grrrrr.
That should be .1 base 2 = .5 base 10 not .12 and .510 etc.
The .012 is .1 base 2, .02 is .0 base 2 and .15625 base 10 etc As in after the first .0 in binary, it repeats as 01 in base two etc.
And the last bits are totally hosed, supposed to be 2 to the 10th and 2 to the 20th.
68. RE: 66
Yes, I believe the rule is to just use the most recent 30 years of overlap when calculating the average difference. If there aren’t 30 years, then use the most overlap available. I went back to
Bagdarin, adjusted the average difference for just the last 30 years on the older two columns (222305540001 and 222305540000), and I still got an exact match to the ‘combined’ column for the
entire series.
Quick, what s 210!!! And 220!!!
Base 3? That would be 12 in base 10, followed by 15 in base 10. :)
Oh, I see your follow up, hehe.
Steve M., reading that, I’m glad I’m a Matlab user. Oh so simple to use…
Quick, what s 210!!! And 220!!!
Base 3? That would be 21 in base 10, followed by 24 in base 10. :)
Oh, I see your follow up, hehe.
Steve M., reading that, I’m glad I’m a Matlab user. Oh so simple to use…
71. Oops, hit stop after I realized the error in my first, but it took anyway. The second is the corrected one, though totally immaterial to any discussion and nothing but fun anyway. :)
72. #71. The difficulty is not with R; it’s with Hansen’s crappy formating. I challenge you to do any better with Matlab. Matlab and R are a lot the same and I doubt that Matlab can do it any better.
There might be an easier way in R, but this was one-off and it worked and I can do this quickly.
73. Steve, your #71 is referencing Darnek in #69 perhaps?
Mark, well, I did a little work in Matlab a long while ago, but nothing much at all just graphing. I’m actually doing those conversions in my head. (Well, I had to write down .03125 and .125
before I added them. Looking at it now, no I don’t know why.)
But if that was base 3, yep, 2 9s and 1 or 2 3s is 21 or 24 decimal :) lol That’s why I messed up the .0110, I did it too fast and didn’t go into thinking mode! :D I’m usually pretty good at
factoring by 2s. Let me see, what is it, right and left shift logical and circular, isn’t it? I haven’t done any assembly/machine in quite a while.
If only we had 16 fingers. bin oct and hex are our friends.
I don’t know why, but it’s interesting to me that 2 to the 10th is 1024. I wish I didn’t have to put up with 10 to the 3rd being 1000. 10 to the 2nd would be much more elegant.
What’s even more interesting is how it goes; 2 10th is a K, to the 20th is an M, to the 30th is a G, to the 40th is a T… I don’t remember the next one. Maybe I’ll make one up, a KT? Or maybe K*T
Or 2 20th + 2 20th + 2 10th….
Sort of like Waldo Slicin’ the Salami. I prefer Bratwurst myself. But there ya go.
74. Re #62 bernie
Paste the following code (between the hash lines) into a module on your workbook
Option Explicit
Public Sub TableToColumn()
Dim RwIx%, RwNo%, ClIx%, ClNo%
Dim FrmRg As Range, ToRg As Range
Set FrmRg = Selection.Areas(1)
Set ToRg = Selection.Areas(2)
RwNo% = FrmRg.Rows.Count
ClNo% = FrmRg.Columns.Count
For RwIx% = 1 To RwNo%
For ClIx% = 1 To ClNo%
ToRg.Value = FrmRg(RwIx%, ClIx%).Value
Set ToRg = ToRg.Offset(1, 0)
End Sub
Usage :
Select the table of Giss data
Hold down Ctrl key, click on first cell of target column
Run the macro
Hope this helps.
bernie, I don’t know your level of experience – if this is not clear, say so, and I’ll explain in more detail.
#71. The difficulty is not with R; it s with Hansen s crappy formating. I challenge you to do any better with Matlab. Matlab and R are a lot the same and I doubt that Matlab can do it any
better. There might be an easier way in R, but this was one-off and it worked and I can do this quickly.
Oh, that wasn’t what I was trying to imply. I mean “so much simpler to understand,” but I’m biased because I’ve been using Matlab since the late 80s (William Tranter, my first signal processing
professor, was a Mathworks consultant at the time IIRC). R totally loses me not unlike LaTex. Sorry ’bout that. :)
I haven t done any assembly/machine in quite a while.
I spent the better part of 2005 and 2006 optimizing C and assembly routines on a MIPS based machine, even digging in to libm and the compiler, gcc, optimizations (I incorrectly referred to the
math library as libc earlier). Ugh.
76. I’m talking inputing op codes in hex into an i 8080a or doing assembly for a tms 9900! Or doing some stuff with the 68000 or 80186. Ugh, MIPS. Might as well start talking about PDPs. :D
77. Let us not talk PDPs.
I never programmed one of those.
Re: 8080 – I still remember C3h. What is required to make Hansen, Mann, et. al. C3h?
Mark T.August 31st, 2007 at 5:23 pm,
I had to write some drivers for a Linux variant to get the debugger for a new processor to work. C internals are some very ugly s***. Stack frames ugh. And optimizing your compiler has got to be
tough. Lots and lots of trying to find the correspondence between the input source and the assembly language produced.
When ever I wrote C for demanding applications there was alway the question of what was the compiler actually going to do. C was never a good fit to real time.
steven mosher August 31st, 2007 at 12:33 pm,
As I commented in another thread in response to your question “how to do x/y”. I have done that in binary. It is a most difficult question and implementation dependent. Also truncation errors.
Because decimal fractions do not map well into binary floating point. This is true no matter what wrapper is put around it (FORTRAN, LISP, Basic, etc.). Best is to convert every thing to integers
(provided you have enough) and renormalize at the end of the calculation. Like any true FORTHer would.
Floating point is a crutch for programmers who do not wish to think through the problem. The noise propagates and multiplies. Very small from one multiplication. A chain of 300 can ruin your
The problem with doing everything in integers is that each problem is a new one.
Nothing wrong either way, as long as you understand the limitations.
78. re 78. Simon.
I had drinks with Maddog last night
we toasted to many old things we shared. New things too.
Floats are a crutch.
79. But the uber high-end DSPs use floats. I think they have a good reason.
80. Larry,
An awful lot of DSP was done with integer arithmetic.
A lot depends on how much the result of your last calculation influences your next one. If the function converges. No problem. If not – look out.
Even non-convergence is not a problem in some feedback systems. The feedback tends to anneal out the errors. Although they will add noise to the system.
With static calculations (no feedback) you have nothing helping you and each multiplication tends to multiply the errors. As does each addition or subtraction (where a number is not fully
represented in floating point).
81. Filters are different from control systems.
If I was doing an IIR filter – feedback – I’d prefer integer arithmetic. That is because due to the feedback you wind up reprocessing the noise over and over.
For FIR filters – no feedback – floats would just add calculation and rounding noise at each stage but the noise merely propagates instead of multiplying due to feedback.
But I’m kind of fussy that way. I tend to prefer systems that don’t add noise to my signal. Or add the minimum.
82. Re #77
Having debugged machines in octal with punch paper tape for input and no operating system I feel your pain. I get puzzled looks from programmers when I mention KSR-33 teletypes (one of our I/O
Remember that C was written to support the development of the operating system for PDPs (no I’m not talking about RSX) which is where the i++ came from (increment register).
83. fFreddy
Many, many thanks. I hope generating it was not too much of a chore. You would think there would be a standard function in Excel.
I don’t do a lot with Macros, but I just got this one running and it works well. It should make extracting the data a lot easier. Though I am still stumped on the Cara data. It will keep me busy
until I master R – which I can see will take a little while.
84. Rounding is a very contentious issue, and a lot of care is needed. Stepwise rounding (e.g. rounding to the 1000th, then to the 100th, then to the tenth) in itself creates a problem because it
puts the breakpoint at 0.445 rather than 0.500 – just bad practice to do this. Floor puts the breakpoint at 0.999, and ceil at 0.001. Bearing in mind we are looking for trends of a tenth of a
degree, these differences are far from trivial.
There are occasions when floor is appropriate – when comparing stopwatch times from a 1/100th stopwatch and a 1/10th stopwatch, you should always “floor” the 1/100th stopwatch, because 1/10th
stopwatches “tick over” at 0.09 by design. But for thermometer readings, I would assume a straight round to the required precision is most appropriate.
85. This is interesting.
86. I’ve looked at more examples. Whatever’s going on is more than simple slicing: there are a few examples with upward slicing. While I can’t put a firm handle on it, I’ve looked at about 100 plots
and would say that there are definitely more negative slices. Sometimes the displacement is more than 0.1 deg C. This is very annoying.
There are all sorts of weird variations. Changsha has 2 series that cover virtually the identical period and it doesn’t come up to date. The combined is about 0.2 deg C cooler than the average or
any conceivable variation.
In other cases, the combined is bang on to the average. There’s no rhyme or reason that I can discern.
I can’t believe that this crap can meet any sort of NASA specifications.
87. OK, I’ve replicated a Hansen combining of two series in a case with two series only: Changsha, where the station values. I don’t vouch for whether this works anywhere else.
Changsha has two versions with one having only a few more values. The longer series was chosen first pace Hansen. The delta between the two versions over all available values was calculated (
0.4169249) and rounded to one digit (0.4). The average was calculated. Then the average was multiplied by 10, 0.499999 added to the average, the floor taken and then it was divided by 10. If you
add 0.5 (equal to 0.05 in the unadjusted), you don’t get the slicing.
There was NO difference between this value and the dset=1 combined value for Changsha. I’ll experiment with other stations with only 2 columns. It looks to me like this method ends up giving some
extra points to the house, but I’m still experimenting. Here’s the code. AS usual, watch out for quotation marks in WordPress versions.
###load Changsha
combine=read.table(file.path(“http://data.climateaudit.org/data/giss”,paste(id0,”dat”,sep=”.”) ),header=TRUE,sep=”\t”)
combine=ts(combine[,2:ncol(combine)] ,start=c(combine[1,1],1),freq=12)
##Hansen emulation function – this case
hansen=function(X) {
hansen= floor(10*y+.49999) /10
##Calculate Changsha
plot(X[,3]-y,type=”p”) #0,5
#[1] 0 0
88. Re: 88
Then the average was multiplied by 10, 0.499999 added to the average, the floor taken and then it was divided by 10.
Most likely, they did not intend to add 0.499999. Accumulated floating point errors can have manifest themselves in that way. The fact that they are using single precision floats does not help
but one can run into this kind of thing with doubles as well.
– Sinan
89. Well.
Do not let these guys near any Navier-Stokes equations. DOH!
90. Cara, Siberia
There are 604 months that overlap all three series and the combined column, which 604 months average:
Col. 1 (Series 2): -7.63
Col. 2 (not named): -7.39
Col. 3 (not named): -7.62
combined: -7.61
In the case of multiple series for a given station, Hansen, et. al. supposedly claim that each series is weighted equally, but that does not appear to be the case when comparing these 606 months
(i.e. apples to apples).
Giving equal weight to each series average should result in a combined average for these 604 months of -7.55, a difference of 0.06 degrees C or about 0.1 degree F for the entire series. While
this might seem small, it appears to be a consistent difference over 604 months or about 50 years.
91. You can EXACTLY duplicate hansens combined figure by doing the following.
1. Convert to Kelvin by adding 273.2 to each series.
2. Apply deltas to series 2 and 3 (see below for deltas)
3. Average the monthly values.
4. Round to nearest 1/10th degree.
5. Convert to Celsius by subtracting 273.2
You now have hansens combined figure.
Valid deltas for the 2nd and 3rd series are:
-0.17 and +0.01
-0.18 and +0.01
-0.19 and +0.01
-0.18 and +0.02
-0.19 and +0.02
-0.19 and +0.03
If you try this and get any different values to hansens combined figure could you please include just one of the differences so I can verify your results.
92. re: #92
But would/should 273.15 be rounded up to 273.2 for such calculations? I think should not; don’t know about would.
93. Re 25:
Add 0.01 to the first column.
Subtract 0.2 from the second.
Add 0.01 to the third.
If you now average the resultant values you get the combined figure.
This is consistent with the treatment of the single value (2nd column) for October, 1941 of -3.7: the “combined” value is -3.9. Obviously, something other than rounding has been applied here.
94. Compare A to B, combine to form AB. Compare AB to C, combine to form ABC. Repeat until you run out of records.
If you don’t do it this way, you’re doing it wrong.
Terry #92 I’m looking at your step 2.
95. Terry.
I don’t think they would do a conversion to Kelvin.
The input data from the US would ALL be in Fahrenheit. For early records the data would be
INTEGER F.
For the ROW input data would be Centigrade INT.
So, I’d think the first step is a Fahrenheit to Centigrade Conversion for the US sites.
for ROW no conversion is necessary.
This is 1987. I bet the station data is read into a 2D array of 16bit int
Column 1 is year. Column 2 is 12ths in int. col 3 starts the data
96. #92 Terry: I have confirmed your calculations. In addition, the following values also replicate exactly:
-0.20 and +0.01
-0.20 and +0.02
-0.20 and +0.03
In addition, when rounding the deltas to the nearest tenth of a degree, all of the valid adjustments resolve to -0.2 and 0.0. When rounding the averages in my #91, they are: -7.6, -7.4 and -7.6,
so it looks like the warmest series was “adjusted” by -0.2 and then the three were averaged using the method you describe.
Hansen, says (from the excerpt in Hansen’s Bias Method):
A third record for the same location, if it exists, is then combined with the mean of the first two records in the same way, with all records present for a given year contributing equally to
the mean temperature for that year (HL87).
In the case of Cara, the highlighted part of the quote above does not seem to be true, as the warmest series appears to be biased down to the mean of the other two, colder series. So all records
do NOT “contribute equally“
97. #97
Glad you spotted that Phil, my little perl script had a problem rounding one particular value and so rejected those deltas. Fixed it now.
Post a Comment | {"url":"http://climateaudit.org/2007/08/31/waldo-scavenges-for-scraps/","timestamp":"2014-04-20T11:47:59Z","content_type":null,"content_length":"197256","record_id":"<urn:uuid:0a0f0296-bc21-47e3-9026-38a3e7aa9d80>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Page:Scientific Memoirs, Vol. 3 (1843).djvu/689
This page needs to be
L. F. MENABREA ON BABBAGE'S ANALYTICAL ENGINE.
as an example the resolution of two equations of the first degree with two unknown quantities. Let the following be the two equations, in which $x$ and $y$ are the unknown quantities:—
\left \{ \begin{align} &mx+ny = d \\ &m'x +n'y =d'.\end{align} \right .
We deduce $x = \frac{{dn'-d'n}}{{n'm-nm'}}$, and for $y$ an analogous expression. Let us continue to represent by $V_0$, $V_1$, $V_2$, &c. the different columns which contain the numbers, and let us
suppose that the first eight columns have been chosen for expressing on them the numbers represented by $m$, $n$, $d$, $m'$, $n'$, $d'$, $n$ and $n$, which implies that $V_0 = m$, $V_1 = n$, $V_2 =
d$, $V_3 = m'$, $V_4 = n'$, $V_5 = d'$, $V_6=n,$, $V_7=n'$.
The series of operations commanded by the cards, and the results obtained, may be represented in the following table:—
Number of the Operation-cards. Cards of the variables.
operations. Symbols indicating Columns on which operations Columns which receive Progress of the operations.
the nature of the operations. are to be performed. results of operations.
1 $\times$ $V_2 \times V_4 =$ $V_8 \ldots \ldots \ldots$ $= dn'$
2 $\times$ $V_5 \times V_1 =$ $V_9 \ldots \ldots \ldots$ $= d'n$
3 $\times$ $V_4 \times V_0 =$ $V_{10} \ldots \ldots \ldots$ $= n'm$
4 $\times$ $V_1 \times V_3 =$ $V_{11} \ldots \ldots \ldots$ $= nm'$
5 ${}-{}$ $V_8 - V_9 =$ $V_{12} \ldots \ldots \ldots$ $= dn'- d'n$
6 ${}-{}$ $V_{10} - V_{11} =$ $V_{13} \ldots \ldots \ldots$ $= n'm - nm'$
7 $\div$ $\frac{{V_{12}}}{{V_{13}}} =$ $V_{14} \ldots \ldots \ldots$ $= x = \frac{{dn' - d'n}}{{n'm-nm'}}$
Since the cards do nothing but indicate in what manner and on what columns the machine shall act, it is clear that we must still, in every particular case, introduce the numerical data for the
calculation. Thus, in the example we have selected, we must previously inscribe the numerical values of $m$, $n$, $d$, $m'$, $n'$, $d'$, in the order and on the columns indicated, after which the
machine when put in action will give the value of the unknown quantity $x$ for this particular case. To obtain the value of $y$, another series of operations analogous to the preceding must be
performed. But we see that they will be only four in number, since the denominator of the expression for $y$, excepting the sign, is the same as that for $x$, and equal to $n' m - n m'$. In the
preceding table it will be remarked that the column for operations indicates four successive multiplications, two subtractions, and | {"url":"http://en.wikisource.org/wiki/Page:Scientific_Memoirs,_Vol._3_(1843).djvu/689","timestamp":"2014-04-19T17:40:49Z","content_type":null,"content_length":"33926","record_id":"<urn:uuid:c16b935f-b277-48f1-b802-2c9e10adc8a2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
If there are two fixed curves, and a curve S of fixed shape and length that slide with its ends on the fixed curves, then the locus of a point moving with S is called a glissette. An example is the
locus of the midpoint of a line segment sliding with its ends on two perpendicular lines; this locus is a circle.
Related category
PLANE CURVES | {"url":"http://www.daviddarling.info/encyclopedia/G/glissette.html","timestamp":"2014-04-16T04:15:36Z","content_type":null,"content_length":"6148","record_id":"<urn:uuid:96cf6ee9-e667-4924-9617-e259e81d6199>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
flatness problem
flatness problem
is an issue of
that the theory of
cosmological inflation
answers. It is also closely related to the
horizon problem
. Our
began with a
big bang
and has been expanding ever since, it has neither recollapsed on itself in a
big crunch
, of flown off to be (nearly)
large. The speed of
by the total amount of
present in the universe, and therefore there is a
critical density
of matter in the universe, above which the universe recollapses, below which the universe expands forever. This relationship; the ratio of the average density of the universe, to the critical density
is often termed as
. Omega values of less than one give an ever expanding universe, greater than one, a collapsing universe and when
one a universe that neither grows or collapses. Cosmologists refer to a universe with an omega value of one, as being
. If you model the
big bang
'traditionally' you find that during the initial expansion, the average density of the universe and the critical density fluctuated hugely. It would have only taken a very small difference (only
about 10
) in the early stages to give rise to a universe we couldn't live in today. The problem of what could so finely tune our universe to have a omega value that is near to one is the
flatness problem
. With an
model of the big bang however the 10
(at least!) expansion factor quickly dampened any irregularities in space, making omega near one, no matter what it's initial value. NB. Current
can only find enough matter to give an omega value of 0.1, making it increasingly likely the universe will carry on expanding until the
final heat death
of the universe. | {"url":"http://everything2.com/title/flatness+problem","timestamp":"2014-04-21T02:48:50Z","content_type":null,"content_length":"20398","record_id":"<urn:uuid:bf2a3d5e-3420-4d81-817f-3cdfffe175b9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
NOLAN RYAN: THE POWER OF THE "POWER PRECIPICE"
OK, really swamped with non-hardball projects of late, so March has done more than half its lionizing thing without us...thus we'll give you something big, bold, brawny and arduous (you can pick up
the internal association therein
our usual application of the sledgehammer,
n'est-ce pas
We were struck recently (just a glancing blow, fortunately) by an off-hand remark about
Nolan Ryan
. The utterer attributed it to
Bill James
, though we've not had time to verify the reference. (No matter: despite what might otherwise seem to be the case, there's no intent in this essay to "bash" anyone...just making that clear in case
someone erstwhile
Jack Kruschen
-type alerts
Rob Neyer
to the goings-on here.)
That remark, boiled down to its most prosaic formulation, suggested that when Ryan had his control he was unbeatable, and when he didn't he wasn't. Sounds like good advice (but that didn't stop
Linda Ronstadt
from loving some sweet-talking heartbreaker in a what seemed like an endless series of lachrymose ballads...) as well as a large dollop of early sabermetric common sense.
There were numbers in the formulation that went something like this: 5+ walks in a game, struggle; 3 or less walks, unbeatable. But as with many of the binary formulations that informed early efforts
(and that are still, how shall we say..."psychologically influential" even today) to deconstruct baseball statistics, this one doesn't really stand up to a full sniff test.
But it does lead us into some interesting areas that aren't quite as settled as the heterodox orthodoxy would have us believe. (Cue up the music, folks...it's QMAX time again!)
It turns out that when we break out Ryan's career starts (all 773 of them, available one by one at
Forman et fil
), the idea that he was unbeatable when he had better control is roughly three-fourths true.
The data is all here, compiled into your basic counting stats, well-known rate stats (ERA, H/9), and then some variously recondite calculations (the basic QMAX "S" and "C" values; QMAX's ERA
predictor--named QERA; and, finally, the fabled FIP value). A lot of intriguing stuff here, so let's get right to it.
First, there's the raw interest in knowing exactly how many games with various amounts of walks allowed--Ryan has been retired for twenty years now, but he's still one of the most indelible presences
on the mound (even if he's nowhere near the level of the all-time greats).
It's amazing to find out that he had 232 starts in which he walked five or more batters; that he had only 27 starts (just 3%) where he didn't walk anyone. And it's very interesting to note that his
ERA in games where he walked five batters isn't all that different from his ERA in games where he walked only three.
In fact, it's downright weird to discover that Ryan's ERA in games where he walked five or more batters is lower than it is in games where he walks three or four batters.
And it's very interesting to note the divergences in Ryan's QERA and FIP through this sequence of breakouts.
Now, we know (even before it's pointed out to us by our super-modeling brethren...) that QERA and FIP aren't attempting to measure the same thing. But the predictive qualities that are claimed for
FIP (that its reliance only on the so-called "three true outcomes" to fashion a massaged model of ERA is a truer picture of future performance than anything else) run into a few thorny issues when we
look at how it handles cluster of starts where the pitcher has high walks and low hits.
And there's no better place to examine that discrepancy between the predicted and the actual than in the region of the QMAX chart (as you'll see in the many matrix breakouts that show the shape of
Ryan's start distributions by the number of walks/game) in the upper right corner.
That is what we've taken to call the "power precipice": the area where pitchers give up a good bit fewer hits than the league average per nine innings, and a good bit more walks than the league
average per nine innings. It is a range that is shockingly close to the level of success that pitchers achieve in the upper left corner of the QMAX chart, where they have similar success at hit
prevention and have lower than average walks per nine.
Nolan Ryan may be the king of the power precipice: we'd be extremely surprised if there is any pitcher (other than possibly Bob Feller) who has more starts in that region. His total of 184 "power
precipice" starts represents just under one-fourth of his career total (24%, to be exact). That number encompasses five seasons' worth of starts.
In those games, Ryan's won-loss record is 101-57. (OK, you don't like won-loss records.) His ERA is 1.87--and this is happening in games where he's walking an average of nearly six-and-a-half men per
nine innings! He's allowing just over four hits per nine innings (which works out to about three-and-a-half hits per actual start, since his starts in these games last about seven-and-a-half
FIP's assumption that the variability in hits on balls in play is low enough to simply ignore the extremes in performance creates a situation where the method predicts that Ryan's ERA will be nearly
90% higher than what it actually is in these games (referring back to the big chart above: 3.55 vs. 1.87).
QERA suggests that Ryan has gotten some breaks in terms of what that ERA ought to be as well, but it's nowhere near that divergent. This is because QERA, using the QMAX "S" and "C" values to
calibrate the relative importance of hit prevention and walk prevention, does not throw out ninety percent of the hits based on a modeling assumption or sixty percent of the outs that involve a
fielding play.
FIP makes an assumption about how baseball works and applies it monolithically to a model that suggests that the weighted average of the "true outcome" events is sufficient to characterize quality.
That is not without some value, but it's clear that certain combinations of those events produce serious discrepancies with the actual results in the games where those event combinations occur.
That doesn't completely invalidate it, but it points out that these mega-modeling methods are not nearly as robust or as granular as they have been claimed to be.
The chain of QMAX charts that have been running down the right side for awhile now give us a glimpse as to how the shape of performance is distributed across walks/game. Ryan proves a general rule
that has been jettisoned in the FIP concept: the more walks a team draws, the fewer hits they will make. (Of course, there are clearly exceptions in individual games; but the available data for this
is now vast and as you move rightward on the QMAX grid--even in those regions where the pitcher is being hit hard, in the 5, 6, 7 "S" areas--the hits/9 IP declines.
Ryan has one anomaly in his data: the 7BB/G group, where his H/9 rises. But the rest of the progression, once you get past the very small number of starts where he allows no walks at all, is linear.
The QMAX range summaries tell us a bit more in this regard. Note, for example, how consistent Ryan's "top hit prevention" (the S12 rows from left to right across the QMAX chart) are all the way
across the walks/game spectrum. The fact that he gets into that range--even if it's over on the right of the QMAX diagram (and above you can see the rightward drift as his walks/game rises)--is what
allows him to remain a successful pitcher even when he is having major control problems.
Note that Ryan is hit hardest when he walks three men in a start (24%). Note that his Power Precipice percentage jumps sharply in the 3-5 walks/game range--it increases nearly fivefold.
This is why Ryan's ERA in games with five or more walks per start is not dramatically different from his ERA in games where he walks three or fewer per game (3.29 to 3.03). FIP predicts that ERA to
be a lot farther apart (4.15 to 2.85).
Finally, here are the ERA values for each cell on Ryan's QMAX chart. (This is for his entire career, all 773 starts.) You see how there's a strong tendency for him to sustain success in the upper right corner, while he struggles more in the middling regions of the chart. He is clearly a below-average pitcher when he pitches in the outer reaches of the success square (the 3,3-3,4-4,2-4,3 area): pitchers with less "stuff" and more "control" show less of a sharp break there. Also, he's only intermittently successful in the "Tommy John" region (the one at lower left, where control pitchers manage to thrive despite giving up more hits than innings pitched). When he's giving up hits, he's really in trouble, even in those areas of the chart where most other pitchers manage to be effective.
The other sharp break here is between "2S" and "3S." Ryan fades as badly in the rightward movement across the "3S" and "4S" zones as anyone we've seen.
So what's clear from all this? Pitchers like Ryan, who are hard to hit and who have an intermittent kind of wildness, can be just about as successful in the outer reaches of wildness as they are in
their more conventionally "great" performances (the ones in the yellow four squares at top left, the region we call the "Elite Square"). Once Ryan moves into more conventional hit/game regions, he
becomes much less effective when his control deserts him--much more like a normal pitcher. One of these days we'll put the FIP values up for a chart like this--and that might turn out to be the best
way to spot-check the value of that highly-ballyhooed stat: we just might find that there's a pattern to what it predicts well in terms of actual results, and what it does not. Stay tuned. | {"url":"http://bigbadbaseball.blogspot.com/2013/03/nolan-ryan-power-of-power-precipice.html","timestamp":"2014-04-17T21:27:10Z","content_type":null,"content_length":"78454","record_id":"<urn:uuid:65b3ae52-6920-4006-9c59-3cf5c301585d>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Danny Tarlow and Lee-Ming Zen over at This Number Crunching Life have announced their annual March Madness prediction contest. To compete, you use data from this season and past seasons (which Danny & Lee will provide), build a computer system that fills out a bracket, then pit yourself against the field of silicon competition. The posts from last season's tournament can be found here. I personally know the winner from last year and the previous year, and I can only say that I have the utmost respect for their dedication, intelligence, and ruggedly handsome good looks.
Ken Pomeroy recently made a couple of blog postings concerning defense, and specifically a statistic he calls the "3 Point Attempt Percentage" (3PA%). He defines this statistic as the "percentage of field-goal attempts that are from three-point range," and he thinks it is a better measure of defense than 3PT%. His reasoning is that most teams only take 3 point shots when they are relatively unguarded; the effect of defense is not to make these shots harder, but to cut down on the number of opportunities. Hence the claim that it's really how many 3 pointers your opponent takes that reveals the quality of your 3PT defense. Near the end of the second posting he says:
People that are unaware of 3PA% (which is to say nearly everyone) are missing a very telling statistic that explains a lot of how defense works.
This is a strong statement and worthy of a little research to see whether it is true (at least so far as predicting outcomes is concerned).
3PA% is similar to Effective Field Goal Percentage, one of Dean Oliver's Four Factors. I have previously considered the Four Factors and concluded that they didn't add any predictive value to my
models, but 3PA% captures a slightly different slice of information.
When I recently looked at derived statistics, one of the derived statistics was pretty close to 3PA%:
(Ave. number of 3PT attempts by the opposing team) / (Ave. number of FG attempts by the opposing team)
This isn't quite the same statistic, because it is using game averages rather than cumulatives, but it is close. This statistic turned out to have no predictive value, but a couple of statistics
based upon 3PT attempts did have value:
(Ave. number of 3PT attempts by the opposing team) / (Ave. number of turnovers)
(Ave. number of 3PT attempts by the opposing team) / (Ave. number of rebounds)
Note that these statistics are relating the number of 3PT attempts by the opponent to a statistic for the defending team. I'm not entirely sure what these statistics are capturing, but I don't think it is 3PT defense. (The latter might be indirectly saying something about how a team defends against the three pointer, from how it is positioned to rebound effectively or not after a taken three-pointer.)
That aside, I modified my models to generate four new statistics: the 3PA% for the home team in previous games, the 3PA% for the away team in previous games, the 3PA% for the home team's opponents in
previous games, and the 3PA% for the away team's opponents in previous games. I then tested the model both with and without these statistics:
│ Model │ Error │ %Correct │
│ Base Statistical model │ 11.06 │ 72.7% │
│ Base Statistical model + 3PA% statistics │ 11.05 │ 72.7% │
There's a very small improvement in RMSE with the added 3PA% statistics. So at least for my model, the 3PA% statistics don't seem to add any significant new information.
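For concreteness, here is roughly how the four 3PA% inputs described above get built from season totals -- a sketch with made-up column names and numbers, not my actual pipeline:

    import pandas as pd

    # Hypothetical cumulative season totals per team (column names assumed).
    teams = pd.DataFrame({
        "team":    ["Duke", "Kansas"],
        "fga":     [1900, 1850],   # field-goal attempts
        "tpa":     [620, 540],     # three-point attempts
        "opp_fga": [1880, 1830],   # opponents' field-goal attempts
        "opp_tpa": [590, 610],     # opponents' three-point attempts
    })

    # 3PA% = share of field-goal attempts taken from three-point range.
    teams["tpa_pct"] = teams["tpa"] / teams["fga"]
    teams["opp_tpa_pct"] = teams["opp_tpa"] / teams["opp_fga"]

    # For a given game, the four model inputs are these two columns
    # for the home team and for the away team.
    print(teams[["team", "tpa_pct", "opp_tpa_pct"]])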
As I've mentioned earlier, I use my models to bet (in some theoretical sense) against the "the line". Typically I bet the games where my model differs significantly from the line (e.g., >4 points or
so). As I've documented here, I have a number of different models, all of which have around the same performance (~11 points RMSE).
In the past I've usually averaged the predictions of these models for betting purposes, but for some time I've wondered whether they all perform equally well against the line. Although they all have
similar errors, it's possible that some of the models error more consistently to the winning side of the line. To test this, I gathered three seasons worth of Vegas closing line data (about 7700
games) and tested each model for how often its predictions were correct versus the line. (The predictor is "correct" if it would make a winning bet given the line.) I also looked at each predictor's
error versus the line (i.e., how accurately it predicted the line).
│ Model │ Performance │ Error vs. Line │
│ │ vs. Line │ │
│ TrueSkill │ 49.89% │ 3.75 │
│ Govan │ 49.28% │ 3.49 │
│ BGD │ 49.58% │ 3.51 │
│ Base Statistical │ 50.12% │ 4.34 │
│ Statistical w/ Derived │ 50.15% │ 4.34 │
│ All │ 52.00% │ 3.49 │
│ All (Difference > 2) │ 53.15% │ │
The "All" model here is a linear predictor using all the inputs to TrueSkill, Govan, BGD and Statistics w/ Derived. (I also tested some voting models, but they all under-perform the Statistical/All
There are a couple of interesting results.
Most noticeably, the "All" predictor is at break-even versus the line. (Due to "house cut" on sports bets, you need to win about 52% of your bets to break even.) If we restrict ourselves to bets
where the predictor differs from the line by at least two points, performance moves into (barely) positive territory. This is very good performance; the best predictors tracked at The Prediction
Tracker do not even break 50%. (Furthermore, I am using the "closing" line, which is a tougher measure [by about one point] than the opening line used at the Prediction Tracker.)
It's also intriguing that TrueSkill/Govan/BGD all underperform the line but track it noticeably better than the statistical predictor. This suggests to me that the line is set not by wily veteran
gamblers in the smoky back rooms, but by a computer program using some kind of team strength measure.
A (possibly interesting) side-note: All models that under-perform the line are going to fall into the seemingly miniscule range of 48-52%. (If a model performs worse than 48% against the line, we
would simply bet against the model.) Pick any crazy model you like -- "Always bet the home team," "Always bet on the team whose trainer's name is first alphabetically," etc. -- and the performance is
almost certainly going to fall in that 48-52% range against the line. (If it doesn't, you've found the key to beating Vegas!)
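For anyone who wants to reproduce the "performance vs. the line" bookkeeping above, a minimal sketch follows; the field names are mine. A prediction counts as correct when it lands on the same side of the closing line as the actual margin, and pushes are skipped:

    def record_vs_line(predictions, lines, margins):
        """Fraction of games where betting the model against the closing
        line would have won.  All values are home-team margins; games that
        land exactly on the line (pushes) are skipped."""
        wins = played = 0
        for pred, line, actual in zip(predictions, lines, margins):
            if pred == line or actual == line:
                continue  # no bet, or a push
            played += 1
            # We bet 'over the line' when the model predicts a bigger home
            # margin than the spread; the bet wins when the actual margin
            # falls on that same side of the line.
            if (pred > line) == (actual > line):
                wins += 1
        return wins / played if played else 0.0

    print(record_vs_line([5, -2, 7], [3, 1, 7.5], [8, -4, 6]))  # -> 1.0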
As promised last time, we'll now look at a different type of derived statistic. We're going to look at statistics which are the ratio between the two teams of the same base statistic, e.g.,
(Ave # of offensive rebounds for the home team / Ave # of offensive rebounds for the away team)
The idea here is that it may be more predictive to look at the relative strengths of the teams rather than the absolute strengths.
The first statistics I want to try this upon are the strength measures like TrueSkill and RPI. Suppose that Syracuse, with an RPI of 0.6823, plays Missouri, with an RPI of 0.6234, and the same night
UCF, with an RPI of 0.5723 plays Oregon State with an RPI of 0.516. Would we expect the same outcome in those games? In both cases, the better team is about 0.06 better in RPI. But Syracuse is about
10% better than Missouri, while UCF is about 12% better than OSU. If it's the relative strength that matters, we would expect UCF to win (on average) by more than Syracuse.
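Generating this sort of relative-strength feature is mechanical -- a tiny sketch, with placeholder rating names and values:

    # Relative strength: ratio of the home team's rating to the away
    # team's rating, for each strength measure we track.
    ratings = {"rpi": (0.6823, 0.6234), "trueskill": (28.4, 25.1)}

    relative = {name: home / away for name, (home, away) in ratings.items()}
    print(relative)  # rpi ratio ~1.09 -> home team roughly 9-10% "better"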
To test this out, I generated the relative strengths for measures like TrueSkill and ran them through my testing setup. In every case, the relative strengths had no predictive value above and beyond
the value of the absolute strengths. And when the relative strengths alone were used for prediction, they underperformed the absolutes used alone.
I then did the same thing for the statistical attributes like offensive rebounding and got the same result. The relative strengths of the two teams provided no additional predictive accuracy.
I find this result fairly intriguing. My strong intuition was that at least a portion of the game outcome would be better explained by the relative strengths of the two teams. It's hard to believe
that Syracuse should win its game against Missouri by more points simply because they're both stronger teams than UCF and OSU. But (as has often proven to be the case!) my intuition was just wrong,
and relative strength is much less important than I would guess.
Continuing on from last time, I had set up the infrastructure to allow me to easily test the value of derived variables in statistical prediction. Before testing any of these derived variables, we
need a baseline. In this case, the baseline is the performance of a linear regression using all the base variables. I don't know that I've ever documented the base variables, but they are basically
all that can be created from the full game statistics available at Yahoo! Sports. These are averaged by game, so for example one of the base statistics is "Average free throw attempts per game." I
also have the capability to average statistics by possession (e.g., "Average free throw attempts per possession"), but unlike some other researchers, I've never found per-possession averages to be any more useful than per-game averages, so I generally don't produce them.
For most statistics, I also produce the average for the team's opponents. So to continue the example above, I produce "Average free throws per game for this team's opponents." I also produce a small
number of simple derived statistics, such as "Average Margin of Victory (MOV)", and winning percentages at home and on the road.
When we get to predicting game outcomes, of course we have all of these statistics for both the home and the away team. (And that home/road distinction is important, obviously.) If we use all these
base statistics to create a linear regression, we get the following performance:
│ Predictor │ % Correct │ MOV Error │
│ Base Statistical Predictor │ 72.3% │ 11.10 │
This is the same performance I have reported earlier, and tracks fairly well with the best performance from the predictors based upon strength ratings.
Now we want to augment that predictor with derived statistics to see if they offer any performance improvement. As mentioned last time, we have 1200 derived statistics, so we have to do some feature
selection to thin that crop for testing.
One possibility (as discussed here) is to build a decision tree, and use the features identified in the tree. If we do that (and force the tree to be small), we identify these derived features as important:
1. The home team's average margin of victory per possession over the overall winning percentage
2. The away team's average number of field goals made by opponents over average score
3. The home team's average assists by opponents over the field goals made
4. The home teams average MOV per game over the home winning percentage
That is, you'd have to admit, quite a goulash of statistics. I can probably come up with some rationale about some of those, but I won't bother. All I really care about is whether they will improve
my predictive accuracy.
To test that, I add those statistics to my base statistics and re-run the linear regression. In this case, what I find is that while some of the derived statistics are identified as having high value
by the linear regression, the overall performance does not improve.
There are other methods for feature selection, of course. RapidMiner has an extension focused solely on feature selection. This offers a variety of approaches, including selecting based on Maximum
Relevance, Correlation-Based Feature Selection, and Recursive Conditional Correlation Weighting. All of these methods identified "important" derived statistics, but none produced a set of features
that out-performed the base set.
A final approach is a brute force approach called forward search. In this approach, we start with the base set of statistics, add each of the derived statistics in turn, and test each combination. If
any of those combinations improve on the base set, we pick the best combination and repeat the process. We continue this way until we can find no further improvement.
There are a couple of advantages to this approach. First, there's no guessing about what features will be useful -- instead we're actually running a full test every time and determining whether a
feature is useful or not. Second, we're testing all combinations in our search space, so we know we'll find the best combination. The caveat here is that we assume that improvement is monotonic with
regards to adding features. If the best feature set is "A, B, C" then we're assuming we can find that by adding A first (because it offers the most improvement at the first step), then B to that, and
so on. That isn't always true, but in this case it seems a reasonable assumption.
The big drawback of this approach is that it is very expensive. We have to try lots of combinations of features, and we have to run a full test for each combination. In this case, the forward search took about 54 hours to complete -- and since I had to run it several times because of errors or tweaks to the process, it ended up taking about a solid week of computer time.
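For the record, the forward-search loop itself is only a few lines -- the expense is entirely inside the evaluation step. A sketch, where evaluate stands in for a full train-and-test run you would supply yourself:

    def forward_search(base_features, candidates, evaluate):
        """Greedy forward selection.  `evaluate(features)` must return an
        error score (lower is better) from a full train/test run."""
        selected = list(base_features)
        best_err = evaluate(selected)
        remaining = set(candidates)
        while remaining:
            trials = {f: evaluate(selected + [f]) for f in remaining}
            feat, err = min(trials.items(), key=lambda kv: kv[1])
            if err >= best_err:
                break                  # no remaining candidate helps
            selected.append(feat)
            remaining.remove(feat)
            best_err = err
        return selected, best_err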
In the end, the forward search identified ten derived features, with this performance:
│ Predictor │ % Correct │ MOV Error │
│ Base Statistical Predictor │ 72.3% │ 11.10 │
│ w/ Forward Search Features │ 74.0% │ 10.73 │
This is a fairly significant improvement. The most important derived features in the resulting model were:
1. The away team's opponent scoring average over the away team's winning percentage.
2. The away team's offensive rebounding average over the away team's # of field goals attempted
3. The away team's scoring average over the away team's winning percentage
4. The away team's opponent treys attempted over the away team's rebounds
The ten statistics were actually evenly divided between home team statistics and away team statistics, but it turned out that the most significant five were all the away team statistics.
I'll leave it to the reader to contemplate the meaning of these statistics, but there are some interesting suggestions here. The first and third statistics seem to be saying something about whether
the away team is winning games through defense or offense. The second and fourth statistics seem to be saying something about rebounding efficiency, and perhaps about whether the team is good at
getting "long" rebounds. (The statistics for the home team are completely different, by the way.)
Next time I'll begin looking at a different set of derived statistics. | {"url":"http://netprophetblog.blogspot.ca/2012_02_01_archive.html","timestamp":"2014-04-19T11:57:48Z","content_type":null,"content_length":"99249","record_id":"<urn:uuid:e9347eac-583e-4914-9b6d-b814d10db6f7>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why does a universe with a spherical geometry have to be finite?
Well, how about one? Would you like to make some point?
I was talking about the 2D surface of a 3D ball, something with a definite geometry.
If you want to talk about some other geometric object, some different surface, you could define it.
I was just thinking how a 2D surface of a 3D sphere could be infinite, so I thought fractals... infinite recursions of a surface texture, but that's just mathematical; in the real world you would reach a finite limit. Of course, if you are talking about a pure geometric sphere, then it would have no surface texture and would be finite.
One thing I kind of take issue with, and it's probably due to a lack of understanding on my part: it has been said that in positively curved space there can be no parallel lines, that they would eventually intersect. Yet I can draw parallel, non-intersecting lines all the way around a globe.
Actually I understand the concept, I'm just being nitpicky. I think the globe and poles analogy is useful only to a point, like the balloon analogy.
But I like a previous poster's comment about a circle or sphere with infinite diameter. That would be as easy to imagine as an infinite 2D plane.
'edit'... another quick thought: since the circumference of a circle is calculated based on pi, and since pi has an infinite decimal expansion, is the circumference of a circle really finite?
Hope all that's not too off topic. | {"url":"http://www.physicsforums.com/showthread.php?t=293157","timestamp":"2014-04-20T08:41:29Z","content_type":null,"content_length":"67340","record_id":"<urn:uuid:7696229f-c059-4d9d-a6c8-df7fdefc3b79>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem with array manipulation
I'm trying to solve a cubic equation. A cubic equation has three solutions for x and I want the values of x to be stored in an array int_array. The problem that I'm having is that I'm getting double
results for each x. I want to access the first value of x by writing the following: int_array[0], the second value by int_array[1] and the third value by int_array[2]. The other problem is that if I
try to access int_array[1], I get [3,0,0], the same result that I get when I try to access int_array[0] and int_array[2].
Here is the code that I wrote to solve the problem:
public class Equation {
    public void doCalculations(int a, int b, int c, int d){
        int j;
        for (int i=-10; i<=10; i++){
            if((a*(i*i*i) + b*(i*i) + c*(i) + d) == 0){
                int int_array[] = new int[3];
                for(int k=0; k<int_array.length; k++)
                    int_array[k] = i; // stores the same root in all three slots
                System.out.println("The value of X1 is :" + int_array[0]);
                System.out.println("The value of X2 is :" + int_array[1]);
                System.out.println("The value of X3 is :" + int_array[2]);
            }
        }
    }

    public static void main(String[] args){
        Equation equation = new Equation();
        equation.doCalculations(1, -6, 11, -6);
    }
}
Here are sample results that I get after running the program:
The value of X1 is :1
The value of X2 is :1
The value of X3 is :1
The value of X1 is :2
The value of X2 is :2
The value of X3 is :2
The value of X1 is :3
The value of X2 is :3
The value of X3 is :3
May anyone out there help me, I'm stuck and I dont know where to begin now. I also accept direct postings to
. Thank you in advance.
Please use code blocks when posting code.
What do you think the output should be and why (i.e. what do you think you are displaying)?
What are you trying to do with this code? :
for(int k=0;k<int_array.length;k++)
You seem to be recreating the int array every time the result of the equation is zero, then assigning the same value to each of the three locations. So probably what you want to do is to create the
array outside the loop and set j=0, then inside the loop, when the result is zero, assign i to int_array[j++], and maybe use a safety cutout in case j exceeds 2 because due to integer arithmetic that
could well happen (why aren't you using double variables?) | {"url":"http://www.go4expert.com/forums/array-manipulation-t16768/","timestamp":"2014-04-20T21:01:13Z","content_type":null,"content_length":"32961","record_id":"<urn:uuid:e54972e0-1544-47e5-b6e1-261aacc768e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radhika Ganapathy
Office: LSK 126D
Email: rganapat(AT)math(DOT)ubc(DOT)ca
Winter 2013, Term 2: MATH 101 Section 209, Integral Calculus with Applications to Physical Sciences and Engineering
Class time and location: TTh: 9:30 - 11 am in MATH 100
Office hours: TTh - 11 am to noon in LSK 303D
Course Information:
Here is the link to the MATH 101 Common Course Website. It is the main resource for information on MATH 101 course policies, exams, grading scheme etc.
Here is the course outline.
Grading Scheme:
Your grade normally will be computed based on the following formula: 50% Final Exam + 30% 1 Midterm + 10% WebWork Assignments + 10% Section specific Homework, Quizzes, and other coursework assigned
during the lectures. The Midterm and Final Exam will be common to all sections of MATH 101. Note that a student must score at least 40% on the final exam to pass the course, regardless of the grade
computed by the normal calculation.
Midterm: There will be one common midterm in MATH 101. The date, which is subject to change, is Tuesday, February 25th, scheduled for a set time period between 6 p.m. and 8 p.m. | {"url":"http://www.math.ubc.ca/~rganapat/","timestamp":"2014-04-20T11:04:30Z","content_type":null,"content_length":"2195","record_id":"<urn:uuid:3032033f-7a70-4e2c-a0f6-7fb296515e82>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |
Soil Behavior from Analysis of Tests of Uninstrumented Piles under Lateral Loading (STP444)
Reese, L. C. -- Professor and chairman, Department of Civil Engineering; personal member, ASTM; The University of Texas, Austin, Tex.
Cox, W. R. -- Associate professor of Civil Engineering; personal member, ASTM; The University of Texas, Austin, Tex.
Pages: 17 Published: Jan 1969
The most up-to-date method for the design of laterally loaded piles is to solve numerically the differential equation describing pile behavior. Iterative solutions are necessary since there is a
nonlinear relationship between soil resistance and pile deflection. Curves giving soil resistance as a function of pile deflection, called p-y curves, have been the subject of research for a number
of years. The development of p-y curves normally requires that a test be performed on an instrumented laterally loaded pile. A curve showing bending moment in the pile needs to be obtained for each
of the applied loads. This curve can be differentiated twice to obtain soil resistance, and it can be integrated twice to obtain pile deflection. Cross plots of these values can be made at desired
depths to obtain the p-y curves. This paper shows that nondimensional curves, developed from the numerical solutions of the differential equation, can be used to estimate p-y curves if only the
following easily obtainable information is reported: pile properties, magnitude of the individual lateral loads, point of load application, deflection of the top of the pile, slope of the top of the
pile, and condition of restraint (if any) at the top of the pile. Thus, there needs to be no instrumentation of the pile except above ground. The procedure is illustrated by applying it to a test
reported in the literature.
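The double differentiation and integration of the moment curve described above is straightforward to carry out numerically today; the following sketch is illustrative only -- the finite-difference scheme, boundary assumptions, and sample moment profile are mine, not the authors' data:

    import numpy as np

    z = np.linspace(0.0, 10.0, 101)            # depth along the pile (m)
    M = 50.0 * np.exp(-0.5 * z) * np.sin(z)    # made-up bending moments (kN*m)
    EI = 1.0e5                                  # pile flexural rigidity (kN*m^2)

    # Soil resistance: p = d^2 M / dz^2 (differentiate the moment curve twice).
    p = np.gradient(np.gradient(M, z), z)

    # Deflection: integrate M/EI twice (integration constants are set to zero
    # here; in practice they come from the measured slope and deflection at
    # the top of the pile, which is why only above-ground instrumentation
    # is needed).
    slope = np.concatenate(([0.0], np.cumsum((M[1:] + M[:-1]) / 2 * np.diff(z)))) / EI
    y = np.concatenate(([0.0], np.cumsum((slope[1:] + slope[:-1]) / 2 * np.diff(z))))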
piles, static loads, dynamic loads, pile tests, soil mechanics, instrumentation, soil modulus, elastic theory, evaluation, tests
Paper ID: STP47285S
Committee/Subcommittee: D18.11
DOI: 10.1520/STP47285S | {"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP47285S.htm","timestamp":"2014-04-17T12:38:57Z","content_type":null,"content_length":"13414","record_id":"<urn:uuid:718b19b5-1518-4e90-811c-2deb28cc7e52>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
[BioC] coerce matrix to aafTable
James W. MacDonald jmacdon at med.umich.edu
Thu Nov 29 15:01:04 CET 2007
Hi Georg,
Georg Otto wrote:
> Hi,
> I have an R/bioconductor problem, that seemed to be quite simple to me
> at the first glance, but turned out to be tricky, either because I do
> not have reached the necessary level of R-savvyness yet, or because I
> miss something obvious.
> Here is the problem: I want to transform a matrix of expression
> values (where rows are genes and columns are hybridizations) into an
> aafTable (from package annaffy). The probe IDs should be taken from
> the row names of the matrix and the column names should be taken from
> the column names of the matrix. No problem to do this by hand:
> table<-aafTable(probeids=rownames(matrix),
> "colname1"=matrix[,1],
> "colname2"=matrix[,2],
> "colname3"=matrix[,3],
> "colname4"=matrix[,4])
table <- aafTable(items = as.data.frame(matrix))
This can be rather computationally expensive if the matrices are large,
but it is probably the simplest solution.
> I would like to do that programatically for many different matrices
> which differ both in the number of columns and in the column names,
> maybe by using a small function. However I was not successful to
> contrive such a thing. Maybe somebody out there could give me a hint?
> Best,
> Georg
James W. MacDonald, M.S.
Affymetrix and cDNA Microarray Core
University of Michigan Cancer Center
1500 E. Medical Center Drive
7410 CCGC
Ann Arbor MI 48109
How to plot and generate sine wave
December 28th 2012, 11:13 PM
How to plot sine wave and generate table
Hi All,
Could someone tell me how to plot the graph and generate the table for y = 9 sin(θ - 15) over 360 degrees?
The coordinates need to be at 15 degree intervals.
December 28th 2012, 11:31 PM
Re: How to plot and generate sine wave
Hey bungouk.
Are you aware of the taylor series for the sine function (in radians)? Do you know how to convert radians to degrees (or degrees to radians)?
December 28th 2012, 11:34 PM
Re: How to plot and generate sine wave
i'm not aware of the taylor series, and yes, i do know how to convert degrees to radians and radians to degrees
December 28th 2012, 11:49 PM
Re: How to plot and generate sine wave
Here is more information on the Taylor series:
Trigonometric functions - Wikipedia, the free encyclopedia
December 29th 2012, 12:22 AM
Re: How to plot and generate sine wave
Are you not allowed to use a scientific calculator, or even to look them up in tables ??????
December 29th 2012, 12:30 AM
Re: How to plot and generate sine wave
yes i am allowed to as long as i show it on graph and a table
December 29th 2012, 02:37 AM
Prove It
Re: How to plot sine wave and generate table
Don't bother using Taylor Series for this problem. You are expected to know the exact values of the sine of multiples of 30 degrees, and the others can be found using a half angle identity.
December 29th 2012, 07:41 AM
Re: How to plot sine wave and generate table
does the 9 represent the amplitude? what do i do with the (θ - 15) part?
December 29th 2012, 08:02 AM
Re: How to plot and generate sine wave
December 29th 2012, 08:09 AM
Re: How to plot and generate sine wave
So, what, exactly is the question? If it is, as you initially said, to "graph and generate the table" with "15 degree intervals", then you just need to use your calculator (make sure it is set to "degree" measure, not radian) to take $\theta= 0, 15, 30, 45, ... 345, 360$ so that $\theta- 15= -15, 0, 15, 30, ... 330, 345$ and so find 9 sin(-15), 9 sin(0), 9 sin(15), 9 sin(30), 9 sin(45), ..., 9 sin(330), 9 sin(345) and plot those points.
If, however, the problem is to use the "standard" graph, y= sin(x), and alter it to give y= 9 sin(x- 15), then, yes, "9" is the amplitude: sin(x) is always between -1 and 1 so that, no matter what $\theta$ is, 9 sin(x- 15) is always between -9 and 9. One thing you could do is draw horizontal lines at y= -9 and y= 9 as guides. You know, I presume, that sin(x) is periodic with period 360 degrees, that it is 0 when x= 0, goes up to 1 when x= 90, down to 0 at x= 180, down to -1 at x= 270, back up to 0 at x= 360 and then repeats. Okay, when $\theta- 15= 0$, that is, when $\theta= 15$, $9\sin(\theta- 15)$ is 0. When $\theta- 15= 90$, that is, when $\theta= 105$, $9\sin(\theta- 15)$ is 9; when $\theta- 15= 180$, that is, when $\theta= 195$, $9\sin(\theta- 15)$ is 0 again, and so on.
December 29th 2012, 08:46 AM
Re: How to plot and generate sine wave
Plot on a single sheet of graph paper the two following functions over 360°. Ensure both your axes are clearly labelled and that the graph is appropriately titled. Produce tables to generate all of the coordinates for both functions at approximately 15° intervals.
a) Y = 9 sin (θ – 15)
b) Y = 5 cos (θ + 10)
please see full question above
December 29th 2012, 09:13 AM
Re: How to plot and generate sine wave
clear enough ... follow my advice in post #9
December 29th 2012, 11:11 AM
Re: How to plot and generate sine wave
taking the sine of θ - 15, is what's known as a "phase shift".
since you are going to find sine in 15 degree intervals anyway: do this:
sin(0°) = sin((15-15)°) <---this is your value for θ = 15°
sin(15°) = sin((30-15)°) <--this is your value for θ = 30°
remember to multiply everything by 9 when you're done. your answers will be numbers between -9 and 9 (if you get something of bigger absolute value than this, you've made a mistake, somewhere).
some points are "easy" to find (all angles are in degrees):
9sin(45-15) = 9sin(30) = 9/2 = 4.5
9sin(60-15) = 9sin(45) = 9√2/2 ~ 6.364
9sin(75-15) = 9sin(60) = 9√3/2 ~ 7.794
9sin(105-15) = 9sin(90) = 9
and so on. the "hard ones" to do are for θ = 30 and θ = 90 (which require finding sin(15) and sin(75)). after that, there is a pattern:
A,B,C,D,E,F,G (for the first 7 15-degree increments, starting with θ = 15)
F,E,D,C,B,A (for the next 6)
-B,-C,-D,-E,-F,-G (for the next 6)
-F,-E,-D,-C,-B (the next one would be A = 9sin(375-15) = 9sin(360) = 9sin(0) = 9sin(15-15), you've come "full circle").
December 30th 2012, 10:44 PM
Re: How to plot sine wave and generate table
Just draw the curve for the sine function.
Now you have to make adjustments as explained:
For the function f(x) = A sin(Bx + C) + D,
A gives the amplitude, B controls the period, C the horizontal shift and D the vertical shift. I am sure this will help you.
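For completeness, a short script that generates both coordinate tables and the plot asked for in the original question (assuming a standard Python setup with matplotlib; note the degree-to-radian conversion before calling the trig functions):

    import math
    import matplotlib.pyplot as plt

    thetas = range(0, 361, 15)                      # 15-degree intervals
    y1 = [9 * math.sin(math.radians(t - 15)) for t in thetas]
    y2 = [5 * math.cos(math.radians(t + 10)) for t in thetas]

    for t, a, b in zip(thetas, y1, y2):             # the coordinate table
        print(f"{t:3d}  {a:7.3f}  {b:7.3f}")

    plt.plot(thetas, y1, label="y = 9 sin(θ − 15°)")
    plt.plot(thetas, y2, label="y = 5 cos(θ + 10°)")
    plt.xlabel("θ (degrees)")
    plt.ylabel("y")
    plt.legend()
    plt.title("Two sinusoids at 15° intervals")
    plt.show()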
[Numpy-discussion] dot function or dot notation, matrices, arrays?
Dag Sverre Seljebotn dagss@student.matnat.uio...
Mon Dec 21 15:31:04 CST 2009
Christopher Barker wrote:
> Dag Sverre Seljebotn wrote:
>> I recently got motivated to get better linear algebra for Python;
> wonderful!
>> To me that seems like the ideal way to split up code -- let NumPy/SciPy
>> deal with the array-oriented world and Sage the closer-to-mathematics
>> notation.
> well, maybe -- but there is a lot of call for pure-computational linear
> algebra. I do hope you'll consider building the computational portion of
> it in a way that might be included in numpy or scipy by itself in the
> future.
> Have you read this lengthy thread?
> and these summary wikipages:
> http://scipy.org/NewMatrixSpec
> http://www.scipy.org/MatrixIndexing
> Though it sounds a bit like you are going your own way with it anyway.
Yes, I'm going my own way with it -- the SciPy matrix discussion tends
to focus on cosmetics IMO, and I just tend to fundamentally disagree
with the direction these discussions take on the SciPy/NumPy lists.
What I'm after is not just some cosmetics for avoiding a call to dot.
I'm after something which will allow me to structure my programs better
-- something which e.g. allows my sampling routines to not care (by
default, rather than as a workaround) about whether the specified
covariance matrix is sparse or dense when trying to Cholesky decompose
it, or something which allows one to set the best iterative solver to
use for a given matrix at an outer level in the program, but do the
actual solving somewhere else, without all the boilerplate and all the
variable passing and callbacks.
Dag Sverre
Blue Bell Prealgebra Tutor
...I am also completing my certification in Orton-Gillingham, a program designed specifically for teaching children with dyslexia how to read, through Fairleigh Dickinson University. This program
is also effective for children with ADD/ADHD as it uses multi-sensory techniques and uses short session...
20 Subjects: including prealgebra, reading, geometry, algebra 1
...While I teach English, I love math too! Breaking down math problems in the SAT is still a type of critical reading. I placed out of math in college, and took through Calc AP in high school.
17 Subjects: including prealgebra, reading, writing, English
...I hold an advanced degree in Education from the University of Pennsylvania. As a tutor, I work to get to know each of my students and their families. Understanding the different needs and
learning styles of each student is important to developing a comprehensive tutoring plan.
12 Subjects: including prealgebra, reading, algebra 1, special needs
...I am stern but caring, serious but fun, and nurturing but have high expectations of all of my students. Together as a team, you and I can help your child to do his or her best. I look forward
to working with you and your child!I am a certified and current teacher in the public schools.
12 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...The City has put me through advanced Excel classes and I have continued to learn since. In order to scientifically look through data I have to refine ways to improve my skills and get things
done more efficiently. I am always trying to improve.
16 Subjects: including prealgebra, physics, geometry, biology | {"url":"http://www.purplemath.com/Blue_Bell_Prealgebra_tutors.php","timestamp":"2014-04-17T16:08:56Z","content_type":null,"content_length":"23931","record_id":"<urn:uuid:a656fa89-36a6-4c65-83ca-8f91059a3f53>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quarter Wit, Quarter Wisdom: Varying Jointly
Now that we have discussed direct and inverse variation, joint variation will be quite intuitive. We use joint variation when a variable varies with (is proportional to) two or more variables.
Say, x varies directly with y and inversely with z. If y doubles and z becomes half, what happens to x?
“x varies directly with y” implies x/y = k (keeping z constant)
If y doubles, x doubles too.
“x varies inversely with z” implies xz = k (now keeping y constant)
If z becomes half, x doubles.
So the overall effect is that x becomes four times of its initial value.
The joint variation expression in this case will be xz/y = k. Notice that when z is constant, x/y = k and when y is constant, xz = k; hence both conditions are being met. Once you get the expression,
it’s very simple to solve for any given conditions.
x1*z1/y1 = x2*z2/y2 = k (In any two instances, xz/y must remain the same)
x1*z1/y1 = x2*(z1/2)/(2*y1)
x2 = 4*x1
Let’s look at some more examples. How will you write the joint variation expression in the following cases?
1. x varies directly with y and directly with z.
2. x varies directly with y and y varies inversely with z.
3. x varies inversely with y^2 and inversely with z^3.
4. x varies directly with y^2 and y varies directly with z.
5. x varies directly with y^2, y varies inversely with z and z varies directly with p^3.
Solution: Note that the expression has to satisfy all the conditions.
1. x varies directly with y and directly with z.
x/y = k
x/z = k
Joint variation: x/yz = k
2. x varies directly with y and y varies inversely with z.
x/y = k
yz = k
Joint variation: x/yz = k
3. x varies inversely with y^2 and inversely with z^3.
x*y^2 = k
x*z^3 = k
Joint variation: x*y^2*z^3 = k
4. x varies directly with y^2 and y varies directly with z.
x/y^2 = k
y/z = k which implies that y^2/z^2 = k
Joint variation: x*z^2/y^2 = k
5. x varies directly with y^2, y varies inversely with z and z varies directly with p^3.
x/y^2 = k
yz = k which implies y^2*z^2 = k
z/p^3 = k which implies z^2/p^6 = k
Joint variation: (x*p^6)/(y^2*z^2) = k
Let’s take a GMAT prep question now to see these concepts in action:
Question 1: The rate of a certain chemical reaction is directly proportional to the square of the concentration of chemical M present and inversely proportional to the concentration of chemical N
present. If the concentration of chemical N is increased by 100 percent, which of the following is closest to the percent change in the concentration of chemical M required to keep the reaction rate
(A) 100% decrease
(B) 50% decrease
(C) 40% decrease
(D) 40% increase
(E) 50% increase
Rate/M^2 = k
Rate*N = k
Rate*N/M^2 = k
If Rate has to remain constant, N/M^2 must remain the same too.
If N is doubled, M^2 must be doubled too, i.e. M must become √2 times its original value. Since √2 = 1.4 (approximately),
M must increase by 40%.
Answer (D)
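A quick numerical sanity check of that answer (the numbers below are arbitrary; only the ratio matters): hold Rate*N/M^2 constant, double N, and see what factor M needs.

    # Rate = k * M**2 / N.  Double N and solve for the new M keeping Rate fixed.
    k, M, N = 1.0, 10.0, 4.0
    rate = k * M**2 / N

    N2 = 2 * N
    M2 = (rate * N2 / k) ** 0.5   # M must grow by sqrt(2)
    print(M2 / M)                  # 1.414... -> roughly a 40% increase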
Simple enough? | {"url":"http://www.veritasprep.com/blog/2013/02/quarter-wit-quarter-wisdom-varying-jointly/","timestamp":"2014-04-17T18:26:37Z","content_type":null,"content_length":"47199","record_id":"<urn:uuid:3c630cbd-559f-4aef-a26a-2d0c3d90d70f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to calculate the steel for an RCC slab? Please tell me the formula for calculating the steel.
Question submitted by: Kaleem
Answer # 1 (Mahendra Gaikwad): For tender purposes, just take the volume of RCC concrete x 110 = steel in RCC.
Answer # 2 (Md Gulam Moinuddin): There is no general formula for the calculation of steel (reinforcement); it depends upon the design of the structure, for which the loads and moments acting on the structure are worked out.
Answer # 3 (Shashwati Nag): Ast = (0.5*fck/fy) * (1 - sqrt(1 - 4.6*Mu/(fck*b*d^2))) * b*d
Answer # 4 (K. Srinivasa Rao): For calculating the area of steel in RCC slabs,
Ast = M / (0.87*fy*(d - 0.42*xu))
where
M = moment corresponding to the entire load on a one-metre length of span
fy = characteristic strength of steel
d = depth of slab
xu = lever-arm depth, depending upon the grade of fy, as given in IS:456-2000
Answer # 5 (Hemant Gor): Two methods: first from calculation, second using SP16.
1st method: compute K = Mu/(fcu * b * d^2).
For fy = 415, if K <= 0.138 the section is singly reinforced.
Calculate z/d = 0.5 + sqrt(0.25 - K/0.865).
Area of steel = Mu / (0.87 * fy * z), where z is the lever arm -- the distance between the centroids of the tension and compression forces.
2nd method: calculate Mu/(b * d^2) and check Tables 2 to 5 of SP16, based on the concrete grade. These give the percentage of steel Pt for the given grade of concrete, grade of steel, and Mu/(b*d^2).
Answer # 6 (Devendran Chandrasekar): Using the code books -- IS 456-2000 and SP16 -- is one method. Another method is to design the RCC slab by the limit state method; you can refer to any author. Check whether it is a one-way or two-way slab, establish your live load, dead load, seismic condition and factor of safety, then go to the tables or work it out manually -- both will give the same result, or it may be slightly different. Don't ask basic questions.
Answer # 7 (Shafiullah): Ast = M / (0.87*fy*(d - 0.42*xu))
Answer # 8 (Devendran Chandrasekar): It depends on the type of loading on the slab. Normally we use the ly/lx ratio to find out whether it is a one-way or two-way slab. Depending on the loading condition, you can use the IS 456-2000 code book to find the reinforcement, which depends on the moment acting on the slab. If it is a two-way slab, provide torsion reinforcement at all four corners. Better still, study the design of concrete structures -- for slabs, the calculation is worked through in some model examples -- but field conditions differ, so follow the structural designer. You cannot directly ask "what is the formula for designing the slab?" -- you have to read more books, then you will know.
Saif Bin Darwish, Abu Dhabi
Answer # 9 (Vinesh): Ast = (0.5*fck/fy) * (1 - sqrt(1 - 4.6*Mu/(fck*b*d^2))) * b*d
where fck = grade of concrete (e.g. M25, M20) and fy = grade of steel (e.g. Fe415, Fe250).
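To make the limit-state formula quoted in the answers above concrete, here is a small numeric sketch; the input values are illustrative only (units: N and mm, with Mu in N*mm):

    import math

    def ast_required(mu, b, d, fck, fy):
        """Area of tension steel (mm^2) per the IS 456 limit-state formula:
        Ast = (0.5*fck/fy) * (1 - sqrt(1 - 4.6*Mu/(fck*b*d^2))) * b*d"""
        ratio = 4.6 * mu / (fck * b * d * d)
        if ratio > 1:
            raise ValueError("section too small for this moment")
        return (0.5 * fck / fy) * (1 - math.sqrt(1 - ratio)) * b * d

    # Illustrative: 1 m wide slab strip, d = 125 mm, M25 concrete,
    # Fe415 steel, Mu = 20 kN*m = 20e6 N*mm  ->  roughly 473 mm^2.
    print(round(ast_required(20e6, 1000, 125, 25, 415)))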
Answer # 10 (Aanand Babasaheb Chougule): Design steps for a one-way slab:
1. Depth of slab
d = lx/20 * M.F. (M.F. = modification factor)
Assume the dia. of bar, phi (8, 10 or 12 mm), and fix the overall depth:
D = d + 15 + phi/2
d = D - 15 - phi/2
2. Effective span
This is taken as the lesser of the following:
a) centre-to-centre distance between supports: L = lx + t
b) clear span plus effective depth: L = lx + d
3. Load
Consider a 1 m width of the slab and find the total UDL:
a) w = (self weight + live load + floor finish)
b) find the factored load: Wu = 1.5 * w
4. Factored bending moment
Mu = Wu * L^2 / 8
5. Equating Mu,limit to Mu, find the depth required. If it is less than d, then O.K.; otherwise increase D and repeat steps 2, 3 and 4.
6. Area of main steel
Pt = 50 * (fck/fy) * (1 - sqrt(1 - 4.6*Mu/(fck*b*d^2)))
Ast = (Pt/100) * b * d (for a 1 m strip, b = 1000 mm)
7. Spacing of main steel
S = (area of one bar * 1000) / Ast
Check: a) 3d, b) 450 mm
8. Distribution steel
Astd = 0.15% of Ag for Fe 250 (mild steel)
     = 0.12% of Ag for Fe 415 / Fe 500 (tor steel)
Ag = b * D
9. Spacing of distribution steel
S = (area of one bar * 1000) / Astd
Check: a) 5d, b) 450 mm
10. Check for shear
a) tau_v = V/(b*d)
b) from Pt, read off tau_c
c) if tau_v < tau_c, no shear reinforcement is required.
Online resources for calculus & graphing
Tutorials for the Calculus Phobe
Animated little flash movies about limits, continuity and derivative.
Graphics for the Calculus Classroom
Animations illustrating basic calculus concepts.
Visual Calculus
A collection of modules from pre-calculus to sequences and series that uses lots of numerical and graphical illustrations, Flash tutorials, animations, etc. Some of the discussion is on the college level.
A collection of clear and simple video tutorials for precalculus, limits & continuity, derivatives & their applications, integrals & their applications, sequences, series, polar & parametric
functions, vectors, and differential equations.
Calculus for Beginners and Artists
By Daniel Kleitman. Includes 20 chapters of text, Java applets, and some Flash dialogs all completely free and online.
Online Calculus Course Video Tutorials
A collection of free videos for the calculus student.
Calculus by Gilbert Strang
A free download of a regular calculus book for undergraduates. It is well organized, covers single variable and multivariable calculus in depth, and is rich with applications.
Calculus video tutorials from Midnight Tutor
Videos of calculus problems and their solutions. You can also send in your problem and they will post a video of its solution.
Calculus video course
Free videos from the University of Houston, covering a complete college calculus course.
MathTV
Over 6,000 free, online video lessons for basic math, algebra, trigonometry, and calculus. Videos also available in Spanish. Also includes online textbooks. I've written a review of MathTV lessons
when they used to be offered on CDs.
BrightStorm Math
Over 2,000 free videos covering all high school math topics from algebra to calculus. Registration required (free).
Notes on first-year Calculus
Free college level calculus text in PDF form. Good for refreshment or for concise notes but not for first time learners.
Calculus in Context
An introductory calculus textbook you can download for free (PDF). It is based on studying calculus as it is used in contemporary science, so that mathematical ideas and techniques grow out of
scientific questions.
Calculus Made Easy
A free download (PDF) of an old textbook, acclaimed to be a lively introduction to calculus, with clarity and simplicity.
Derivative Calculator and Integral Calculator
Online symbolic calculators for derivatives and integrals. Both tools are designed for intuitive user interaction. While you type in your expression, it is transformed into a graphical formula in
real-time and shown to you, which helps reducing input mistakes. The calculators do not show step-by-step differentiation or integration (not intended for cheating) but work great for checking your
homework, or finding the derivative or integral for some other usage.
Visualizing an Infinite Series
An on-line tutorial illustrating geometrically how an infinite series can have a finite sum.
Integral Definida
A tutorial with a series of Java applets about the definite integral. In Spanish.
A math ebook by Dan Umbarger explaining logarithm how's, why's, and wherefore's in all detail for students.
An Easy Way to Remember How Logarithms Work
This is a visual mnemonic to help remember what goes where in the logarithmic equation.
Contains video animations on calculus topics that are most useful for college and high school math instructors to reinforce or clarify what is explained in the class/lecture.
Calculus by Ron Larson
Larson's Calculus is a basic textbook with a long history on the market. It contains lots of illustrations and plenty of exercises. It has been praised by students and professors for its effective
pedagogy that addresses the needs of a broad range of teaching and learning styles and environments.
Mathematical Modeling and Computational Calculus
This is a DIFFERENT kind of calculus book: it doesn't go through theorems and such, but instead teaches you about mathematical modeling using differential equations that are computed with the aid of
computers, without the need of any advanced mathematics. Systems studied include satellite orbits, the orbits of the earth and moon, rocket trajectories, the Apollo mission trajectory, the Juno space
probe, electrical circuits, oscillators, filters, tennis serves, springs, friction, automobile suspension systems, lift and drag, and airplane dynamics.
Calculus Without Tears
Calculus Without Tears is a collection of worksheets (in 4 volumes) that teaches basic concepts of calculus very step-by-step, without need of much algebra. They are intended to be self-teaching
workbooks that even students before high school can study. Calculus Without Tears starts with studying the simplest of motions, which is a runner running with constant speed (or sometimes standing
still!). Volume 2 (Newton's Apple) concentrates on the motion of a falling apple. Volume 3 goes about finding the derivatives of polynomials, trigonometric functions, roots, exponential and
logarithmic functions. See my review.
Calculus Made Easy
Calculus Made easy is a classic textbook, making the subject at hand still more comprehensible to readers of all levels. Martin Gardner, himself an American mathematical landmark, says, "This is the
leanest and liveliest introduction to calculus ever written." The book concentrates on little bits of x, called dx, their differences and sums among all kinds of functions, their geometric meaning,
and what they can do for you.
A free, online graphing calculator that is incredibly easy and intuitive to use, yet very powerful. You can use parameters with sliders, define your own functions and constants, graph inequalities,
derivatives, and more. You can save, email, print, and embed your graphs.
An online graphing/scientific/matrix/statistics calculator. Includes graphing implicit equations, finding intersections for equations, an equation solver, LCM, GCF, standard deviation, regressions,
T-tests, and more.
An online graphing calculator for functions, equations, and inequalities with automatic analysis of graphs' properties. Automatic optimal viewing window, displays asymptotes, discontinuities,
piecewise graphing, etc. Also 'guess the graph', galleries, articles on graphing concepts and tricks. Very useful for high school students and teachers.
Function Flyer
Creates graphs of functions and also allows the manipulation of constants and coefficients in any function, so the user can explore the effects on the graph by changing those numbers. Great tool!
Graph Plotting - Basic Skills
For learning to graph a line - first, you fill in a table of x and y values, then the computer draws the line.
Function, derivative, and integral
Modify the given graph by dragging points and the applet dynamically shows the derivative, the integral, the tangent, the shading. Would be more useful with a lesson plan (provide yourself?) but
still very illustrative.
AnalyzeMath.com: Interactive Java Applets
Interactive tutorials and problems on precalculus graphing topics where the student interacts with a Java applet.
The Function Institute from Zona Land
Comprehensive discussion on definition of a function with Java applets that illustrate very clearly one-to-one, one-to-many, many-to-one concepts. Also has pages on different types of functions
(linear, polynomial, rational, exponential, trigonometric) and interactive graphs of those.
Mathinsite Products
A collection of very instructive Java applets with tutorial worksheets on graphing different functions (line, parabola, exponential function, circle, ellipse, trig functions, complex numbers and
Calculus by Benjamin Crowell
A short introductory free text for self-study of calculus. Available in many formats for downloading or viewing.
Maths Online Gallery: Functions 1
Collection of Java applets with exercises and some lesson plans: Function and graph, Recognize function graphs puzzles, Polynomial of third order, and a Function plotter.
Maths Online Gallery: Functions 2
Collection of Java applets with exercises: Recognize function graphs puzzles that concentrate on negative power functions and trigonometric functions, and graph collections of exponential &
logarithmic functions, and trigonometric functions.
Explorelearning Algebra Gizmos for Grades 9-12
Gizmos are dynamic online simulations that let students visualize and experiment with the mathematical concepts. They come with an exploration guide that serves as a lesson plan. Find gizmos for
linear, quadratic, exponential, logarithmic, rational, radical, trigonometric functions, polynomials, conics, sequences and series, and more. Excellent resource! Free 30-day trial.
GraphCalc—Graphing Calculator
This is a powerful, easy-to-use, scientific graphing calculator on your computer. Make 2D and 3D graphs, different number bases, statistical analysis, calculus, etc.
Price: Free
Engineering, scientific and financial freeware calculator for Windows. Functions for statistics, use of different number bases, metric units conversions and physical properties and constants. Also
has financial and time functions including investment, loan and mortgage calculations, a stopwatch.
Price: Free
SpeQ Mathematics
Small but extensive math program for calculations and graphing. Can also define variables and save your calculations.
Price: Free
An open-source scientific graphing calculator for web browsers. It can be stored on a hard drive, accessed over a local or wide area network, or made available via an internet webpage. Provides
calculation and graphing tools commonly needed by high school math and science students.
Price: Free
Microsoft Mathematics 4.0
Software that includes a full graphing calculator, equation solver, triangle solver, unit conversion tool, linear algebra solver, and more. Price: free.
A & G Grapher
A basic 2D/3D graphing program. Add tangent lines by clicking on the graph, trace the graph, calculate area. Also draws implicitly defined functions/graphs. Easy to learn.
Price: A 10-day free trial. $19.95.
MathGV Function Plotting Freeware for Windows
Plot 2 dimensional, parametric, polar, and 3 dimensional functions. You can add lines, rectangles, circles, round rectangles, flood fills, and text for labels or for artistic effects. Because the
plots are shown with a big screen, and are extremely easy to zoom in/out or move around, MathGV is excellent for visualizing graphs and functions in middle/high school. It does NOT have calculus or
tracing tools.
Price: Free
Mathscribe Lite 2.5.1
Mathscribe is a simple dynamic graphing and mathematical modeling program for algebra, trigonometry and precalculus classes. The free download includes 13 ready-to-print-n-use assignments, which I
consider a real benefit of this program. They cover linear equations, systems of linear equations, and quadratic equations. Also, the program comes with lots of pre-made examples of different kinds
of functions for the high school algebra/pre-calculus student. You can also use Mathscribe to mathematically model scientific data, either by typing in the data from scratch, or simply pasting it
from a spreadsheet or other program. Mathscribe lacks the zooming in/out and centering functions present in many plotting programs. Its strength is in the ready-made examples and lab sheets, helping
the student to connect both symbolic and visual thinking through guided discovery.
Price: Free
A software that includes eight different graphing modes, including implicit mode for conic sections. Other modes support polar, parametric, and piecewise-defined functions, as well as slope fields.
You can save your graphs or copy and paste them into other programs, and you can customize colors, line styles, captions, and legends. Includes over 100 built-in functions from calculus, statistics,
vectors, complex numbers, and more.
Price: $29
A comprehensive program, like a math professor on your desktop. Not only for algebra, but for analysis, geometry, 3D figures, stochastics, vector algebra, and linear algebra.
Download is free. Price: $45.
Very easy to use, fast, compact graphing program. Plots 2D graphs, implicitly defined functions, polar coordinates, and piecewise and parametric graphs. Finds the derivative and the area under a curve. Use a parameter/free variable. Data plotting, curve fitting, differential equations. Recommended for middle school/high school algebra and calculus.
Price: It's up to you what you pay; if you find Graphmatica easy, helpful, and convenient to use, you are asked to support the release of future versions by sending your contribution to the author.
Advanced Grapher
Intuitively easy to use 2D graphing program. Many calculus and function analysis features including derivative, tangent/normal, zeros, intersection, integral/area, regression. Choose between 8
Price: $29, has 30-day free trial
Handy Graph
HandyGraph produces Cartesian graphs and number lines - including empty ones - and graph paper. Especially designed for teachers.
Price: $59 download, a free trial available.
Autograph is a dynamic and very powerful PC graphing program spanning many levels, including Pre-Algebra, Algebra I, Geometry, Algebra II, Trigonometry, Pre-Calc, Discrete Math, Calculus,
Differential Equations, Probability, and Statistics.
The numerous features include
• Cartesian plots, implicit functions, gradient/derivative, integral function, reflection in y=x, inequalities, piecewise, parametric, polar equations, differential equations, use of constants
• Impressive zoom facilities and coordinate point manipulations, locus, shapes, transformations; regression lines
• Solves zeros, intersections; mid-points, centers
• Finds tangent, normal; parallel, perpendicular; angle bisector
• Vector operations
• Several numerical methods/area calculations
• Geometrical transformations
• Statistical diagrams and data analysis
You can download a nice collection of examples, materials and worksheets. Recommended for those who are serious about mathematics in high school or college. Single-user price: $80. Click here for
30-day preview free download | {"url":"http://www.homeschoolmath.net/online/calculus.php","timestamp":"2014-04-20T00:38:01Z","content_type":null,"content_length":"48522","record_id":"<urn:uuid:66e22ce4-676d-4629-a208-128992eb818e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game
The complexity of chess matches has attracted broad interest since the game's invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the
study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player’s advantage from over seventy thousand high level chess matches
spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17
pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player’s advantage and
find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period,
corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic
regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.
Citation: Ribeiro HV, Mendes RS, Lenzi EK, del Castillo-Mussot M, Amaral LAN (2013) Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game. PLoS ONE 8
(1): e54165. doi:10.1371/journal.pone.0054165
Editor: Matjaz Perc, University of Maribor, Slovenia
Received: November 8, 2012; Accepted: December 7, 2012; Published: January 30, 2013
Copyright: © 2013 Ribeiro et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work has been supported by the agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). HVR
thanks the financial support of CAPES (Grant 5678-11-0) and MdCM thanks DGAPA-UNAM (Grant IN102911) for partial financial support. The funders had no role in study design, data collection and
analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The study of biological and social complex systems has been the focus of intense interest for at least three decades [1]. Elections [2], popularity [3], population growth [4], collective motion of
birds [5] and bacteria [6] are just some examples of complex systems that physicists have tackled in these pages. An aspect rarely studied, due to the lack of sufficient data over a long enough period, is the manner in which agents learn the best strategies to deal with the complexity of the system. For example, as the number of scientific publications increases, researchers must learn how to choose
which papers to read in depth [7]; while in earlier times word-of-mouth or listening to a colleague’s talk were reliable strategies, nowadays the journal in which the study was published or the
number of citations have become, in spite of their many caveats, indicators that seem to be gaining in popularity.
In order to understand how population-level learning occurs in the “real-world,” we study it here in a model system. Chess is a board game that has fascinated humans ever since its invention in sixth-century India [8]. Chess is an extraordinarily complex game, with roughly 10^43 legal positions and 10^120 distinct matches, as estimated by Shannon [9]. Recently, Blasius and Tönjes [10] have shown that
scale-free distributions naturally emerge in the branching process in the game tree of the first game moves in chess. Remarkably, this breadth of possibilities emerges from a small set of
well-defined rules. This marriage of simple rules and complex outcomes has made chess an excellent test bed for studying cognitive processes such as learning [11], [12] and also for testing
artificial intelligence algorithms such as evolutionary algorithms [13].
The very best chess players can foresee the development of a match 10–15 moves into the future, thus making appropriate decisions based on their expectations of what their opponent will do. Even
though super computers can execute many more calculations and hold much more information in a quickly accessible mode, it was not until heuristic rules were developed to prune the set of
possibilities that computers became able to consistently beat human players. Nowadays, even mobile chess programs such as Pocket Fritz™ (http://chessbase-shop.com/en/products/pocket_fritz_4) have an Elo rating [14] higher than that of the current best chess player (Magnus Carlsen, with an Elo rating of 2835 – http://fide.com).
The ability of many chess engines to accurately evaluate the strength of a position enables us to numerically evaluate the move-by-move white player advantage and to determine the evolution of the
advantage during the course of a chess match. In this way, we can probe the patterns of the game to a degree not previously possible and can attempt to uncover population-level learning in the historical
evolution of chess match dynamics. Here, we focus on the dynamical aspects of the game by studying the move-by-move dynamics of the white player’s advantage from over seventy thousand high level
chess matches.
We have accessed the portable game notation (PGN) files of 73,444 high-level chess matches made freely available by PGN Mentor™ (http://www.pgnmentor.com). These data span the last two centuries of chess history and cover the most important worldwide chess tournaments, including the World Championships, Candidate Tournaments, and the Linares Tournaments (see Table S1). White won more of these matches than black, with the remainder ending in draws. For each of these 73,444 matches, we estimated the move-by-move advantage of the white player using the Crafty™ [15] chess engine, which has an Elo rating of 2950 (see Methods Section A). The white
player advantage takes into account the differences in the number and the value of pieces, as well as the advantage related to the placement of pieces. It is usually measured in units of pawns,
meaning that in the absence of other factors, it varies by one unit when a pawn (the piece with the lowest value) is captured. A positive value indicates that the white player has the advantage and a
negative one indicates that the black player has the advantage. Figure 1A illustrates the move dependence of the advantage for 50 matches selected at random from the database. Intriguingly, the advantage series visually resemble the “erratic” movement of diffusive particles.
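The paper produced these series with Crafty, which speaks the older XBoard protocol. As a purely illustrative sketch (not the authors' code), the Python snippet below extracts a comparable move-by-move series with the python-chess library driving a generic UCI engine; the engine binary name, the fixed search depth, and the function name are all assumptions.

import chess.engine
import chess.pgn

def advantage_series(pgn_path, engine_path="stockfish", depth=12):
    """White's advantage, in pawns, after each move of the first game."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    series = []
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for move in game.mainline_moves():
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            # Score from white's point of view; centipawns -> pawns.
            centipawns = info["score"].white().score(mate_score=10000)
            series.append(centipawns / 100.0)
    return series

Forced-mate scores are clipped to ±100 pawns here; any consistent convention works for the diffusion analysis.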
Figure 1. Diffusive dynamics of white player’s advantage.
(A) Evolution of the advantage for matches selected at random. We highlight the trajectories from three World Chess Championship matches: the match between Anand (playing white) and Kramnik in 2008
(green line), the match between Karpov (playing white) and Kasparov in 1985 (red line), and the match between Spassky (playing white) and Petrosian in 1969 (blue line). (B) Mean value of the
advantage as a function of move number for matches ending in draws (squares), white wins (circles) and black wins (triangles). Note the systematically alternating values and the initial positive
values of these means for all outcomes. For white wins, the mean advantage increases with the move number, while for black wins it decreases. For draws, the mean advantage is approximately a positive constant. We estimated the advantage of playing white from the average of these means; the horizontal dashed line represents this value. (C) Variance of the advantage as a function of move number for matches ending in draws (squares), white wins (circles) and black wins (triangles). Note the very similar profile of the variance for white and black wins. Note also that there is practically no diffusion for the initial moves, corresponding to the opening period, a very well studied stage of the game. After the opening stage, the trajectories exhibit a faster-than-diffusive spreading. For draws, we find this second regime to be superdiffusive and characterized by a power-law exponent, as shown by the dashed line. For wins, the variance presents a more complex behavior: at first it increases faster than ballistic (hyper-diffusion), but at later stages it displays a behavior similar to that found for draws. (D) Variance of the advantage evaluated after grouping the matches by length and outcome. For draws (continuous lines), the different match lengths do not change the power-law dependence of the variance. For wins (dashed lines), the variance systematically approaches the profile obtained for draws as the matches become longer. We further note the existence of a very fast diffusive regime for the latest moves of each grouping.
We first determined how the mean value of the advantage depends on the move number across all matches with the same outcome (Fig. 1B). We observed an oscillatory behavior around a positive value with a period of one move for all match outcomes. This oscillatory behavior reflects the natural progression of a match, that is, the fact that the players alternate moves. Not surprisingly, for matches
ending in a draw the average oscillates around an almost stable value, while for white wins it increases systematically and for black wins it decreases systematically.
Figure 1B suggests an answer to an historical debate among chess players: Does playing white yield an advantage? Some players and theorists argue that because the white player starts the game, white has the “initiative,” and that black must endeavor to equalize the situation. Others argue that playing black is advantageous because white has to reveal the first move. Chess experts usually cite the fact that white wins more matches as evidence of this advantage. However, the winning percentage does not indicate the magnitude of this advantage. In our analysis, we not only confirm the existence of an advantage in playing white, but also estimate its magnitude by averaging the values of the mean advantage for matches ending in draws.
We next investigated the diffusive behavior by evaluating the dependence of the variance of the advantage on the move number (Fig. 1C). After grouping the matches by match outcome, we observed for all outcomes that there is practically no diffusion during the initial moves. These moves correspond to the opening period of the match, a stage very well studied and for which there are recognized sequences of moves that result in balanced positions. After this initial stage, the variance exhibits an anomalous diffusive spreading. For matches ending in a draw, we found a super-diffusive regime described by a power law. We note the very similar profile of the variance of matches ending in white or black wins.
Matches ending in a win display a hyper-diffusive regime – a signature of nonlinearity and out-of-equilibrium systems [16]. In fact, the behavior for matches ending in wins is quite complex and dependent on the match length (Fig. 1D). While grouping the matches by length does not change the variance profile of draws, for wins it reveals a very interesting pattern: as the match length increases, the variance profile becomes similar to the profile of draws, with the only differences occurring in the last moves. This result thus suggests that the behavior of the advantage in matches ending in a win is very similar to that in a draw. The main difference occurs in the last few moves, where an avalanche-like effect makes the advantage undergo large fluctuations.
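The grouped statistics behind Figs. 1B–1D are straightforward to compute from such series. A minimal sketch follows; series_list, holding one advantage series per match (already filtered to a single outcome), is an assumed input rather than a name from the paper.

import numpy as np

def moment_profiles(series_list):
    """Move-by-move mean and variance of the advantage across matches."""
    max_len = max(len(s) for s in series_list)
    mean, var = [], []
    for t in range(max_len):
        # Advantage values of every match that lasted at least t+1 moves.
        values = np.array([s[t] for s in series_list if len(s) > t])
        mean.append(values.mean())
        var.append(values.var())
    return np.array(mean), np.array(var)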
Historical Trends
Chess rules have been stable since the 19th century. This stability increased the game's popularity (Fig. 2A) and enabled players to work toward improving their skill. A consequence of these efforts is the increasing number of Grandmasters – the highest title that a player can attain – and the decreasing average age at which players receive this honor (Figs. 2A and 2B). Intriguingly, the average player fitness (measured as the Elo rating [14]) in Olympic tournaments has remained almost constant, while the standard deviation of the players' fitness has increased fivefold (Figs. 2C and 2D).
These historical trends prompt the question of whether there has been a change in the diffusive behavior of the match dynamics over the last 150 years.
Figure 2. Historical changes in chess player demographics.
(A) Number of new Chess Grandmaster awarded annually by the world chess organization (http://fide.com) and the number of players who have participated in the Chess Olympiad (http://www.olimpbase.org)
since 1970. Note the increasing trends in these quantities. (B) Average players’ age when receiving the Grandmaster title. (C) Average Elo rating and (D) standard deviation of the Elo rating of
players who have participated in the Chess Olympiad. Note the nearly constant value of the average, while the standard deviation has increased dramatically.
To answer this question, we investigated the evolution of the profile of the mean advantage for different periods (Fig. 3A). For easier visualization, we applied a moving average with window size two to the mean values of the advantage. The horizontal lines show the average values of the means, and the shaded areas are confidence intervals obtained via bootstrapping. The average values are significantly different, showing that the baseline white player advantage has increased over the last 150 years. We found that this increase is well described by an exponential approach, with a characteristic time of 67 years, to an asymptotic value of 0.23 pawns (Fig. 3C). Our results thus suggest that chess players are learning how to maximize the advantage of playing white and that this advantage
is bounded.
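The historical fits in Fig. 3C–E all use the same functional form, an exponential approach f(t) = a - b exp(-t/τ) toward an asymptote a with characteristic time τ. A minimal fitting sketch, assuming SciPy and illustrative array names (years, values):

import numpy as np
from scipy.optimize import curve_fit

def exp_approach(t, a, b, tau):
    return a - b * np.exp(-t / tau)

def fit_trend(years, values):
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    t = years - years.min()   # measure time from the first period
    popt, _ = curve_fit(exp_approach, t, values, p0=(values[-1], 0.1, 50.0))
    a, b, tau = popt
    return a, tau             # asymptotic value and characteristic time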
Figure 3. Historical trends in the dynamics of highest level chess matches.
(A) Mean value of the advantage of matches ending in a draw for three time periods. These curves were smoothed by using a moving average over windows of size 2. The horizontal lines are the averaged values of the mean, and the shaded regions are confidence intervals for these averaged values. (B) Variance of the advantage of matches ending in a draw for three time periods. The shaded regions are confidence intervals for the variance and the colored dashed lines indicate power-law fits to each data set. The horizontal dashed line represents the average variance for the most recent data set. Note the systematic increase of the power-law exponent and of the number of moves in the opening. The symbols on this line indicate the crossover move, the number of moves at which the diffusion of the advantage changes behavior. The rightmost symbol represents the extrapolated maximum value. (C) Time evolution of the white player advantage for matches ending in draws. The solid line represents an exponential approach to an asymptotic value. The estimated plateau value is 0.23 pawns and the characteristic time is 67 years. Time evolution of (D) the diffusive exponent and (E) the crossover move. The solid lines are fits of exponential approaches to the asymptotic values, 1.9 for the exponent and 15.6 moves for the crossover; the estimated characteristic time for convergence of the crossover move is 130 years. Based on the conjecture that the exponent and the crossover move are approaching limiting values, we plotted a continuous line in Fig 3B to represent this limiting regime.
Next, we considered the time evolution of the variance for matches ending in draws (Fig. 3B). Surprisingly, the diffusive exponent seems to be approaching a value close to that of a ballistic regime. We found that the exponent follows an exponential approach to the asymptotic value 1.9 (Fig. 3D). We surmise that this trend is directly connected to an increase in the typical difference in fitness among players. Specifically, the presence of fitness in a diffusive process has been shown to give rise to ballistic diffusion [17]. For an illustration of how differences in fitness are related to a ballistic regime, assume that

A(t) = A(t-1) + φ + ξ(t)   (1)

describes the advantage A(t) of the white player in a match, where φ is the difference in fitness between the two players and ξ(t) is a Gaussian random variable. A positive φ yields a positive drift in A(t), thus modeling a match where the white player is better. Assuming that the fitness difference is drawn from a distribution with finite variance σ_φ², it follows that

Var[A(t)] = σ_φ² t² + σ_ξ² t .   (2)

Thus, Var[A(t)] ∼ t² for large t. In the case of chess, the diffusive scenario is not determined purely by the fitness of players. However, differences in fitness are certainly an essential ingredient and thus Eq. (1) can
provide insight into the data of Fig. 3D by suggesting that the typical difference in skill between players has been increasing.
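A quick simulation makes the crossover to ballistic scaling implied by Eqs. (1) and (2) visible; all parameter values below are illustrative choices, not estimates from the chess data.

import numpy as np

rng = np.random.default_rng(0)
n_matches, n_moves = 5000, 100
sigma_phi, sigma_xi = 0.05, 1.0

phi = rng.normal(0.0, sigma_phi, size=(n_matches, 1))      # one fitness gap per match
xi = rng.normal(0.0, sigma_xi, size=(n_matches, n_moves))  # fresh noise every move
A = np.cumsum(phi + xi, axis=1)                            # advantage trajectories

var = A.var(axis=0)
# Across matches, Var[A(t)] = sigma_phi**2 * t**2 + sigma_xi**2 * t,
# so the spreading is diffusive (~t) at small t and ballistic (~t^2)
# once the drift term dominates.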
A striking feature of the results of Fig. 3B is the drift of the crossover move at which the power-law regime begins. We observe that the crossover move is exponentially approaching an asymptote at 15.6 moves with a characteristic time of 130 years (Fig. 3E). Based on the existence of limiting values for the exponent and the crossover move, we plot in Figure 3B an extrapolated power law to represent the limiting diffusive regime (continuous line). We have also found that the distributions of the match lengths for wins and draws display exponential decays, with different characteristic lengths for draws and for wins. Moreover, we find that these characteristic lengths have changed over the history of chess. For matches ending in draws, we observed a statistically significant growth of the characteristic length over time. For wins, we find no statistical evidence of growth, and the characteristic length can be approximated by a constant mean (Fig. S1).
A question posed by the time evolution of these quantities is whether the observed changes are due to learning by chess players over time or due to a secondary factor such as changes in the
organization of chess tournaments. In order to determine the answer to this question, we analyze the types of tournaments included in the database. We find that 88% of the tournaments in the database use “round-robin” pairing (all-play-all) and that there has been an increasing tendency to employ this pairing scheme (Fig. S2). In order to further strengthen our conclusions, we analyze the matches in the database obtained by excluding tournaments that do not use round-robin pairing. This procedure has the advantage that it reduces the effect of non-random sampling. As shown in Fig. S3,
this procedure does not change the results of our analyses.
We next studied the distribution profile of the advantage. We use the normalized advantage

z(t) = [A(t) − ⟨A(t)⟩] / σ(t) ,   (3)

where ⟨A(t)⟩ is the mean value of the advantage after t moves and σ(t) is the standard deviation. Figures 4A and 4B show the positive tails of the cumulative distribution of z for draws and wins for several values of t. We observe a good data collapse, which indicates that the advantages are statistically self-similar, since after scaling they follow the same universal distribution. Moreover, Figs. 4C and 4D show that the distribution profile of the normalized advantage has been quite stable over the last 150 years. These distributions obey a functional form that is significantly different from a Gaussian distribution (dashed line in the previous plots). In particular, we observe a more slowly decaying tail, showing the existence of large fluctuations even for matches ending in draws.
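The normalization of Eq. (3) and the tail distributions of Fig. 4 can be reproduced in a few lines; series_list is the same assumed per-match input as in the earlier sketches.

import numpy as np

def normalized_advantage(series_list, t):
    """Eq. (3): z-scores of the advantage at move t across matches."""
    values = np.array([s[t] for s in series_list if len(s) > t])
    return (values - values.mean()) / values.std()

def positive_tail(z):
    """Empirical complementary CDF of the positive normalized values."""
    z = np.sort(z[z > 0])
    ccdf = 1.0 - np.arange(1, len(z) + 1) / len(z)
    return z, ccdf

Overlaying positive_tail(normalized_advantage(series_list, t)) for several values of t is what produces the data collapse of Figs. 4A and 4B.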
Figure 4. Scale invariance and non-Gaussian properties of the white player’s advantage.
Positive tails of the cumulative distribution function for the normalized advantage for matches ending in (A) draws and (B) wins. Each line in these plots represents a distribution for a different value of the move number in the range 10 to 70. For each match outcome, the distributions for different move numbers exhibit a good data collapse, with tails that decay slower than a Gaussian distribution (dashed line). Average cumulative distribution for matches ending in (C) draws and (D) wins for four time periods. We estimated the error bars using bootstrapping. These data support the hypothesis of scaling, that is, the distributions follow a universal non-Gaussian functional form. The negative tails present a very similar shape (see Fig. S4).
Another intriguing question is whether there is memory in the evolution of the white player’s advantage. To investigate this hypothesis, we consider the time series of advantage increments for all 5,154 matches ending in a draw that are longer than 50 moves. We used detrended fluctuation analysis (DFA, see Methods Section B) to obtain the Hurst exponent h for each match (Fig. 5A). We find h distributed around a mean value smaller than 1/2 (Fig. 5B), which indicates the presence of long-range anti-correlations in the evolution of the advantage. A value of h < 1/2 indicates anti-persistent behavior, that is, the alternation between large and small values of the increments occurs much more frequently than by chance. This result also agrees with the oscillating behavior of the mean advantage (Fig. 1B). We also find that the Hurst exponent has evolved over time (Fig. 5C). In particular, we note that the anti-persistent behavior has increased significantly in the two most recent periods, indicating that the alternating behavior has intensified. We have found a very similar behavior for matches ending in wins after removing the last few moves of each match (Fig. S5).
Figure 5. Long-range correlations in white player’s advantage.
(A) Detrended fluctuation analysis (DFA, see Methods Section B) of the white player’s advantage increments, that is, ΔA(t) = A(t) − A(t−1), for a match that ended in a draw, selected at random from the database. For series with long-range correlations, the relationship between the fluctuation function and the scale is a power law whose exponent is the Hurst exponent h. Thus, in this log-log plot the relationship is approximated by a straight line with slope equal to h. In general, we find all these relationships to be well approximated by straight lines. (B) Distribution of the estimated Hurst exponent obtained using DFA for matches longer than 50 moves that ended in a draw (squares). The continuous line is a Gaussian fit to the distribution. Since the mean is smaller than 1/2, it implies an anti-persistent behavior (see Fig. 1B). We have also evaluated the distribution of h using the shuffled versions of these series (circles). For this case, the dashed line is a Gaussian fit to the data, with a mean close to 1/2. Note that the shuffling procedure removed the correlations, confirming the existence of long-range correlations in the original series. (C) Historical changes in the mean Hurst exponent. Note the significantly smaller values of h in recent periods, showing that the anti-persistent behavior has increased for more recent matches.
We have characterized the advantage dynamics of chess matches as a self-similar, super-diffusive process with long-range memory. Our investigation provides insights into the complex process of creating and disseminating knowledge of a complex system at the population level. By studying 150 years of high-level chess, we presented evidence that the dynamics of chess matches have evolved over time in such a way that they appear to be approaching a steady state. The baseline advantage of the white player, the crossover move, and the diffusive exponent are all exponentially approaching asymptotes, each with its own characteristic time. We hypothesized that the evolution of the diffusive exponent is closely related to an increase in the differences in fitness among players, while the evolution of the baseline advantage of the white player indicates that players are learning better ways to exploit this advantage. The increase in the crossover move suggests that the opening stage of a match is becoming longer, which may also be related to a collective learning process. As discussed earlier, hypothesized historical changes in pairing schemes during tournaments cannot explain these findings.
The core of a chess program is called the chess engine. The chess engine is responsible for finding the best moves given a particular arrangement of pieces on the board. In order to find the best
moves, the chess engine enumerates and evaluates a huge number of possible sequences of moves. The evaluation of these possible moves is made by optimizing a function that usually defines the white
player’s advantage. The way that the function is defined varies from engine to engine, but some key aspects, such as a weighted count of the pieces on each side, are always present. Other theoretical
aspects of chess such as mobility, king safety, and center control are also typically considered in a heuristic manner. A simple example is the definition used for the GNU Chess program in 1987 (see
http://alumni.imsa.edu/stendahl/comp/txt/gnuchess.txt). There are tournaments between these programs aiming to compare the strength of different engines. The results we present were all obtained
using the Crafty™ engine [15]. This is a free program that is ranked 24th in the Computer Chess Rating Lists (CCRL - http://www.computerchess.org.uk/ccrl). We have also compared the results of
subsets of our database with other engines, and the estimate of the white player advantage proved robust against those changes.
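As a toy illustration of the material term in such an evaluation function, consider the sketch below; the piece weights are the conventional textbook values rather than any particular engine's, and the function name is our own.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # pawn units

def material_advantage(fen_piece_field):
    """White-minus-black material for the piece field of a FEN string."""
    score = 0
    for ch in fen_piece_field:
        if ch.upper() in PIECE_VALUES:   # kings, digits and '/' are skipped
            value = PIECE_VALUES[ch.upper()]
            score += value if ch.isupper() else -value
    return score

# The starting position is balanced:
assert material_advantage("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR") == 0

A real engine adds heuristic terms for mobility, king safety, center control and so on, exactly as described above.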
DFA consists of four steps [18], [19]:
1. We compute the profile y(t) of the series by subtracting the mean from the increments and integrating (taking the cumulative sum);
2. We cut y(t) into non-overlapping segments of size n, where N is the length of the series;
3. For each segment a local polynomial trend (here, we have used a linear function) is calculated and subtracted from y(t), defining Δy(t), where the subtracted polynomial represents the local trend in the ν-th segment;
4. We evaluate the fluctuation function
F(n) = [ (1/N_s) Σ_ν ⟨Δy²⟩_ν ]^(1/2),
where ⟨Δy²⟩_ν is the mean square value of Δy(t) over the data in the ν-th segment and N_s is the number of segments.
If y(t) is self-similar, the fluctuation function displays a power-law dependence on the time scale n, that is, F(n) ∼ n^h, where h is the Hurst exponent.
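A compact implementation of these four steps, assuming NumPy, could look as follows; applied to the increment series of a match, the returned slope is the per-match Hurst exponent used in Fig. 5.

import numpy as np

def dfa_fluctuation(profile, n):
    N = (len(profile) // n) * n              # drop the incomplete tail
    segments = profile[:N].reshape(-1, n)    # step 2: non-overlapping windows
    t = np.arange(n)
    f2 = []
    for seg in segments:
        a, b = np.polyfit(t, seg, 1)         # step 3: local linear trend
        f2.append(np.mean((seg - (a * t + b)) ** 2))
    return np.sqrt(np.mean(f2))              # step 4: fluctuation function

def hurst_exponent(increments, scales=(4, 8, 16, 32, 64)):
    x = np.asarray(increments, dtype=float)
    profile = np.cumsum(x - x.mean())        # step 1: integrated profile
    F = [dfa_fluctuation(profile, n) for n in scales]
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h                                 # slope of log F(n) vs log n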
Supporting Information
Historical trends in match lengths. Cumulative distribution function for the lengths of matches ending in (A) draws and (B) wins. Both distributions display an exponential decay, with different characteristic lengths for draws and for wins. (C) Cumulative distribution function for the lengths of matches ending in white wins (circles) and black wins (triangles). Note that both distributions are almost indistinguishable. (D) Changes in the characteristic game length over time. For draws (squares), we observe a statistically significant growth (red line). For wins (circles), we find that the characteristic length is approximately constant (green line).
Percentage of tournaments that employ the round-robin (all-play-all) pairing scheme. Note the increase in the fraction of tournaments employing round-robin pairing scheme.
The effect of excluding tournaments using the Swiss-pairing scheme on the historical trends reported in Fig. 3. It is visually apparent that excluding data from those tournaments does not significantly change our results. Thus, temporal changes in the pairing schemes used in chess tournaments cannot explain our findings.
Scale invariance and non-Gaussian properties of the white player’s advantage. Negative tails of the cumulative distribution function for the normalized advantage for matches ending in (A) draws and (B) wins. Each line in these plots represents a distribution for a different value of the move number in the range 10 to 70. For each match outcome, the distributions for different move numbers exhibit a good data collapse, with tails that decay slower than a Gaussian distribution (dashed line). Average cumulative distribution for matches ending in (C) draws and (D) wins for four time periods. We estimated the error bars using bootstrapping.
Match outcome and long-range correlations in the white player’s advantage. Distribution of the estimated Hurst exponent obtained using DFA for matches longer than 50 moves that ended in draws (squares), wins (circles), and wins after dropping the last five moves of each match. The continuous line is a Gaussian fit to the distribution for draws. Note that after dropping the last five moves, the distribution of the Hurst exponent for wins becomes very close to the distribution obtained for draws.
Full description of our chess database. This table shows all the tournaments that comprise our database. The PGN files are freely available at http://www.pgnmentor.com/files.html. Specifically, the files we have used are those grouped under the sections “Tournaments”, “Candidates and Interzonals” and “World Championships”.
Author Contributions
Conceived and designed the experiments: HVR LANA. Performed the experiments: HVR. Analyzed the data: HVR LANA. Contributed reagents/materials/analysis tools: HVR RSM EKL MdCM LANA. Wrote the paper:
HVR RSM EKL MdCM LANA. | {"url":"http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0054165","timestamp":"2014-04-18T20:49:10Z","content_type":null,"content_length":"176847","record_id":"<urn:uuid:feaba8d4-68bd-4da4-a5fc-9a9be8171212>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Huygens fresnel integral
1. The problem statement, all variables and given/known data
Hey, I have a question regarding the Huygens-Fresnel transformation.
The formula is often written E(P) = (j/(lambda*R)) * exp(jkR)... and so on.
2. Relevant equations
What is the physical meaning of the complex constant j and the lambda in front of the integral?
In some versions of the Huygens-Fresnel integral it isn't even there???
3. The attempt at a solution | {"url":"http://www.physicsforums.com/showthread.php?t=238901","timestamp":"2014-04-16T10:24:01Z","content_type":null,"content_length":"18968","record_id":"<urn:uuid:540d4191-caaa-4eab-8040-3d98b007666f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Radioactive Decay Model
Substitute coins for radiation.
Throwing 100 coins, removing all those that come up tails, and placing them in piles gives us a hands-on model for radioactive decay. The piles graphically show the meaning of the term half-life.
• At least 100 pennies.
• A container to hold the pennies.
• Optional: Use 100 small wooden cubes (approximately 2/5 inch [1 cm] on a side); paint one face of each cube red.
(30 minutes or more) Toss the pennies onto a table surface. Remove all of the pennies that land with their tail side up, and put them flat on the left edge of the table, arranged in a tall column.
Gather up the remaining pennies and toss them again. Remove the pennies that land tail-side-up, and arrange them in a second column, right beside the first column. Repeat this experiment until all of
the pennies have been removed. If no pennies come up tails on a toss, leave an empty column.
You can do the same thing with wooden cubes, removing the cubes that land red side up.
The chance that any penny will come up tails on any toss is always the same, 50%. However, once a penny has come up tails, it is removed. Thus, about half the pennies are left after the first toss.
Even though half of the remaining pennies come up tails on the second toss, there are fewer pennies to start with. After the first toss, about 1/2 of the original pennies are left; after the second,
about 1/4; then 1/8; 1/16; and so on. The denominators of these fractions can be written in terms of powers, or exponents, of 2: 2^1, 2^2, 2^3, and 2^4. This type of pattern, in which a quantity repeatedly decreases by a
fixed fraction (in this case, 1/2), is known as exponential decay.
Each time you toss the remaining pennies, about half of them are removed. The time it takes for one half of the remaining pennies to be removed is called the half-life. The half-life of the pennies
in this model is about one toss. The probability that a cube will land red side up is 1/6. (Each cube has six sides, and only one of those sides is painted red.) It takes three tosses for about half
the cubes to be removed, so the half-life of the cubes is about three tosses. [After one toss, 5/6 remain; after two tosses, 5/6 of 5/6, or 25/36, remain; and after three tosses, (5/6)^3 = 125/216 of
the cubes are left.]
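If you don't want to toss coins all afternoon, you can let a computer play the same game. Here is a short simulation written in Python (the number of trials is just a choice); it estimates both half-lives, and the closing comment gives the matching pencil-and-paper formula.

import random

def tosses_to_half(n_items=100, p=0.5):
    """Count tosses until half the items have decayed (probability p each)."""
    remaining, tosses = n_items, 0
    while remaining > n_items / 2:
        remaining = sum(1 for _ in range(remaining) if random.random() > p)
        tosses += 1
    return tosses

trials = 5000
coins = sum(tosses_to_half(p=1/2) for _ in range(trials)) / trials
cubes = sum(tosses_to_half(p=1/6) for _ in range(trials)) / trials
print(f"coins: about {coins:.1f} tosses, cubes: about {cubes:.1f} tosses")

# Pencil-and-paper check: half-life = ln(2) / -ln(1 - p), which gives
# exactly 1 toss for coins and about 3.8 tosses for cubes.

The cube result lands a little above three tosses, which squares with the calculation above: after three tosses, 125/216 (about 58 percent) of the cubes are still left.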
Tossing the coins or cubes is an unpredictable, random process. Rarely will exactly 1/2 of the coins or 1/6 of the cubes decay on the first toss. However, if you repeat the first toss many, many
times, the average number of coins or cubes that decay will approach 1/2 or 1/6.
In this model, the removal of a penny or a cube corresponds to the decay of a radioactive nucleus. The chance that a particular radioactive nucleus in a sample of identical nuclei will decay in each
second is the same for each second that passes, just as the chance that a penny would come up tails was the same for each toss (1/2) or the chance that a cube would come up red was the same for each
toss (1/6). The smaller the chance of decay, the longer the half-life (time for half of the sample to decay) of the particular radioactive isotope. The cubes, for instance, have a longer half-life
than the pennies. For uranium 238, the chance of decay is small: Its half-life is 4.5 billion years. For radon 217, the chance of decay is large: Its half-life is 1/1,000 of a second.
Some radioactive nuclei, called mothers, decay into other radioactive nuclei, called daughters. To simulate this process, start with 100 nickels. Toss them and replace the nickels that land tail side
up with pennies. Toss the pennies and the nickels together. Make a column with all the pennies that land tail side up, and replace all the nickels that land tail side up with more pennies. The
nickels represent the mother nuclei; the pennies, the daughter nuclei. Notice that the columns of decayed pennies grow at first and then decay. | {"url":"http://www.exploratorium.edu/snacks/radioactive_decay/index.html","timestamp":"2014-04-17T21:34:44Z","content_type":null,"content_length":"9973","record_id":"<urn:uuid:3c1862f6-edb1-4742-a5be-7f2991a91602>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Attleboro Precalculus Tutor
...My Mass. teaching license is in High School Mathematics. I originally was an Accountant for several years before becoming a math teacher. I have a love of math and a great interest in helping
students that struggle with math.
7 Subjects: including precalculus, geometry, algebra 1, algebra 2
...As noted above, trigonometry is usually encountered as a part of a pre-calculus course. In my view, much of the traditional material associated with trigonometry should be replaced by an
introduction to the linear algebra of vectors, which provides alternative methods of solving many of the prob...
7 Subjects: including precalculus, calculus, physics, algebra 1
...Over the last decade, I've successfully helped numerous students with their college and graduate school application essays, as well as advised them on which schools would be the best fit for them. I've also successfully prepared students for the ISEE and SSAT to help them gain admission to top schools. I ...
67 Subjects: including precalculus, English, reading, calculus
...When I taught at a Catholic middle school, I helped eighth-graders prepare for these tests. The students were happy with their actual test results, and every student was accepted by a Catholic
high school. Through a tutoring service, I have experience teaching all levels of the ISEE.
45 Subjects: including precalculus, chemistry, Spanish, French
...I am also a chemistry instructor at Wheaton College. I majored in biochemistry and psychology as an undergraduate student at Wheaton College. I have years of experience as a tutor.
17 Subjects: including precalculus, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/North_Attleboro_Precalculus_tutors.php","timestamp":"2014-04-17T19:16:06Z","content_type":null,"content_length":"24216","record_id":"<urn:uuid:f069e44d-c587-4962-bfa1-25f41a5c1b37>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |