content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Mathematicians: Famous Math PeopleGradeAmathHelp.comMathematicians: Famous Math PeopleGradeAmathHelp.com
For more information on any famous math people, please click on their pictures, or the name below. A short description of each mathematician is given.
René Descartes: Most known for contribution to the coordinate plane. In fact, it is sometimes referred to as the Cartesian coordinate because of Descartes.
Albert Einstein: Perhaps the most famous known scientist of all time. His theories on relativity were groundbreaking and are still used today.
Leonhard Euler: Well known for his contributions to mathematical vocabulary and notation. In particular, he is consider the founder of function notation.
Fibonacci: His given name is Leonardo of Pisa. He is most known the number sequence 1, 1, 2, 3, 5, 8, 13, 21,... which was eventually named after him as the Fibonacci Numbers.
Carl Friedrich Gauss: Considered a child prodigy that eventually realized his true potential. He made monumental contributions in the areas of set theory, statistics, and many others.
Sir Isaac Newton: Shares in the credit as the developer of Calculus!
Blaise Pascal: Contributed in several areas of mathematics, but his nameis most recognized with its connection to Pascal's Triangle.
Pythagoras: His full name is Pythagoras of Samos and he is considered "the father of numbers." He is best known for the widely used formula known as the Pythagorean Theorem. | {"url":"http://www.gradeamathhelp.com/famous-math-people.html","timestamp":"2014-04-18T15:48:30Z","content_type":null,"content_length":"27581","record_id":"<urn:uuid:2a837b70-0405-498b-9057-11b44bbad6a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
You are currently browsing the tag archive for the ‘Computer Science’ tag.
This is a follow-up to my previous post, THIS IS NOT A REVIEW. I was inspired by a comment from Julia to translate the examples Airi and Maimi wrote (in the fictional universe depicted in my last
post) into Turing, from the original Scheme.
The original snippets of code were:
(printing “LOVE” an infinite number of times using named recursion in Scheme)
(define (FOREVER x)
(display x) (FOREVER x))
(FOREVER "LOVE ")
(the Y combinator in Scheme)
(lambda (f)
((lambda (x) (f (lambda (y) ((x x) y))))
(lambda (x) (f (lambda (y) ((x x) y))))))
(printing “LOVE” an infinite number of times using the Y combinator to do anonymous recursion in Scheme)
(((lambda (f)
((lambda (x) (f (lambda (y) ((x x) y))))
(lambda (x) (f (lambda (y) ((x x) y))))))
(lambda (p) (lambda (s) (display s) (p s))))
"LOVE ")
Of course, the latter two examples rely on unnamed functions (lambdas), which, as it turns out, appear not to be supported at all in Turing. As a result, I could only duplicate the first example:
(printing “LOVE” an infinite number of times using named recursion in Turing)
procedure forever(x : string)
put x ..
end forever
forever("LOVE ")
Feeling inspired, I decided to try this out in Python. Unfortunately, while lambdas are supported in Python, they are purely functional, meaning it’s impossible to get side effects out of the
recursive process like printing “LOVE” every time. You’d have to generate an infinitely long string of these and then print it out:
(printing “LOVE” an infinite number of times using named recursion in Python)
def FOREVER(x):
print x,
(the Y combinator in Python)
(lambda f: \
(lambda x: f(lambda y: (x(x))(y))) \
(lambda x: f(lambda y: (x(x))(y))))
(printing “LOVE” an infinite number of times using the Y combinator to do anonymous recursion in Python, theoretically)
(lambda f: \
(lambda x: f(lambda y: (x(x))(y))) \
(lambda x: f(lambda y: (x(x))(y)))) \
(lambda p: (lambda s:(s + p(s)))) \
('LOVE ')
Of course, you won’t be able to see anything from running that last one. But you can use the Y combinator to print out “LOVE” a finite number of times:
(printing “LOVE” 17 times using the Y combinator to do anonymous recursion in Python)
(lambda f: \
(lambda x: f(lambda y: (x(x))(y))) \
(lambda x: f(lambda y: (x(x))(y)))) \
(lambda p: (lambda s: (s==0 and ' ') or ('LOVE ' + p(s-1)))) \
Feeling even more inspired, I decided to tackle the task of translating this into PostScript, which is totally one of the awesomest programming languages ever, even though many people who are
familiar with it don’t realize its programming complexity and dismiss it as a simple page description language … ahem. (It’s the language behind PS files, the precursor to PDF.)
And behold, it worked!
(printing “LOVE” an infinite number of times using named recursion in PostScript)
/FOREVER { dup print FOREVER } def
(LOVE ) FOREVER
(the Y combinator in PostScript)
{ [ exch
{ [ exch
{ dup cvx exec exec } aload pop
] cvx } aload pop
8 -1 roll
{ cvx exec } aload pop ]
dup cvx exec }
(printing “LOVE” an infinite number of times using the Y combinator to do anonymous recursion in PostScript)
(LOVE )
{ [ exch
{ dup 5 string cvs print
[ exch } aload pop 8 -1 roll
[ exch cvx { exec } aload pop ] cvx
{ aload pop ] cvx exec } aload pop ] cvx } cvlit
{ [ exch
{ [ exch
{ dup cvx exec exec } aload pop ] cvx } aload pop
8 -1 roll { cvx exec } aload pop ]
dup cvx exec }
exec cvx exec
And just for fun, here’s computing the factorial of 6 using the Y combinator:
{ [ exch { dup 0 eq exch { 1 } exch [ exch } aload pop
9 -1 roll [ exch { dup 1 sub } aload pop
4 -1 roll cvx { exec } aload pop { mul } aload pop ] cvx
{ aload pop ] cvx ifelse } aload pop ] cvx } cvlit
{ [ exch { [ exch
{ dup cvx exec exec } aload pop ] cvx } aload pop
8 -1 roll {cvx exec } aload pop ] dup cvx exec }
exec cvx exec
PostScript rocks.
CUZ I DONT DO REVIEWS YO
this is a DIALOGUE on the nature of computation brought to you by °C-ute or actually just THE AIRI AND MAIMI SHOW cuz everybody else is like wut im a computer n00b i dont even know how to type
zomgwtf but AIRI AND MAIMI are like SUPER FREAKING GENIUSES or something and they totally know everything about computers and programming and data structures and algorithms and complexity theory and
all that
because contrary to popular belief °C-ute is actually short for
thats right THATS A DASH your supposed to fill in the missing letters duh
(betcha didnt know that huh?)
oh hay look its °C-ute in the middle of their STRUCTURE AND INTERPRETATION OF COMPUTER PROGRAMS class and today their going over HIGHER ORDER PROCEDURES and how to call a procedure using another
procedure as an argument or even reflexively using the procedure itself
so erika and mai are like WUUUHHHHHH???? but airi and maimi totally know wuts going on cuz their like SUPER FREAKING GENIUSES and their already jumping ahead to next weeks topic…..
and airi goes hey check this out zomglolololololol
… one level of embedding …
….. TWO levels of embedding …..
……. FOUR LEVELS OF EMBEDDING ……..
……… EIGHT LEVELS OF EMBEDDING IS THAT AMAZING OR WUT OMG???!!! ………
and maimis like WUT
and airis like RECURSION B****ES!!!
but maimis like STFU N00B that terminates after only eight iterations
betcha cant do an infinite loop wut
so airi sez OH YEAH WELL CHECK THIS OUT
(define (FOREVER x)
(display x) (FOREVER x))
(FOREVER "LOVE ")
its like LOVE …….. FOREVER!!!!!
get it? GET IT??!!!
but maimis like SRSLY zomgrofl THAT IS SO TRIVIAL IT MAKES THE TRIVIAL GROUP LOOK NONCOMMUTATIVE lol
cuz thats like CONJUGATE SYMMETRY
which when restricted to a real scalar field means that under the inner product any two vectors COMMUTE
oh yeah well thats nothing compared to the Y COMBINATOR
(lambda (f)
((lambda (x) (f (lambda (y) ((x x) y))))
(lambda (x) (f (lambda (y) ((x x) y))))))
your procedure is so weak it needs a name zomglol
(((lambda (f)
((lambda (x) (f (lambda (y) ((x x) y))))
(lambda (x) (f (lambda (y) ((x x) y))))))
(lambda (p) (lambda (s) (display s) (p s))))
"LOVE ")
………………… O__________________O ……………………
THATS RIGHT THE Y COMBINATOR IS SHORT FOR YAJIMA COMBINATOR!!!!!!
o rly?
well lambda calculus is ok but….
KAPPA CALCULUS IS THE BEST!!!!!! =(^w^)=
and maimis like *facepalm*
but then saki butts in and is like
i like SKI calculus cuz its like SAKI but like without the A
and airi and maimi are like LOLWUT
NEXT TIME ON THE AIRI AND MAIMI SHOW FEATURING °C-UTE
and °C-ute perform AN INTERPRETIVE DANCE interpreting THE RUNTIME EXECUTION OF A SCHEME INTERPRETER INTERPRETING ITS OWN SOURCE CODE
LOLWUT kthxbai <33333333 XD
… I’ve just been busy, sorry.
Busy with what, you ask?
Well … I’ve been living it up (?) as a graduate teaching assistant for a course in mathematics for computer science, taken mostly by second-year undergrads, with a total enrollment of around ~180. As
part of my duties, I get to teach two sections that meet twice a week and also contribute to writing problems for assignments and quizzes. What fun!
Actually, the making-up-awesome-problems part really is fun! I managed to whip together an entire problem set on the topic of sums and asymptotic relations in which all the problems are Hello!
Alas, after discussion with the other staff members, we decided that while the problems were awesome and hilarious (maybe more so for me than for them), they were a bit on the challenging side, not
straightforward enough, and touched on a few topics we weren’t really covering (Problem 4d in particular “would kill the students”). So it got scratched, and a more boring replacement was released
But all is not lost! We’ve decided to release this problem set as optional, not-for-credit “challenge problems”, and you can try them out here:
Hello! Project Challenge Problems (PDF)
If you wish, you can send your solutions to kirarinsnow@mit.edu, and I’ll respond with comments.
Happy birthday Koharu!
In celebration of Koha’s birthday (07.15), here are a few double dactyls I’ve composed:
Pancakey pancakey,
Master chef Kirari
Proves she can pancake-sort
Faster than SHIPS.
Quite inexplicably,
This method takes but a
Number of flips.
Flavor Flav, baklava,
Koha-chan’s talk of a
“Genuine flavor” is
Just a disguise—
Hard-to-find particle
Actually is quantum
Largest in size.
Hana wo Pu~n
Kirari pikari,
Koha and Mai, though
Experts at tangent and
Cosine and sine,
Find themselves thwarted by
Transforms affine.
Konnichi pa
Konnichi pa-pa-pa!
Kirari’s tra-la-la
Seizes the heartstrings and
Moves one to tears.
Poignantly touched by her
Passersby nonetheless
Cover their ears.
More double dactyls are in the works, so stay tuned!
Going over the last few installments of Pocket Morning Weekly Q&A, posted in translation at Hello!Online, one might notice a developing interest in mathematics by none other than Michishige Sayumi:
Question: Is there something about which you’ve thought, “Certainly this year, I want to challenge myself with this!”?
Michishige: Math Problems ☆
Question: Fill in the blank to the right with one word. “I’m surprisingly ___”
Michishige: I’m surprisingly intellectual.
Please try to understand that somehow. m(・-・)m
Question: Among your fellow members, what about you makes you think, “At this, I definitely can’t lose!”
Michishige: Simultaneous equations!!
The evidence is indisputable. Sayumi is a math geek. XD
While her fellow MoMusu are busy with more mundane interests, our Sayumi is off challenging herself with math problems (here, Sayu, try Project Euler) and has apparently discovered the wonders of
linear algebra (I’m assuming at least some of those simultaneous equations are linear). No doubt Sayumi has mastered the techniques of Gauss-Jordan elimination, Cramer’s rule, and LU decomposition
and is well on her way to achieving world domination.
In addition to this, Sayumi has listed Tetris as a hobby and as a “special skill”. This is by far the geekiest interest I’ve seen in any H!P member. Because Tetris is not your average video game. It
is a mind-stretching mathematical puzzle, and several of its subproblems are NP-complete. NP-complete, I tell you! This places it in the class of difficult problems that includes Boolean k
-satisfiability, determining the existence of a Hamiltonian path, and Minesweeper.
Sayumi is hardcore.
For this, she gets an Excellence in Unabashed Geekitude Award.
And I still need to give Koharu one, don’t I?
Just for fun, and because Goto Maki’s name makes a great programming pun, here’s a function in C/C++ that uses Goto’s name to actually do something (it computes the factorial of a number; the goto
maki statement makes the program loop over the individual multiplications until the final product is computed):
int factorial(int n)
int p = 1;
if (n == 0)
return p;
p *= n;
goto maki;
You can put this in, say, a C++ program like the following:
#include <iostream>;
using namespace std;
int factorial(int);
int main()
int n;
cout << "Enter a nonnegative integer to factorialize: ";
cin >> n;
cout << "The factorial of " << n << " is " << factorial(n) << ".\n" << endl;
return 0;
int factorial(int n)
int p = 1;
if (n == 0)
return p;
p *= n;
goto maki;
and then you too (yes, you!) can factorialize away with Gocchin:
% g++ gotomaki.cc -o gotomaki
% ./gotomaki
Enter a nonnegative integer to factorialize: 1
The factorial of 1 is 1.
% ./gotomaki
Enter a nonnegative integer to factorialize: 2
The factorial of 2 is 2.
% ./gotomaki
Enter a nonnegative integer to factorialize: 3
The factorial of 3 is 6.
% ./gotomaki
Enter a nonnegative integer to factorialize: 4
The factorial of 4 is 24.
% ./gotomaki
Enter a nonnegative integer to factorialize: 5
The factorial of 5 is 120.
% ./gotomaki
Enter a nonnegative integer to factorialize: 6
The factorial of 6 is 720.
% ./gotomaki
Enter a nonnegative integer to factorialize: 0
The factorial of 0 is 1.
Fun, ne?
Countdown! The Top 100 Hello! Project PVs
My last post has apparently sparked a “laugh riot” of a debate that’s now more than three times as long as my original post. If you haven’t seen it yet, you may find it worth reading. Or maybe not.
As always, I appreciate your feedback, positive or negative. It’s always good to know how effective my communication is.
And now, on to the next batch:
Recent Comments
Matt ディトマソ on Koharu Kusumi: A Career Change…
Kirarin☆Snow ☃ on Koharu Kusumi: A Career Change…
Matt ディトマソ on Koharu Kusumi: A Career Change…
stephen on Suugaku♥Joshi Gakuen, Episode…
Stephen on [Puzzle Contest] Hello! Projec… | {"url":"https://minimonimania.wordpress.com/tag/computer-science/","timestamp":"2014-04-19T04:20:58Z","content_type":null,"content_length":"102042","record_id":"<urn:uuid:93d18045-6282-4c17-85b7-1c79e609c958>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
What do you think?
Re: What do you think?
Hi gAr;
Hi anonimnystefy;
Please calculate that. When you do you will notice that you submitted two different answers. I believe your second answer is much closer.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym,
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym,
Phew, from past couple of hours I was checking why it's not getting close, since most of the questions had answers near to that of B's!
And how are you? Moved to a new place?
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Hi gAr;
Sorry, but B got it wrong when he did this, so I put that into the problem. I had to research this answer when I posted this.
Yes, I moved. It was very hectic and just a blur of activity. I still have some work to do before I am done.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Ok, it's a good problem. Thanks.
How's your new place? More greenery or more concrete?
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
A little better scenery and better location but only about 150 yards from the other place.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Okay, I'll take a break.
See you later.
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: What do you think?
Okay and thanks for working on the problem!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
New Problem:
A student approaches A with a dilemma. The student has a favorite calculator located here;
The student would like to calculate this
on her favorite calculator of course. So she enters sqrt(3.0000000000000001)-sqrt(3)= and to her dismay the calculator returns 0. She knows that is not right and her faith in her calculator is
shattered. She runs to A...
A says) Obvious proof of the stupidity of computers and the people who use them. You deserve that answer for trusting something that is clearly kaboobly doo!
The student begins to cry.
B says) Hold on A. You are mistaken. That is a pretty decent calculator, she just used it incorrectly.
The student explains to be B that she entered it correctly.
B says) I did not mean that. I meant you phrased the problem in a way that was numerically unstable and if you phrase it like...
C says) You hold on there B. A is right, as usual. I tried this on a bunch of calculators and even used your beloved mathematica and I got 0 too! Obviously 3.0000000000000001 = 3.
E says) How do you get that conclusion? Let's hear what B has to say on this.
A says) We have indulged B enough. I think C has a point. Look at 3.1, 3.01, 3.001, 3.0001, 3.00001 ... Doesn't that approach 3? Good work C! 3.0000000000000001 = 3
D says) Yes, I think C has hit it right on the head. And A's reasoning just brings tears of joy to my eyes.
E says) More like agony rather than joy.
Can you help the student get a decent answer to her problem using her calculator?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
Hi bobbym
[hide=reply]Then what?
"Can you demonstrate something we talked about that shows more understanding?"
What else is there?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
I sure can! What is the workhorse of numerical analysis?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
Close! What kind of series?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
Nope. Taylor series!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: What do you think?
Not exactly.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: What do you think?
bobbym wrote:
Problem #15:
What is the longest string of consecutive positive numbers that when added equal 2009?
Watch it it can be tricky.
This is the one I got. Is it the longest? No peeking!
which must be equal to 2009.So:
Because the sum should have as many terms as possible,we need n-m to maximum,so n-m+1 must also be maximum possible,but it still needs to be less than the term m+n And they need to be of the
different parity.We write all possible factorization of 4018 into 2 factors:
So the largest possible length is 4.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=213147","timestamp":"2014-04-18T16:22:00Z","content_type":null,"content_length":"45056","record_id":"<urn:uuid:e8a05c2d-8b0a-41ad-8a49-cad7996f4b01>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about Analytic Geometry on The Unapologetic Mathematician
Sorry for the delay from last Friday to today, but I was chasing down a good lead.
Anyway, last week I said that I’d talk about a linear map that extends the notion of the correspondence between parallelograms in space and perpendicular vectors.
First of all, we should see why there may be such a correspondence. We’ve identified $k$-dimensional parallelepipeds in an $n$-dimensional vector space $V$ with antisymmetric tensors of degree $k$:
$A^k(V)$. Of course, not every such tensor will correspond to a parallelepiped (some will be linear combinations that can’t be written as a single wedge of $k$ vectors), but we’ll just keep going and
let our methods apply to such more general tensors. Anyhow, we also know how to count the dimension of the space of such tensors:
This formula tells us that $A^k(V)$ and $A^{n-k}(V)$ will have the exact same dimension, and so it makes sense that there might be an isomorphism between them. And we’re going to look for one which
defines the “perpendicular” $n-k$-dimensional parallelepiped with the same size.
So what do we mean by “perpendicular”? It’s not just in terms of the “angle” defined by the inner product. Indeed, in that sense the parallelograms $e_1\wedge e_2$ and $e_1\wedge e_3$ are
perpendicular. No, we want any vector in the subspace defined by our parallelepiped to be perpendicular to any vector in the subspace defined by the new one. That is, we want the new parallelepiped
to span the orthogonal complement to the subspace we start with.
Our definition will also need to take into account the orientation on $V$. Indeed, considering the parallelogram $e_1\wedge e_2$ in three-dimensional space, the perpendicular must be $ce_3$ for some
nonzero constant $c$, or otherwise it won’t be perpendicular to the whole $x$-$y$ plane. And $\vert c\vert$ has to be ${1}$ in order to get the right size. But will it be $+e_3$ or $-e_3$? The
difference is entirely in the orientation.
Okay, so let’s pick an orientation on $V$, which gives us a particular top-degree tensor $\omega$ so that $\mathrm{vol}(\omega)=1$. Now, given some $\eta\in A^k(V)$, we define the Hodge dual $*\eta\
in A^{n-k}(V)$ to be the unique antisymmetric tensor of degree $n-k$ satisfying
for all $\zeta\in A^k(V)$. Notice here that if $\eta$ and $\zeta$ describe parallelepipeds, and any side of $\zeta$ is perpendicular to all the sides of $\eta$, then the projection of $\zeta$ onto
the subspace spanned by $\eta$ will have zero volume, and thus $\langle\zeta,\eta\rangle=0$. This is what we expect, for then this side of $\zeta$ must lie within the perpendicular subspace spanned
by $*\eta$, and so the wedge $\zeta\wedge*\eta$ should also be zero.
As a particular example, say we have an orthonormal basis $\{e_i\}_{i=1}^n$ of $V$ so that $\omega=e_1\wedge\dots\wedge e_n$. Then given a multi-index $I=(i_1,\dots,i_k)$ the basic wedge $e_I$ gives
us the subspace spanned by the vectors $\{e_{i_1},\dots,e_{i_k}\}$. The orthogonal complement is clearly spanned by the remaining basis vectors $\{e_{j_1},\dots,e_{j_{n-k}}\}$, and so $*e_I=\pm e_J$,
with the sign depending on whether the list $(i_1,\dots,i_k,j_1,\dots,j_{n-k})$ is an even or an odd permutation of $(1,\dots,n)$.
To be even more explicit, let’s work these out for the cases of dimensions three and four. First off, we have a basis $\{e_1,e_2,e_3\}$. We work out all the duals of basic wedges as follows:
\displaystyle\begin{aligned}*1&=e_1\wedge e_2\wedge e_3\\ *e_1&=e_2\wedge e_3\\ *e_2&=-e_1\wedge e_3=e_3\wedge e_1\\ *e_3&=e_1\wedge e_2\\ *(e_1\wedge e_2)&=e_3\\ *(e_1\wedge e_3)&=-e_2\\ *(e_2\wedge
e_3)&=e_1\\ *(e_1\wedge e_2\wedge e_3)&=1\end{aligned}
This reconstructs the correspondence we had last week between basic parallelograms and perpendicular basis vectors. In the four-dimensional case, the basis $\{e_1,e_2,e_3,e_4\}$ leads to the duals
\displaystyle\begin{aligned}*1&=e_1\wedge e_2\wedge e_3\wedge e_4\\ *e_1&=e_2\wedge e_3\wedge e_4\\ *e_2&=-e_1\wedge e_3\wedge e_4\\ *e_3&=e_1\wedge e_2\wedge e_4\\\ *e_4&=-e_1\wedge e_2\wedge e_3\\
*(e_1\wedge e_2)&=e_3\wedge e_4\\ *(e_1\wedge e_3)&=-e_2\wedge e_4\\ *(e_1\wedge e_4)&=e_2\wedge e_3\\ *(e_2\wedge e_3)&=e_1\wedge e_4\\ *(e_2\wedge e_4)&=-e_1\wedge e_3\\ *(e_3\wedge e_4)&=e_1\wedge
e_2\\ *(e_1\wedge e_2\wedge e_3)&=e_4\\ *(e_1\wedge e_2\wedge e_4)&=-e_3\\ *(e_1\wedge e_3\wedge e_4)&=e_2\\ *(e_2\wedge e_3\wedge e_4)&=-e_1\\ *(e_1\wedge e_2\wedge e_3\wedge e_4)&=1\end{aligned}
It’s not a difficult exercise to work out the relation $**\eta=(-1)^{k(n-k)}\eta$ for a degree $k$ tensor in an $n$-dimensional space.
Today I want to run through an example of how we use our new tools to read geometric information out of a parallelogram.
I’ll work within $\mathbb{R}^3$ with an orthonormal basis $\{e_1, e_2, e_3\}$ and an identified origin $O$ to give us a system of coordinates. That is, given the point $P$, we set up a vector $\
overrightarrow{OP}$ pointing from $O$ to $P$ (which we can do in a Euclidean space). Then this vector has components in terms of the basis:
and we’ll write the point $P$ as $(x,y,z)$.
So let’s pick four points: $(0,0,0)$, $(1,1,0)$, $(2,1,1)$, and $(1,0,1)$. These four point do, indeed, give the vertices of a parallelogram, since both displacements from $(0,0,0)$ to $(1,1,0)$ and
from $(1,0,1)$ to $(2,1,1)$ are $e_1+e_2$, and similarly the displacements from $(0,0,0)$ to $(1,0,1)$ and from $(1,1,0)$ to $(2,1,1)$ are both $e_1+e_3$. Alternatively, all four points lie within
the plane described by $x=y+z$, and the region in this plane contained between the vertices consists of points $P$ so that
for some $u$ and $v$ both in the interval $[0,1]$. So this is a parallelogram contained between $e_1+e_2$ and $e_1+e_3$. Incidentally, note that the fact that all these points lie within a plane
means that any displacement vector between two of them is in the kernel of some linear transformation. In this case, it’s the linear functional $\langle e_1-e_2-e_3,\underline{\hphantom{X}}\rangle$,
and the vector $e_1-e_2-e_3$ is perpendicular to any displacement in this plane, which will come in handy later.
Now in a more familiar approach, we might say that the area of this parallelogram is its base times its height. Let’s work that out to check our answer against later. For the base, we take the length
of one vector, say $e_1+e_2$. We use the inner product to calculate its length as $\sqrt{2}$. For the height we can’t just take the length of the other vector. Some basic trigonometry shows that we
need the length of the other vector (which is again $\sqrt{2}$) times the sine of the angle between the two vectors. To calculate this angle we again use the inner product to find that its cosine is
$\frac{1}{2}$, and so its sine is $\frac{\sqrt{3}}{2}$. Multiplying these all together we find a height of $\sqrt{\frac{3}{2}}$, and thus an area of $\sqrt{3}$.
On the other hand, let’s use our new tools. We represent the parallelogram as the wedge $(e_1+e_2)\wedge(e_1+e_3)$ — incidentally choosing an orientation of the parallelogram and the entire plane
containing it — and calculate its length using the inner product on the exterior algebra:
e_1+e_2,e_1+e_2\rangle&\langle e_1+e_2,e_1+e_3\rangle\\\langle e_1+e_3,e_1+e_2\rangle&\langle e_1+e_3,e_1+e_3\rangle\end{pmatrix}\\&=\det\begin{pmatrix}2&1\\1&2\end{pmatrix}\\&=\left(2\cdot2-1\cdot1\
Alternately, we could calculate it by expanding in terms of basic wedges. That is, we can write
\displaystyle\begin{aligned}(e_1+e_2)\wedge(e_1+e_3)&=e_1\wedge e_1+e_1\wedge e_3+e_2\wedge e_1+e_2\wedge e_3\\&=e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2\end{aligned}
This tells us that if we take our parallelogram and project it onto the $y$-$z$ plane (which has an orthonormal basis $\{e_2,e_3\}$) we get an area of ${1}$. Similarly, projecting our parallelogram
onto the $x$-$y$ plane (with orthonormal basis $\{e_1,e_2\}$ we get an area of $-1$. That is, the area is ${1}$ and the orientation of the projected parallelogram disagrees with that of the plane.
Anyhow, now the squared area of the parallelogram is the sum of the squares of these projected areas: $1^2+(-1)^2+(-1)^2=3$.
Notice, now, the similarity between this expression $e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2$ and the perpendicular vector we found before: $e_1-e_2-e_3$. Each one is the sum of three terms with
the same choices of signs. The terms themselves seem to have something to do with each other as well; the wedge $e_2\wedge e_3$ describes an area in the $y$-$z$ plane, while $e_1$ describes a length
in the perpendicular $x$-axis. Similarly, $e_1\wedge e_2$ describes an area in the $x$-$y$ plane, while $e_3$ describes a length in the perpendicular $z$-axis. And, magically, the sum of these three
perpendicular vectors to these three parallelograms gives the perpendicular vector to their sum!
There is, indeed, a linear correspondence between parallelograms and vectors that extends this idea, which we will explore tomorrow. The seemingly-odd choice of $e_3\wedge e_1$ to correspond to $e_2$
, though, should be a tip-off that this correspondence is closely bound up with the notion of orientation.
So, why bother with this orientation stuff, anyway? We’ve got an inner product on spaces of antisymmetric tensors, and that should give us a concept of length. Why can’t we just calculate the size of
a parallelepiped by sticking it into this bilinear form twice?
Well, let’s see what happens. Given a $k$-dimensional parallelepiped with sides $v_1$ through $v_k$, we represent the parallelepiped by the wedge $\omega=v_1\wedge\dots\wedge v_k$. Then we might try
defining the volume by using the renormalized inner product
Let’s expand one copy of the wedge $\omega$ out in terms of our basis of wedges of basis vectors
$\displaystyle k!\langle\omega,\omega\rangle=k!\langle\omega,\omega^Ie_I\rangle=k!\langle\omega,e_I\rangle\omega^I$
where the multi-index $I$ runs over all increasing $k$-tuples of indices $1\leq i_1<\dots<i_k\leq n$. But we already know that $\omega^I=k!\langle\omega,e_I\rangle$, and so this is squared-volume is
the sum of the squares of these components, just like we’re familiar with. Then we can define the $k$-volume of the parallelepiped as the square root of this sum.
Let’s look specifically at what happens for top-dimensional parallelepipeds, where $k=n$. Then we only have one possible multi-index $I=(1,\dots,n)$, with coefficient
$\displaystyle\omega^{1\dots n}=n!\langle e_1\wedge\dots\wedge e_n,v_1\wedge\dots\wedge v_n\rangle=\det\left(v_j^i\right)$
and so our formula reads
So we get the magnitude of the volume without having to worry about choosing an orientation. Why even bother?
Because we already do care about orientation. Let’s go all the way back to one-dimensional parallelepipeds, which are just described by vectors. A vector doesn’t just describe a certain length, it
describes a length along a certain line in space. And it doesn’t just describe a length along that line, it describes a length in a certain direction along that line. A vector picks out three things:
• A one-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A volume (length) of this oriented subspace.
And just like vectors, nondegenerate $k$-dimensional parallelepipeds pick out three things
• A $k$-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A $k$-dimensional volume of this oriented subspace.
The difference is that when we get up to the top dimension the space itself can have its own orientation, which may or may not agree with the orientation induced by the parallelepiped. We don’t
always care about this disagreement, and we can just take the absolute value to get rid of a sign if we don’t care, but it might come in handy.
Yesterday we established that the $k$-dimensional volume of a parallelepiped with $k$ sides should be an alternating multilinear functional of those $k$ sides. But now we want to investigate which
The universal property of spaces of antisymmetric tensors says that any such functional corresponds to a unique linear functional $V_k:A^k\left(\mathbb{R}^n\right)\rightarrow\mathbb{R}$. That is, we
take the parallelepiped with sides $v_1$ through $v_k$ and represent it by the antisymmetric tensor $v_1\wedge\dots\wedge v_k$. Notice, in particular, that if the parallelepiped is degenerate then
this tensor is ${0}$, as we hoped. Then volume is some linear functional that takes in such an antisymmetric tensor and spits out a real number. But which linear functional?
I’ll start by answering this question for $n$-dimensional parallelepipeds in $n$-dimensional space. Such a parallelepiped is represented by an antisymmetric tensor with the $n$ sides as its
tensorands. But we’ve calculated the dimension of the space of such tensors: $\dim\left(A^n\left(\mathbb{R}^n\right)\right)=1$. That is, once we represent these parallelepipeds by antisymmetric
tensors there’s only one parameter left to distinguish them: their volume. So if we specify the volume of one parallelepiped linearity will take care of all the others.
There’s one parallelepiped whose volume we know already. The unit $n$-cube must have unit volume. So, to this end, pick an orthonormal basis $\left\{e_i\right\}_{i=1}^n$. A parallelepiped with these
sides corresponds to the antisymmetric tensor $e_1\wedge\dots\wedge e_n$, and the volume functional must send this to ${1}$. But be careful! The volume doesn’t depend just on the choice of basis, but
on the order of the basis elements. Swap two of the basis elements and we should swap the sign of the volume. So we’ve got two different choices of volume functional here, which differ exactly by a
sign. We call these two choices “orientations” on our vector space.
This is actually not as esoteric as it may seem. Almost all introductions to vectors — from multivariable calculus to vector-based physics — talk about “left-handed” and “right-handed” coordinate
systems. These differ by a reflection, which would change the signs of all parallelepipeds. So we must choose one or the other, and choose which unit cube will have volume ${1}$ and which will have
volume $-1$. The isomorphism from $\Lambda(V)$ to $\Lambda(V)^*$ then gives us a “volume form” $\mathrm{vol}\left(\underline{\hphantom{X}}\right)=n!\langle e_1\wedge\dots\wedge e_n,\underline{\
hphantom{X}}\rangle$, which will give us the volume of a parallelepiped represented by a given top-degree wedge.
Once we’ve made that choice, what about general parallelepipeds? If we have sides $\left\{v_1\right\}_{i=1}^n$ — written in components as $v_i^je_j$ — we represent the parallelepiped by the wedge
$v_1\wedge\dots\wedge v_n$. This is the image of our unit cube under the transformation sending $e_i$ to $v_i$, and so we find
\displaystyle\begin{aligned}\mathrm{vol}\left(v_1\wedge\dots\wedge v_n\right)&=n!\langle e_1\wedge\dots\wedge e_n,v_1\wedge\dots\wedge v_n\rangle\\&=\det\left(\langle e_i,v_j\rangle\right)\\&=\det\
The volume of the parallelepiped is the determinant of this transformation.
Incidentally, this gives a geometric meaning to the special orthogonal group $\mathrm{SO}(n,\mathbb{R})$. Orthogonal transformations send orthonormal bases to other orthonormal bases, which will send
unit cubes to other unit cubes. But the determinant of an orthogonal transformation may be either $+1$ or $-1$. Transformations of the first kind make up the special orthogonal group, while
transformations of the second kind send “positive” unit cubes to “negative” ones, and vice-versa. That is, they involve some sort of reflection, swapping the choice of orientation we made above.
Special orthogonal transformations are those which preserve not only lengths and angles, but the orientation of the space. More generally, there is a homomorphism $\mathrm{GL}(n,\mathbb{R})\
rightarrow\mathbb{Z}_2$ sending a transformation to the sign of its determinant. Transformations with positive determinant are said to be “orientation-preserving”, while those with negative
determinant are said to be “orientation-reversing”.
And we’re back with more of what Mr. Martinez of Harvard’s Medical School assures me is onanism of the highest caliber. I’m sure he, too, blames me for not curing cancer.
Coming up in our study of calculus in higher dimensions we’ll need to understand parallelepipeds, and in particular their volumes. First of all, what is a parallelepiped? Or, more specifically, what
is a $k$-dimensional parallelepiped in $n$-dimensional space? It’s a collection of points in space that we can describe as follows. Take a point $p$ and $k$ vectors $\left\{v_i\right\}_{i=1}^k$ in $\
mathbb{R}^n$. The parallelepiped is the collection of points reachable by moving from $p$ by some fraction of each of the vectors $v_i$. That is, we pick $k$ values $t^i$, each in the interval $\left
[0,1\right]$, and use them to specify the point $p+t^iv_i$. The collection of all such points is the parallelepiped with corner $p$ and sides $v_i$.
One possible objection is that these sides may not be linearly independent. If the sides are linearly independent, then they span a $k$-dimensional subspace of the ambient space, justifying our
calling it $k$-dimensional. But if they’re not, then the subspace they span has a lower dimension. We’ll deal with this by calling such a parallelepiped “degenerate”, and the nice ones with linearly
independent sides “nondegenerate”. Trust me, things will be more elegant in the long run if we just deal with them both on the same footing.
Now we want to consider the volume of a parallelepiped. The first observation is that the volume doesn’t depend on the corner point $p$. Indeed, we should be able to slide the corner around to any
point in space as long as we bring the same displacement vectors along with us. So the volume should be a function only of the sides.
The second observation is that as a function of the sides, the volume function should commute with scalar multiplication in each variable separately. That is, if we multiply $v_i$ by a non-negative
factor of $\lambda$, then we multiply the whole volume of the parallelepiped by $\lambda$ as well. But what about negative scaling factors? What if we reflect the side (and thus the whole
parallelepiped) to point the other way? One answer might be that we get the same volume, but it’s going to be easier (and again more elegant) if we say that the new parallelepiped has the negative of
the original one’s volume.
Negative volume? What could that mean? Well, we’re going to move away from the usual notion of volume just a little. Instead, we’re going to think of “signed” volume, which includes the possibility
of being positive or negative. By itself, this sign will be less than clear at first, but we’ll get a better understanding as we go. As a first step we’ll say that two parallelepipeds related by a
reflection have opposite signs. This won’t only cover the above behavior under scaling sides, but also what happens when we exchange the order of two sides. For example, the parallelogram with sides
$v_1=a$ and $v_2=b$ and the parallelogram with sides $v_1=b$ and $v_2=a$ have the same areas with opposite signs. Similarly, swapping the order of two sides in a given parallelepiped will flip its
The third observation is that the volume function should be additive in each variable. One way to see this is that the $k$-dimensional volume of the parallelepiped with sides $v_1$ through $v_k$
should be the product of the $k-1$-dimensional volume of the parallelepiped with sides $v_1$ through $v_{k-1}$ and the length of the component of $v_k$ perpendicular to all the other sides, and this
length is a linear function of $v_k$. Since there’s nothing special here about the last side, we could repeat the argument with the other sides.
The other way to see this fact is to consider the following diagram, helpfully supplied by Kate from over at f(t):
The side of one parallelogram is the (vector) sum of the sides of the other two, and we can see that the area of the one parallelogram is the sum of the areas of the other two. This justifies the
assertion that for parallelograms in the plane, the area is additive as a function of one side (and, similarly, of the other). Similar diagrams should be apparent to justify the assertion for
higher-dimensional parallelepipeds in higher-dimensional spaces.
Putting all these together, we find that the $k$-dimensional volume of a parallelepiped with $k$ sides is an alternating multilinear functional, with the $k$ sides as variables, and so it lives
somewhere in the exterior algebra $\Lambda(V^*)$. We’ll have to work out which particular functional gives us a good notion of volume as we continue.
• Recent Posts
• Blogroll
• Art
• Astronomy
• Computer Science
• Education
• Mathematics
• Me
• Philosophy
• Physics
• Politics
• Science
• RSS Feeds
• Feedback
Got something to say? Anonymous questions, comments, and suggestions at
• Subjects
• Archives
April 2014
M T W T F S S
« Sep | {"url":"https://unapologetic.wordpress.com/category/geometry/analytic-geometry/","timestamp":"2014-04-21T12:14:28Z","content_type":null,"content_length":"104792","record_id":"<urn:uuid:1e5b8dc2-1bc7-4988-be93-6101b806dea7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by kaykay on Friday, April 22, 2011 at 6:37pm.
Luisa and Connor had $360 altogether. after conner gave Luisa 3/5 of his money, she had the same amount of money as he did. how much did conner have in the beginning?
• math - typo? - MathMate, Friday, April 22, 2011 at 7:32pm
Please check if there is a typo.
It is not possible that after Connor gave more than half of his money to Luisa and they still have an equal amount... unless Luisa had a negative amount (i.e. owes money) to start with.
• math - kaykay, Friday, April 22, 2011 at 7:42pm
yes there is a typo it's supposed to be that he gave her 2/5 of his money.
• math - Ms. Sue, Friday, April 22, 2011 at 7:48pm
KayKay -- please check the links below the Related Questions.
• math - kaykay, Friday, April 22, 2011 at 7:49pm
I don't understand those answers and wanted to see of anyone explained it differently.
• math - MattsRiceBowl, Friday, April 22, 2011 at 9:11pm
Start by writing down what we know.
--There is $360 between two people.
--The names of the two people are Connor (which we will now call "C") and Luisa (we will call her "L").
--C gave L 2/5 of his money. They then had the same amount of money.
So let's break it down into what we can write for math. Since they both have $360, we can say if we add C and L's money together, there is $360.
We also know that if C gives L 2/5 of his money, they will have the same amount.
So L gets 2/5 of C's money.
L + 2/5C
That is equal to C losing 2/5 of his money (since he gave it to L)
C - 2/5C
Since they are now the same, we can put them together and say they are equal.
L + 2/5C = C - 2/5C
Now we have 2 equations:
L + 2/5C = C - 2/5C
C+L = 360
Now what we want to do is put one letter by itself in any equation. (2nd one is easier to do).
C + L = 360
L = 360 - C
Now we can take that and change out the "C" in the other equation:
L + 2/5C = C - 2/5C
Anywhere we see a "L" we will change it to "360-C"
360-C + 2/5C = C - 2/5C
Now we just have to get C by itself.
360 = C - 2/5C + C - 2/5C
360 = 2C - 4/5C
360 = 1 1/5C
360 = 6/5 C
300 = C
So we know C had $300
The only question now is how much did L have?
Remember "C + L = 360?"
Since C had $300, we can say
300 + L = 360
L = 60
So C had $300
L had $60
Now we can check. C gave L 2/5 of his $$
300(2/5) = $120.
So C gave L $120, leaving him with $180.
L got $120. She had $60. So L NOW has $180
They both have the same amount.
Hope that helps. :-D
Related Questions
math - Luisa and Connor had $360 altogether. after conner gave Luisa 3/5 of his ...
math !!!! :O - Luisa and Connor had $360 altogether. After Connor gave Luisa 2/5...
math !!!! :O - Luisa and Connor had $360 altogether. After Connor gave Luisa 2/5...
maths - Harry and Potter had $224 altogether. When Harry gave 1/5 of his money ...
math - Doreen and Raju had the same amount of money. Lihua and Raju had $3406 ...
math - Lexi and Maria had $250 altogether. After Lexi spent 2 over 5 of her ...
math word problem - lexi and maria had $250.oo altogether. After lexi spent 2/5 ...
math - Lexi and Marie had $250 altogether. After Lexi spent 2/5 of her money and...
math - David and Marie had $300 altogether. After David spent 2/5 of his money ...
maths - 1)There are 7 groups of Scouts attending a campfire with an average of ... | {"url":"http://www.jiskha.com/display.cgi?id=1303511841","timestamp":"2014-04-16T19:19:16Z","content_type":null,"content_length":"11201","record_id":"<urn:uuid:40d8eb9c-04a7-4bb6-a9b2-c979b7427dc1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lafayette Hill Science Tutor
...Unlike the one-size-fits-all test-prep courses, and the overly-structured national tutoring companies, I always customize my methods and presentation for the student at hand. There is a big
difference between a student striving for 700s on the SAT and one hoping to reach the 500s. Likewise, a student struggling with Algebra I is a far cry from one going for an A in honors
23 Subjects: including ACT Science, English, calculus, geometry
...My other experience working with children involves being a tutor in third grade English and Mathematics for the 2012 fall term through Champion Learning Center in New York. I graduated from
Johns Hopkins University with a Bachelor's degree in Biology and am very competent in the subject. I am c...
14 Subjects: including nutrition, physiology, biology, anatomy
...I have earned both a BA from Temple University and an MA from McGill University. I am also currently enrolled in an MA Psychology program. In addition, I worked for three years at Temple
University's Writing Center.
25 Subjects: including psychology, reading, writing, English
ANITA is a dynamic, natural-born teacher, performer and educator. She has offered and mastered classes in local school districts including creating a jazz flute workshop for middle school and high
school students. Her education includes, a B.S. from Rutgers University, graduate education courses a...
51 Subjects: including anatomy, biology, chemistry, physiology
...I am confident that I can help you to excel in whichever area you feel that you need help in. Thank you for considering me as your future tutor! I have taken several courses in genetics and I
am currently pursuing a Masters of Science in Genetic Counseling.
11 Subjects: including anatomy, biology, grammar, prealgebra | {"url":"http://www.purplemath.com/Lafayette_Hill_Science_tutors.php","timestamp":"2014-04-20T04:01:54Z","content_type":null,"content_length":"24139","record_id":"<urn:uuid:ccedfac2-c5f4-410a-b42f-de1de87ee9a9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plainfield, IL Trigonometry Tutor
Find a Plainfield, IL Trigonometry Tutor
...I am currently a student at Joliet Junior College with a 4.0 GPA. I am a Mechanical Engineering major. In high school I consistently had a 3.5 GPA and graduated in the top 10% of my class I
attended Plainfield Central High School.
12 Subjects: including trigonometry, biology, algebra 1, algebra 2
...I will work with you to set goals for your learning and create a plan for you to achieve those goals. I completed the class discrete mathematics for computer science while in college. The
topics covered were logic, proofs, mathematical induction, sets, relations, graph theory etc.
31 Subjects: including trigonometry, chemistry, calculus, statistics
...I am currently a Senior in College at Northern Illinois University studying BioMedical Engineering. I have my last two classes left and will be officially done! I have listed courses that I
have passed with a C or better which is required by the Engineering Department.
26 Subjects: including trigonometry, reading, calculus, chemistry
...I can be helpful in homework assignments and solving problems as well as building good concepts in these subjects. I have a PhD and 2 Masters' degrees in physics. I have been teaching Math and
Physics at colleges and Universities in Michigan and Illinois.
11 Subjects: including trigonometry, physics, algebra 2, geometry
...I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this process. I strive to find creative ways
to make subject matter interesting and relatable for all individuals while presenting material in an ea...
25 Subjects: including trigonometry, chemistry, calculus, physics
Related Plainfield, IL Tutors
Plainfield, IL Accounting Tutors
Plainfield, IL ACT Tutors
Plainfield, IL Algebra Tutors
Plainfield, IL Algebra 2 Tutors
Plainfield, IL Calculus Tutors
Plainfield, IL Geometry Tutors
Plainfield, IL Math Tutors
Plainfield, IL Prealgebra Tutors
Plainfield, IL Precalculus Tutors
Plainfield, IL SAT Tutors
Plainfield, IL SAT Math Tutors
Plainfield, IL Science Tutors
Plainfield, IL Statistics Tutors
Plainfield, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/Plainfield_IL_Trigonometry_tutors.php","timestamp":"2014-04-17T13:44:24Z","content_type":null,"content_length":"24270","record_id":"<urn:uuid:0ddfb9e6-bda9-473b-bc9c-817608d25648>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
What Does Your Garden Grow? Unit Update...
Hi there friends! We have been busy the past few weeks learning all about plants.
We began the unit by experimenting with seeds.
The students completed the What's A Coat? experiment. They were excited to dissect the seeds and use their magnifying glass to investigate! They recorded their findings in their plant journal.
The next activity, the students had to predict which would grow faster, a seed with a coat or a seed without? Each students took the coats off 2 seeds and placed them in a bag. They did the same for
2 seeds with the coats on. We hung them outside for 5 days to see what would happen.
The seeds without the coats grew the fastest! The kids were a little baffled by this, until we went back and redid the experiment. Since we had been learning all about the life cycle, some students
realized removing the coat sped up the process of the cycle. They were pretty amazed!
We have worked our we through much of the unit.
Last week we focused on plant needs.
We have been adding to our KWL and it is filling up!
We have been observing our garden grow from this...
to this...
We had to add a trellis because the cucumbers are out of control!
This week we have been working on plant vocabulary.
Which couldn't have come at a better time, because we have flowers on the cucumbers, green beans, and pumpkins!
The kids have been asking what will happen now, so the book From Seed to Plant was a great way to introduce the vocabulary and the next step in the life cycle!
This week will be all about wrapping it up with a life cycle diagram and flipbook.
This is last year's flipbook. I have fancied it up in my new unit. =)
So that is where we are at in the unit!
[S:The rules are simple!:S]
[S:For each one you follow, leave a comment! That will give you 4 entries to win!:S]
[S:Please leave an e-mail with your comment! :S]
[S:I will randomly draw 3 winners on Wednesday, April 18th in the evening!:S]
Good luck and thanks for stopping by!
65 comments:
1. I just dicovered your blog this week when hunting for 'plant unit' ideas' and have added you to my favs and am now a follower. I would love..love...love to be a winner!!!!
2. I follow your blog!
3. I follow your TpT store!
4. I follow your TN store!
5. I follow your facebook page!
6. I follow your blog!!
7. I follow your TpT store!
8. I follow your TN store!
9. I follow your blog.
10. I follow your TPT store.
11. I follow your TN store.
12. I liked your FB page.
13. I follow your blog.
Yearn to Learn Blog
14. I follow your blog!
15. I follow your tpt store!
16. I follow your TN store!
17. I follow your blog!!
18. I follow your TN shop!!
19. I follow your TPT store!!
20. I like Ms.Smarty Pants on Facebook!!
21. I follow your blog!
❤ Sandra
Sweet Times in First
22. I follow your TpT store!
❤ Sandra
Sweet Times in First
23. I like your TN store!
❤ Sandra
Sweet Times in First
24. I like ya on FB!
❤ Sandra
Sweet Times in First
25. I follow your blog!
An Education Lasts a Lifetime
26. I follow you on TN!
An Education Lasts a Lifetime
27. I like you on facebook!
An Education Lasts a Lifetime
28. I follow you on TpT too!
An Education Lasts a Lifetime
29. Found you when looking for plant things to supplement our unit. Just added your blog to the ones I follow in google reader.
tokyoshoes (at) hotmail (dot) com
30. ALso following your TpT store. (BTW, your link takes people to TpT instead of your store...)
tokyoshoes (at) hotmail (dot) com
1. Thanks so much! I just had a blog make-over and changed my name! =) All fixed now!
31. Also favorited your TN store.
tokyoshoes (at) hotmail (dot) com
32. I follow your blog
33. I follow your TPT Store
34. I follow your Teacher's Notebook Store
35. I liked Ms. Smarty Pants on Facebook
36. I am a new blog follower.
37. I follow your TPT store.
38. I follow your TN store.
39. I like you on Facebook.
40. Glad I found your blog...I'm your newest follower! Hope to have you stop by my blog soon...
41. I just discovered your blog, now I'm a follower! Thanks!
42. I started following your TPT store today!
43. I added you to my favorite stores on TN. Thanks!
44. I "LIKE" Ms. Smarty Pants Beattie on FB :)
45. I added you to my favorite store on TN.
46. I am a follower of TpT!
47. I liked you on Facebook.
48. I am a follower of your blog.
49. I am your newest follower!
Teaching, Learning, & Loving
P.S-I am having a giveaway too!! Come check it out!
50. Just came across your blog and the give away...everything looks great!!! =) Happy to be a new follower!
Stop by and check out my blog sometime:
51. I follow your blog! We dissected seeds last week too. I love your garden also. We are growing lima beans in a cup. :)You can check out what we did on my blog, Primary Reading Party. http://
52. I liked your FB page! I also shared your giveaway on my page!
53. I follow you on TPT!
54. I follow your TN store!
55. Getting ready to start a plant unit with 2nd graders. Your unit looks so cute!! I joined your blog so am looking forward to more fun things from you!! stuchaca@yahoo.com
56. I follow your blog! :o)
jennkeys @ gmail . com
57. I follow your TpT store! :o)
jennkeys @ gmail . com
58. I follow your TN shop! :o)
jennkeys @ gmail . com
59. I just dicovered your blog this week! I would love..love...love to be a winner!!!! I follow your blog.
60. I also follow your TpT store! It's awesome.
61. I love your TN store. I have found this site a must have for any teacher no matter what level you teach. Thanks.
62. I "liked" you on Facebook too!
63. I liked you on FB!
64. Wow! Amazing garden! I wish I could keep things ALIVE :) I have a team teacher who has a green thumb! I count on her! Glad I found your blog on TBA :) I am new to blogger and having a blast! If
you are interested in learning about integrating technology, maybe stop by my new blog! I am your newest follower! | {"url":"http://msbeattie-samantha.blogspot.com/2012/04/what-does-your-garden-grow-unit-update.html","timestamp":"2014-04-19T09:24:27Z","content_type":null,"content_length":"280462","record_id":"<urn:uuid:2f97ac86-5961-478c-8900-857501cc46f8>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00544-ip-10-147-4-33.ec2.internal.warc.gz"} |
The P-and-NP Show comes to Caltech
Here are the PowerPoint slides for a physics colloquium I gave at Caltech yesterday, on “Computational Intractability as a Law of Physics.” The talk was delivered, so I was told, in the very same
auditorium where Feynman gave his Lectures on Physics. At the teatime beforehand, I was going to put both milk and lemon in my tea to honor the old man, but then I decided I actually didn’t want to.
I’m at Caltech till Tuesday, at which point I leave for New Zealand, to visit my friend Miriam from Berkeley and see a country I always wanted to see, and thence to Australia for QIP. This Caltech
visit, my sixth or seventh, has been every bit as enjoyable as I’ve come to expect: it’s included using Andrew Childs as a straight man for jokes, shootin’ the qubits with Shengyu Zhang, Aram Harrow,
and Robin Blume-Kohout, and arguing with Sean Carroll over which one of us is the second-funniest physics blogger (we both agree that Luboš is the funniest by far). Indeed, John Preskill (my host)
and everyone else at the Institute for Quantum Information have been so darn hospitable that from now on, I might just have to shill for quantum computing theory.
John Sidles Says:
Comment #1 January 19th, 2007 at 10:32 pm
Scott, wouldn’t P=NP resolve the “hard-AI” problem too?
E.g: “Is there a plausible Turing Test whose answers can be checked by a deterministic algorithm that is demonstrably in complexity class NP, and if so, is there a program of length 10 TBytes or less
that yields answers that pass this test?”
Rather than express an opinion for myself, here is the (very enjoyable) Turing Test Page, which has many references.
Scott Says:
Comment #2 January 19th, 2007 at 11:02 pm
John: There’s a very real question how much of the problem of simulating human intelligence is in NP to begin with! I.e. can you give a polytime algorithm to recognize a great symphony when you hear
That said, certainly if P=NP there would be dramatic consequences for AI. For example, you could efficiently find the shortest program that outputs (say) the complete works of Shakespeare or stock
market data in a reasonable amount of time. The question is just how much that would do for you — i.e. once you had that short description, could you use it to predict the future or write plays that
Shakespeare himself never wrote?
Kea Says:
Comment #3 January 19th, 2007 at 11:17 pm
Cool! Are you going to be in NZ after Jan 29? If so, you are hereby cordially invited to give a talk in the Dept of Physics at the University of Canterbury. I’ll arrange it – or you could just email
the seminar organiser. Anyway, where are you going in NZ? You can’t miss the South Island.
Kea Says:
Comment #4 January 19th, 2007 at 11:18 pm
Bonus: if you like hiking, just let me know.
John Sidles Says:
Comment #5 January 19th, 2007 at 11:22 pm
Hmmmm … output Shakespeare? Too easy.
Let’s see … how about, “the shortest program that outputs valid proofs of every theorem in Serge Lang’s Algebra (Third Edition)”.
Heck, we might as well make it “That can prove every theorem and solve every problem in <every math and physics book ever published>”.
Then the tough question is: would it be possible to learn anything important about math and physics by inspecting this code?
More loosely, would this program necessarily be able to explain what it was doing? Would it make a good thesis advisor?
Dave Bacon Says:
Comment #6 January 20th, 2007 at 1:11 am
Bridge lecture hall was where I took freshman physics…or at least where I was supposed to go to take freshman physics…I didn’t really like going to class much.
BTW, ask Daniel Gottesman about that room, Feynman, and the talk he gave there.
Scott Says:
Comment #7 January 20th, 2007 at 1:23 am
Cool! Are you going to be in NZ after Jan 29? If so, you are hereby cordially invited to give a talk in the Dept of Physics at the University of Canterbury.
Sorry, Kea! I’m leaving on Jan. 29 for Brisbane, since QIP starts the morning of the 30th. And to make matters worse, I’ll be in Auckland, which (looking at a map) seems to be quite a ways from
Scott Says:
Comment #8 January 20th, 2007 at 1:26 am
John (and everyone else): Please remember to close your italics tags — WordPress being stupid, it seems that otherwise you involuntarily italicize all the other comments! In this case, the way I
fixed the problem was by putting a closing </i> tag at the beginning of my own comment.
John Sidles Says:
Comment #9 January 20th, 2007 at 1:41 am
Apologies for the html markup bug. Good thing it wasn’t an unclosed “<strike>” … [S:doh !!!:S]
(no, I didn’t really do it)
Ari Says:
Comment #10 January 20th, 2007 at 6:40 am
Argh, sorry I missed it … I’m an applied math guy at Caltech, but I enjoy the blog and would have liked to attend the talk. Any other events/lectures planned before you depart Pasadena?
Scott Says:
Comment #11 January 20th, 2007 at 7:25 am
Sorry, Ari! I guess I should announce talks in advance.
If you’d like, come by 156B Jorgensen around 1PM on Monday and join our group for lunch.
Ari Says:
Comment #12 January 22nd, 2007 at 6:53 am
Thanks for the invite, Scott, but unfortunately I don’t think I can make it. (Too much work, presenting a poster on Tuesday … such is the life of a grad student.) Perhaps I’ll catch the P-vs-NP show
the next time it hits the west coast. Thanks again!
Scott Coon Says:
Comment #13 January 23rd, 2007 at 10:37 am
This probably falls in the category of “there really are stupid questions”, but is there any connection between Bennett et al. ’97′s n/2 result and the fact that the complex numbers are a quadratic
extension of the reals (i.e., complex phases vs. real probabilities)?
Scott Says:
Comment #14 January 23rd, 2007 at 10:43 am
is there any connection between Bennett et al. ’97’s n/2 result and the fact that the complex numbers are a quadratic extension of the reals (i.e., complex phases vs. real probabilities)?
First of all, it’s sqrt(n), not n/2.
The fact that quantum search takes sqrt(n) time is directly related to quantum mechanics being based on the 2-norm instead of the 1-norm. It’s not related to real vs. complex numbers, since you’d get
exactly the same result (sqrt(n)) with real amplitudes only.
Pascal Koiran Says:
Comment #15 January 24th, 2007 at 4:07 am
Does the characterization of PP as PostBQP help to solve any of the open problems on the closure properties of PP ?
(I suspect it doesn’t, or you would have already done it,
right ?)
Scitt Says:
Comment #16 January 24th, 2007 at 12:31 pm
Open problems — like what?
The Quantum Pontiff » Scirate.com Says:
Comment #17 January 24th, 2007 at 2:56 pm
[...] Hey, since Scott Aaronson can now claim to be “the second funniest physics blogger,” maybe with these skills I can claim to be “the second least funny computer science blogger!” [...]
Pascal Koiran Says:
Comment #18 January 25th, 2007 at 4:41 am
> Open problems — like what?
According to the Complexity Theory Companion, there were a few generalizations of the “closure under intersection” result. For instance, in 1996 Fortnow (who would become famous later for his
Computational Complexity blog) and Reingold proved closure under truth-table reductions.
But the book claims that closure under polynomial time Turing reductions is open.
Scott Says:
Comment #19 January 26th, 2007 at 2:00 am
Oh, that! You can certainly use the PostBQP characterization to reprove Reingold’s result on closure under truth-table reductions — I say that in the paper.
As for closure under Turing reductions, if that held it would mean P = PP^PP! And that would be extremely surprising — Beigel gave an oracle relative to which not even P^NP is contained in PP. So
most likely, the reason my characterization doesn’t give you closure under Turing reductions is that it’s false!
Pascal Koiran Says:
Comment #20 January 26th, 2007 at 4:33 am
Thanks for the quick answer – you meant PP = P^PP, right ?
Speaking of relativizations, does the closure under intersection result relativize ?
Scott Says:
Comment #21 January 26th, 2007 at 1:36 pm
Yeah, I meant PP = P^PP — sorry. Miriam, my hostess here in Auckland, is not leaving me a lot of time to double-check my blog comments.
Yes, the closure under intersection result relativizes. | {"url":"http://www.scottaaronson.com/blog/?p=190","timestamp":"2014-04-20T21:03:26Z","content_type":null,"content_length":"20041","record_id":"<urn:uuid:3b9b043e-ab28-4794-8039-a5af6fe2baf3>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
ORBi: Ivanova K. - Statistical derivation of the evolution equation of liquid water path fluctuations in clouds
Statistical derivation of the evolution equation of liquid water path fluctuations in clouds
Ivanova, K. [> > > >]
Ausloos, Marcel [Université de Liège - ULg > Département de physique > Physique statistique appliquée et des matériaux >]
Journal of Geophysical Research. Atmospheres
Amer Geophysical Union
Yes (verified by ORBi)
[en] liquid water path ; Fokker-Planck equation ; statistical derivation ; fluctuations
[en] [1] How to distinguish and quantify deterministic and random influences on the statistics of turbulence data in meteorology cases is discussed from first principles. Liquid water path (LWP)
changes in clouds, as retrieved from radio signals, upon different delay times, can be regarded as a stochastic Markov process. A detrended fluctuation analysis method indicates the existence of
long range time correlations. The Fokker-Planck equation which models very precisely the LWP fluctuation empirical probability distributions, in particular, their non-Gaussian heavy tails is
explicitly derived and written in terms of a drift and a diffusion coefficient. Furthermore, Kramers-Moyal coefficients, as estimated from the empirical data, are found to be in good agreement with
their first principle derivation. Finally, the equivalent Langevin equation is written for the LWP increments themselves. Thus rather than the existence of hierarchical structures, like an energy
cascade process, strong correlations on different timescales, from small to large ones, are considered to be proven as intrinsic ingredients of such cloud evolutions.
Researchers ; Professionals ; Students | {"url":"http://orbi.ulg.ac.be/handle/2268/89752","timestamp":"2014-04-20T21:07:06Z","content_type":null,"content_length":"14677","record_id":"<urn:uuid:5111a0b1-eedd-4d37-b842-f96d4d91a7ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reliability Conditions
Total a = the smaller frequency of “positive” identification by one test
b = the larger frequency of “positive” identification by the other test
Point by point a = the number of cases with the same identification from both tests
b = the total number of cases
Occurrence a = the number of cases who tested “positive” in both tests
b = the number of cases who tested “positive” at least in one test
Nonoccurrence a = the number of cases who tested “negative” in both tests
b = the number of cases who tested “negative” at least in one test | {"url":"http://www.hindawi.com/journals/ijoto/2012/852714/tab5/","timestamp":"2014-04-17T21:49:29Z","content_type":null,"content_length":"2544","record_id":"<urn:uuid:5d327a4e-b543-4581-a42d-e970a47b133f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
F=kx (Hooke's Law) Problem
February 6th 2009, 02:29 PM
F=kx (Hooke's Law) Problem
Can someone guide me through this problem:
2. A mass of 500 g stretches a spring 8.0 cm when it is attached to it.
What additional weight would you have to add to it so that the spring is stretched 10 cm?
__________________________________________________ __________________________
I tried plugging in numbers into the rule and that didn't lead me anywhere!
February 6th 2009, 02:41 PM
$F = kx$
$mg$ is weight ... force due to gravity
$mg = kx$
$500g = k \cdot 8$
let $m$ = the additional mass (in grams) required ...
$(500+m)g = k\cdot 10$
divide the 2nd equation by the first ...
$\frac{(500+m)g = k\cdot 10}{500g = k \cdot 8}$
$\frac{500+m}{500} = \frac{10}{8}$
solve the last equation for $m$
February 7th 2009, 03:43 PM
Alternative Method
As stated, the equation for Hooke's Law can be rewritten as follows:
The gravitational constant $g=9.81\left(\frac{m}{s^{2}}\right)$ is substituted for acceleration $a$, $k$ represents the spring constant, and $x$ is the displacement (distance stretched in
Rearrange in terms of $k$ and solve:
$-k=\frac{(0.50kg)(9.81\left(\frac{m}{s^{2}}\right)) }{0.08m}$
Now rewrite the equation in its standard form $F=-kx$ and evaluate using the new length.
The rest is pretty straight-forward, just remember that weight is a measure of gravitational force and mind your significant figures (2).
February 7th 2009, 10:02 PM
Hooke's Law
Hello s3a
Can someone guide me through this problem:
2. A mass of 500 g stretches a spring 8.0 cm when it is attached to it.
What additional weight would you have to add to it so that the spring is stretched 10 cm?
__________________________________________________ __________________________
I tried plugging in numbers into the rule and that didn't lead me anywhere!
Forget formulae. Just use common sense. Hooke's Law says that:
□ The tension in an elastic string is proportional to its extension.
In other words, if you double one, you double the other; if you increase one by 50%, you increase the other by 50%, and so on.
You want to increase the extension from 8 cm to 10 cm; in other words, by 25%. So increase the tension by 25% as well. Put on an extra mass of 25% of 500 gm. In other words, you'll need an extra
125 gm.
That was easy, wasn't it?
February 8th 2009, 01:10 AM
Well, you can solve it using common sense. But using formula, and understand enable you to solve any question.
500g=8k (1) initial condition
let say the weight to strecth the spring 10cm is M
Mg=10k (2)
you will get M = 625g
so it will require additional 125g
February 8th 2009, 02:02 AM
Hooke's Law
Hello elliotyang
Of course, I'm not suggesting that you should always forget the formula. But you should always look for a simple approach first! When solving a quadratic, for example, always try to factorise
before reaching for the formula.
February 8th 2009, 02:12 PM
But the answer is 1.2N.
February 8th 2009, 02:29 PM
February 8th 2009, 02:50 PM
As stated, the equation for Hooke's Law can be rewritten as follows:
The gravitational constant $g=9.81\left(\frac{m}{s^{2}}\right)$ is substituted for acceleration $a$, $k$ represents the spring constant, and $x$ is the displacement (distance stretched in
Rearrange in terms of $k$ and solve:
$-k=\frac{(0.50kg)(9.81\left(\frac{m}{s^{2}}\right)) }{0.08m}$
Now rewrite the equation in its standard form $F=-kx$ and evaluate using the new length.
The rest is pretty straight-forward, just remember that weight is a measure of gravitational force and mind your significant figures (2).
If you had followed my methodology, then you would have arrived at the correct answer. The above process is most likely the preferred one for a high school physics student.
To conclude where I left off above:
$F=-(-61.31\left(\frac{N}{m}\right))(0.10m)$Notice how the units cancel.
$F_(original)=(0.50kg)(9.81\left(\frac{m}{s^{2}}\ri ght))=4.905N$
February 8th 2009, 04:53 PM
Dear Knowledge,
I think your approach is kind of long winded.
For this case, F=-kx=mg
it is obviously seen that g and k are constant when compare the two situation since g is 9.81m/s^2 while k=spring constant. (since using the same spring, this vqlue should be the same for both)
So from this, we can deduce that m is directly proportional to x.
use this relation would be much faster.
instead you plug in the value into the formula compute, and then reverse it. of course it is not wrong, just kind of wasting time.
February 8th 2009, 07:02 PM
Dear Knowledge,
I think your approach is kind of long winded.
For this case, F=-kx=mg
it is obviously seen that g and k are constant when compare the two situation since g is 9.81m/s^2 while k=spring constant. (since using the same spring, this vqlue should be the same for both)
So from this, we can deduce that m is directly proportional to x.
use this relation would be much faster.
instead you plug in the value into the formula compute, and then reverse it. of course it is not wrong, just kind of wasting time.
While my approach/explanation may be kind of "long winded" for people who have already had advanced math/physics classes, it is obvious the OP did not follow the rationale of using a proportion,
nor did the OP understand the concept of Hooke's Law; i.e. the difference between mass and weight (hence the reason I included all of the units).
Furthermore, it is my opinion that this is not a forum to merely supply answers for a student's homework problems, but a place to further their understanding of the concept(s) and reinforce
proper methods in preparation for higher level courses.
If the OP, or anyone for that matter, wants to be successful in advanced physics, it is a good idea to develop sound habits for problem solving from the beginning - because the concepts,
equations, and methods only become more complex and involved.
February 8th 2009, 07:16 PM
it is a good idea to develop sound habits for problem solving from the beginning - because the concepts, equations, and methods only become more complex and involved.
I agree must understand the concept for problem solving. I just try to provide alternative, easier way to solve. Reduce it to the simplest form then only plug in the value. Too many value will
sometimes lead to careless, besides need to take care about significant figures and decimal places all.
February 22nd 2009, 12:51 PM
Easiest way using canadian teaching method
First easiest way to do it is to first get all your units in their easiest form in canada on the end of year exam they usually ask you use kg-m-N
500g becomes .5kg -- F=mg F=.5(9.8) = 4.9N
80cm becomes .08m
so now using the rule F=kx you need
Force in N=force constant N/m * displacement in m
4.9N = k * .08m
It is the same spring so same K value, we are now given a new displlacement
k=61.25 N/m
x=10cm = .1m
With this info we can find the Force
Now that we have the new force you can get the total weight
that is the total weight they asked you how much was added on so just take your total weight and substract your initial weight
.625-.5= .125kg which is your answer
and thats that. the way your expected to do it on governement exam in june (Wink) | {"url":"http://mathhelpforum.com/math-topics/72203-f-kx-hookes-law-problem-print.html","timestamp":"2014-04-19T10:41:14Z","content_type":null,"content_length":"22903","record_id":"<urn:uuid:75f594a0-096a-4e28-90a6-020e8c91baa2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
Knights On A 5x5 Chessboard
The knights puzzle is an beautiful puzzle. It is a single player game, played on a 5x5 chess board with several knights. The search for an optimal solution turned into a mini project for me and
some others around the world. I first saw this puzzle in an old PC game called "The 7th Guest" but it may have originated elsewhere. In that game you only had to solve the puzzle but I was more
interested in finding an optimal solution (minimum number of moves) and in enumerating all minimal solutions. A close group of friends and I first tussled with the problem for a week or so in May
2000. I was after an analytical solution but was unable to make many inroads in that direction, so we resorted to looking for software solutions as the next best thing. Surprisingly we ran into
snags there as well. I then posed the problem to the Usenet group rec.puzzles and was pleasantly surprised by the response. Presented below is a collation and summary of all the results and
discussion of this puzzle. As always, I give credit where due. If I've missed any contributers please let me know so I can rectify the oversight.
To my knowledge, a general solution to this puzzle is still very much an open question. Analysis is still thin and underdeveloped and not enough is known to extend the puzzle to arbitrary odd
dimension. If you have insights of any kind into this puzzle please email me. I will be delighted to add your contribution to this page. It would be nice to see the problem wrapped up with an
analysis along the lines of Donald Knuth's brilliant paper on Leaper Graphs (generalised knights tours on an mxn chessboard).
The following people contributed in some way to understanding this puzzle and its solution. Many of them can be found lurking somewhere around rec.puzzles.
│ Name or Alias │ Email │ Web Site │
│ Gene Wagenbreth │ genew@netcom.com │ - │
│ Hugo van der Sanden │ hv@crypt0.demon.co.uk │ http://crypt.org/hv/puzzle/index.html │
│ James Dow Allen │ jamesdowallen@yahoo.com │ http://freepages.genealogy.rootsweb.com/~jamesdow/ │
│ Justin Leck │ justin_leck@my-deja.com │ - │
│ Niels L Ellegaard │ gnalle@dirac.ruc.dk │ - │
│ Peter Ellis │ peter_ellis@optusnet.com.au │ - │
│ QSCGZ │ qscgz@aol.com │ - │
│ Rich Grise │ richgrise@vel.net │ - │
│ Tim Morrow │ tmorrow@iinet.net.au │ http://members.iinet.net.au/~tmorrow/ (you are here now) │
│ Wayne Kerr │ - │ - │
The Puzzle
Consider a configuration of 12 white and 12 black knights arranged on a 5x5 chessboard. Using only knight moves, you need to interchange the black and white knights. Graphically you need to
convert from
In the above diagrams you will have to imagine the black disks are black knights and the white disks are white knights (sadly, drawing the horsie is far beyond my artistic talent).
Now it isn't that hard to achieve this feat in say 60 or more moves. However, I want you to achieve this transformation in the minimum number of moves possible. Furthermore I want to know in
exactly how many ways a minimal solution can be found. I'll give you a big hint (more details later) that may surprise you:
There are 36 moves in a minimal solution.
The Knights Game Program
Thanks to your truely and to a good friend of mine who goes by the handle Wayne Kerr, we are able to provide a Windows 32 bit program that can be used to play the game. See if you can produce a
36 move minimal solution. The program should comfortably run on any standard Windows or NT Operating System. There are no fancy frills - just the basics. You can move by dragging and dropping or
by double clicking. Under options you can select player=computer and watch the computer play back some of the 1200 minimal solutions to this game.
I am ashamed of the source code so won't supply it at this stage, at least not until it is cleaned up significantly. There are several improvements that I would like to make to the code -
recording player moves, playback of selected sequences, undo, saving etc. that may not ever get done.
The following Knights Program is provided as is with no guarantees. I have not intentionally placed any viruses / trojans or any other malicious code into the program. You should however
thoroughly virus check all of the downloaded software. No registry keys are used. It is largely untested on several hardware / software environments and I can take no responsibility for what may
happen when you run it. The worst thing I could image it doing is failing to run. You are on your own in any event. Download KnightGame.zip (29 KB).
Most of what follows are my own analytical results about this puzzle. Some of the results were motivated by empirical results obtained by computer aided analysis and solutions provided by others.
Result 1
There are 67603900 states (board configurations) for the 5x5 knight game.
We can derive this by noting that there are 25 distinct locations for the empty square. If you then consider the 12 white pieces then you can place them on the 24 remaining squares in C(24,12)
ways, the 12 black knights can only be placed in the remaining 12 squares in 1 way. We therefore have 25*C(24,12)*1 = 67603900 distinguishable states.
By using an identical argument for the generalised knight game on a (2n-1)x(2n-1) board with 2n(n-1) white and black knights, the number of states is an astonishing, astronomical (2n-1)^2*C(4n
Taking the 7x7 board with n=4 we have 49*C(48,24) = 1580132580471900 distinguishable states. You can see how quickly the number of states grows with board size.
While the 5x5 knight game may be amenable to a computer exhaustive search, it is clear that there is no hope for exhaustively solving the larger dimension games without some extra tools and
analysis at our fingertips. It is known for the 5x5 game that all states are reachable. It is likely but not proved that this result is true in the general case. Some of the following results are
specific to the 5x5 knight game and will not generalise to higher dimension.
Result 2
Let T denote the number of moves in any solution to the knights puzzle. T is even for every solution including a minimal solution.
This is a property of knight moves. The empty square changes colour from move to move (white, black, white etc). Every second move places the empty square on the same colour as the starting
colour. The empty square must end in the middle (where it started) which can only occur after an even number of moves.
Even a small result like this is useful and helps make the problem more tractable. The next result gives us a lower bound.
Result 3
The minimal number of moves T in a solution is at least 30.
Empirical evidence shows that the correct minimum is T=36. I will demonstrate this weaker minimum through some analysis specific to the 5x5 board and then improve upon it after establishing some
other results.
Firstly, I need to introduce some notation. One way to refer to the knights and positions on the 5x5 board is to use coordinates (0,0) for the top left corner through to (4,4) in the bottom right
corner. It is often easier to translate this to the numbers 00-24 via the mapping f(x,y) = 5x + y. We can then track the knight moves by following the empty square by a sequence of numbers
a-b-c-... meaning the empty square moves from a to b to c to ... See the first diagram below. Note that white squares are even and black squares are odd and that 12 is the centre square.
Now consider the black pieces. They have to all move from top-left half to the bottom-right half of the board. Ignore the white pieces completely for now, pretend they are not on the board.
Consider the 4 positions on the board indicated by A, B, C, D in the second diagram below.
00 01 02 03 04 A x x x x
05 06 07 08 09 B x x x x
10 11 12 13 14 x x x x
15 16 17 18 19 x x x x C
20 21 22 23 24 x x x x D
Moving the black knight at A to the lower-right half-board requires a minimum of 2 moves, by one of the 6 sequences of moves in the first list of moves below.
Moving the black knight at B to the lower-right half-board requires a minimum of 2 moves, by one of the 8 sequences of moves in the second list of moves below.
Moving a black knight to C requires a minimum of 2 moves, by one of the 8 sequences of moves in the third list of moves below.
Moving a black knight to D requires a minimum of 2 moves, by one of the 6 sequences of moves in the fourth list of moves below.
00-07-04 05-02-09 01-08-19 02-13-24
00-07-14 05-02-13 01-12-19 06-13-24
00-07-18 05-12-09 03-12-19 06-17-24
00-11-08 05-12-19* 05-12-19* 10-17-24
00-11-18 05-12-23 11-08-19 16-13-24
00-11-22 05-12-21 11-22-19 20-17-24
05-16-13 15-12-19
05-16-23 15-22-19
Note that the sequence marked with * above accomplishes both B and C objectives. Thus at a minimum, 3 black pieces must make at least 2 moves and the remaining 9 pieces must make at least 1 move
in their transition to the lower-right half-board. In total at least 3*2+9*1=15 moves are required. The same argument applies to moving the 12 white knights to the upper-left half-board. Thus any
solution requires a minimum of 15+15=30 moves.
In the above argument we have ignored any move hampering restrictions due to having the 'other colours pieces' or even the same colour pieces in the way at various stages during the exodus from
initial to final state. We will need another result before we can improve our lower bound on T.
Result 4
Define W = sum(w_i) and B = sum(b_i) where w_1 - w_12 denote the number of moves made by the 12 white knights in any solution to the puzzle. The b_i's are defined similarly. W and B are both
T = B + W and T is even by Result 2 so B and W are both even or both odd. Now I'll demonstrate that neither B nor W can be odd.
If W is odd then at least one of w_i's is odd (in fact an odd number of the w_i's must be odd). Suppose x of the odd w_i's start on white squares and y of the odd w_i's start on black squares
with x+y odd. We also have 6-x even w_i's on white and 6-y even w_i's on black initially. After moving we have
□ Number on white squares = (6-x) + y = 6+y-x = 6 implying x = y
□ Number on black squares = x + (6-y) = 6+x-y = 6 implying x = y
This implies that x + y = 2x is even which is the desired contradiction. Therefore B and W are both even. A corollory to this is that any solution of the knight puzzle must contain an even number
of odd moves for white pieces and for black pieces. Further, the knights with odd moves must be evenly distributed b/w knights on white squares and knights on black squares.
It turns out that 16 moves are sufficient to transport the knights of one colour to the other half of the board. However it can only be done at the cost of extra moves by the other colour
Result 5
An improved lower bound. The number of moves in a minimal solution to the puzzle T is at least 32.
From Result 3 we know that W is at least 15 and from Result 4 we know that W is even. We therefore conclude that W is at least 16 (even number). Similarly for B. Then T = W + B is at least 32.
The following thoughts on cycles are from Hugo van der Sanden. I have been unable to utilise these ideas yet. Hugo's discussion uses computing terms because he was thinking of the problem in the
context of software implementation.
I wonder whether this becomes more tractable if you enumerate cycles that leave the gap at the centre, and compose such cycles to find solutions.
Since the 2-move cycle has no effect, no cycle shorter than 4 moves need be considered; that means no more than 9 cycles involved.
Every location must be touched by at least one cycle for it to be a solution, so most compositions can be discarded without even looking at the resulting position - represent each cycle as a
24-bit string of the locations touched, and bitwise-or the strings to check that all locations are reached.
Each cycle is fully represented by a 24-element transition vector, so checking the resulting position is fast.
The symmetries of each cycle - both the location bitstring and the transition vector - are easily calculated, so you can fix the first move of the cycle when evaluating them. You do need to be
careful to avoid repeats though: if the second and penultimate locations are the same, you should discard (by ordering) one of the possible cycles.
I think this approach might allow enumeration of the solutions efficiently in both time and memory. The only drawback is that I don't know how many cycles there are: the number of 36-cycles might
be very large. In any event, you might need to do some pruning of trivialities during their evaluation. At a quick count there seem to be 2 primitive 4-cycles and 26 6-cycles (of which one pair
is symmetrical), so the numbers may well get too large for longer cycles.
More Cycles
I now include a contribution from Niels L Ellegaard that utilises cycles. Neils doesn't quite solve the puzzle but suggests a way you might be able to think about the problem that could help if
taken further.
My guess would be examining two cyclic motions Starting from the starting point, 0 we can move 0,a-b .... a,0. Doing this will exchange blacks and whites on the fields b-p. The problem is the
original pawn that was placed on the field a. We must find some way of disposing of it in the right way, but somehow this should be possible.
Somewhere in the middle of cycle above (b,f,j or n) you should be able to switch to the following cycle 1-8:
* a f k * 2 * * * 8
g l * p e * * 1 * *
b * 0 * j * 3 7 *
m h * d o * * 5 * *
* c n i * 4 * * * 6
Performing this 8 times will switch the pawns 2-8, but again we must find a way of disposing of the original pawn of field 1. I hope this article can serve as inspiration.
Miscellaneous Observations
I'll finish this analysis section with some miscellaneous observations, speculations and open questions that I haven't yet formalised at this stage. If you have any thoughts please let me know.
1. In any length 32 solution the 12 white (and 12 black) pieces will be of the following types:
1. 4 pieces moving 2 and 8 moving 1, which we can denote by (1^8,2^4)
2. 1 piece moving 3, 2 moving 2, 9 moving 1, i.e. (1^9,2^2,3^1)
No other combinations are possible. It may be possible to determine more by considering the C(12,4) permutations of (1^8,2^4) and the C(12,1)*C(13,2) permutations of (1^9,2^2,3^1).
2. If it can be shown analytically that no solutions of order 32 exist, then from the above results it is clear that one of the conditions W at least 16 or B at least 16 must change. i.e. W at
least 18 or B at least 18 (but not both).
3. Since empirically we know that T=36 then all minimal solutions must be of the form:
1. W=16, B=20 - asymmetric solutions
2. W=18, B=18 - symmetric & asymmetric solutions
3. W=20, B=16 - asymmetric solutions
By symmetry there are as many solutions of type (a) as of type (c). Thus the minimal solutions divide into two classes.
4. What can we say about the empty square? How often does it have to visit the centre? Justin Leck worked out empirically that the answer is 4. Does the empty square visit both sides of the
board equally often (probably not).
5. Do minimal solutions have the property that the board state at each successive move is either as good or better than the previous state under the criteria of "the number of knights in the
correct half of the board"?
Software To The Rescue
An Excellent Undergraduate Programming Problem
The 5x5 knights puzzle was ultimately solved in software. At least half a dozen people developed software solutions of varying degrees of sophistocation, from my humble code that takes a couple
of days to run to completion to some highly efficient code written by others that can find solutions in under a minute. Some people were generous enough to provide url's to their code (links
provided below). I'm not sure how long those links will last.
The 5x5 knights puzzle implemented in software is not a trivial task. Most first attempts invariably fail due to limited memory or ability to run to completion in reasonable time. Several methods
have been used to search out the state space for the problem ranging from depth first, breadth first, depth first with iterative deepening, hashing.
An exhaustive search without removing duplicates is doomed to failure within 10-15 moves due to space restrictions. Removing duplicates without an O(1) algorithm like hashing is hugely
inefficient and you are unlikely to get past depth 25 in a breadth first search before the day is out.
Due to memory restrictions most attempts at solution require some form of packing the board state into a bit string in some way. Saving the data out to disk was considered and used by some as a
way of managing the large number of states that accumulated during the search.
In short I believe the 5x5 knights puzzle implemented in software to be an ideal undergraduate computer science project, especially in a subject like Algorithms and Data Structures.
The contributions below are roughly chronological. All software is written in C.
My Software Solution
After a week of coding and several aborted searches I was able to come up with a 42 move solution. I had sandwiched the number of moves in a minimal solution to an even number between 34 and 42.
My program was taking over a day to run.
I provide a link to my source code but you would be well advised to look the other links to source code instead as mine is a shambles. My source is a mixture of solution attempts within the same
source code - some commented out, others not. It was compiled in Visual C++ 5.0 and the current size of the state array requires at least 128Mb ram and 256Mb swap space to run. I honestly don't
know in what state I left it. I provide it only because it may furnish some ideas.
You can view my code here or instead, download it here.
Hugo's Software Solution
It was about this time when Hugo stepped in and innocently announced that he had discovered a 36 move solution but hadn't been able to record it yet. His second program produced the solution. I
believe this achievement to be the first of three major breakthroughs in the solution of this puzzle. We all now knew the answer and could start tailoring our programs to look for 36 move
solutions. Hugo also provided an excellent, informative minimum move table of how many states were reachable after n moves which I will discuss a little later. A few days later I produced the
first asymmetric 36 move minimal solution after being armed with some information on how to beef up my program.
Hugo van der Sanden's Contributions (in his words over several posts)
The code I've written for this currently peaks at around 80MB, expected to come down to 72MB in the next iteration. This consists of a packed array of 25*C(24,12) 5-bit locations, representing
the location of the gap at the previous move (40.3MB), an array of ints listing the positions still to be tracked from (which peaks around 28MB) and sundries. Unfortunately I only have 64MB here,
so the process spends most of its time swapping - it takes well over an hour to run, instead of the 20 minutes or so that I think it would otherwise take.
The program claims a shortest solution in 36 moves, but I haven't yet managed to debug the process of printing the results, so I can't validate the claim.
Bringing the memory requirements down should speed the debugging process; the next step is to store 4 bits instead of 5, by taking advantage of the fact that there is at most a choice of 8
locations we can have moved from to any position, plus the marker value for positions never reached. (It is even possible to reduce the storage requirement to 3 bits, and add some cleverness on
backtracking to decide on which choice to go for in the odd pair of cases.)
Here is the solution: label the locations 0 to 24, starting from left to right across the top row. Then the location of the gap in successive positions is:
The current code, now 298 lines, including some explanatory notes, available at http://crypt.org/hv/puzzle/kswap.c
Additional notes: I compiled the code with 'gcc -g -O6 -o ksw4bit kswap.c'; every indent is a tab, which I usually set to 4 spaces.
After processing all positions for move n, the number of entries in the list represents the number of positions that can be reached in n moves _and no fewer_. Here is the number of positions
reachable after n moves:
n #states Cumulative
39 62 67603900 = 25*C(24,12)
I was very impressed with Hugo's reachable state table. It shows that there are some states that aren't reachable until 39 moves. It also shows that the most states added occurs at depth 24.
Gene Wagenbreth's Break Through
What follows is what in my opinion is the second most important break through in solving this puzzle. This observation is a huge simplification and is big time and space saver for any software
solution. I believe Gene Wagenbreth first offered the raw idea and it was first implemented in some brilliant code by James Dow Allen that utilised dynamic hash tables and other speedups. I
present a link to James' website below where he presents his code and implementation notes. My first thought on the '1/2 moves trick' was that it would be a shortcut for finding some minimal
solutions to the puzzle but would miss some of the asymmetric solutions. In fact this gem of an idea can be extended and utilised to find all solutions.
I have been following this thread with some interest. Unfortunately I have not had the spare time to code a program yet, so you guys beat me to it.
One question. The final position is symetrical with respect to the initial position. That means that in a 36 move solution, after the 18th move you must reach a position that is 18 moves from the
finish. The finish position is just the start position with W and B switched. So if you take all the moves reachable in 18 moves and compare them with the same positions with W and B switched,
you should get a match. So you only have to do 18 moves to find a 36 move solution.
Does this make sense and would it speed things up?
James Dow Allen's Software Solution
I also wrote a program to solve this problem, and posted it on my website. My program discovers the 36-move solution on my 32-megabyte PC in just 15 seconds!! I've posted the source code and a
discussion at http://freepages.genealogy.rootsweb.com/~jamesdow/progex1.htm
My interpretation of the 1/2 moves trick
I've given some thought to the trick of only considering half the moves and its implications.
Generate all states from initial position s out to depth 18. Using Hugo's email of cumulative totals this means you will generate 7262393 states in your minimal move array. Call this set of
states S. Let the subset of states at depth 18 be denoted S18. There are 2645274 states in S18 (again from Hugo's totals). S\S18 contains 4617119 states.
Now at the same time, generate all states from the final position f out to depth 18. By symmetry you will generate 7262393 states when you do this. Call this set of states F. Let the subset of
states at depth 18 be denoted F18. There are 2645274 states in F18. F\F18 contains 4617119 states.
Now the only solutions will be obtained from connecting S18 and F18 i.e. S18 intersect F18. Now the interesting result is that there is at most 4617119 + 4617119 + 2645274 = 11879512 states that
could be part of any minimal solution to the puzzle. In fact the number will be much, much less but I can't simplify it any further at this level of analysis. i.e. 82.4% of the states could never
be part of any minimal solution. The memory requirements for the problem come down considerably don't they?
Justin Leck's Software Solution
At about the same time Justin Leck produced the third most important breakthrough. He wrote an awesome program and was able to completely enumerate all minimal solutions. Justin discovered all
1200 minimal solutions to the puzzle. Justin then went on to establish several properties of the solutions that helped give a deeper insight into the puzzle. I too have knocked together a solver
for this problem and confirm the minimum to be 36 moves. It won't break any speed records, but it seems to do the job (took about 2 hours to find the solutions). The program also only requires
about 68Mb of RAM. Assuming program correctness, the results were: 1200 solutions with 36 moves. Here's the first (squares numbered 0-24, 0 being top right, 24 bottom left):
(09->12) (02->09) (13->02) (06->13) (17->06) (24->17) (13->24) (16->13) (05->16)
(12->05) (01->12) (08->01) (11->08) (22->11) (19->22) (12->19) (03->12) (14->03)
(07->14) (00->07) (11->00) (20->11) (17->20) (10->17) (21->10) (12->21) (15->12)
(22->15) (11->22) (18->11) (07->18) (04->07) (13->04) (16->13) (23->16) (12->23)
Notes: After 36 moves you can reach 67599922 out of the 67603900 possible states.
First moves (5->12), (19->12) can't yield a solution in 36 moves. The other 6 can.
[QSCGZ] what's Solns ? How does your program work?
Solns is the number of solutions that each program displayed. I assumed that James's program produced half a solution, because it outputs the state of the board at move 18 (That's what the
asterisk was for). I now realise that he has in fact considered all 36 moves. Displaying the full solution requires his program to be run 34 times, with the execution time differing, depending on
how many moves the program has to consider. Even so, his is still the quickest solver.
I took the easy approach in writing a solver and used two stages. There are 67603900 possible board states, so I have an array of this this size. The first stage involves filling in the array
with the minimum number of moves to reach a specific state from the initial board state. If I understand Hugo's definitions correctly, I used a breadth first approach to fill in the array.
Stage two involves using a depth first search to walk through the minimum move array, starting from the end position to the start position. While searching through the array, the various moves
are collected and the set of moves is displayed when the intial position is found. It also backtracks so that all possible solutions are displayed.
I have implemented these two stages as two different programs, with the first stage dumping a 67Mb file to the hard disk. The second stage is fairly quick (about 1 minute), and has the advantage
that it can dump the set of minimum move solutions from _any_ ending position to the starting position, without the need to perform the first stage again.
The 16Mb space reduction comes from the fact that I use a byte to represent the move number. A 6 bit number could be used instead, so saving (67603900 / 4) bytes, at the expense of greater
complexity and slower execution speed. Writing the program in assembler would probably halve the execution time as well.
Not writing out the state array to the hard disk will save about a minute. The programs do an awful lot of conversions from a state number (long word) to a 5x5 array representing the board and
vice-versa. The various permissible moves for each gap position could also be precalculated. Implementing these would probably halve the execution time.
I've been waiting for someone else to enumerate all the solutions and verify my findings.
The 1200 solutions my program has found should be all the solutions. I've thought about posting 600 solutions (the 600 can be recreated by playing the other solutions backwards), but the size is
still about 59Kb. Here are some stats on the solutions:
1200 solutions in total, 56 of which are totally symmetrical.
Number of times the gap is on a specific square:
Solns Once Twice Thrice Fourfold
1 1 * 1 1
1 1 2 * 1
1 * 2 1 1
1 1 * 1 1
* = These squares either have the gap once or twice
Number of moves for white / black knights
Solns White Black
Number of times a specific knight is moved:
Solns Once Twice Thrice
864 12 12 -
12 knights move once, 12 knights move twice (Here's a symmetrical one. There
are also many non-symmetrical solutions)
13 knights move once, 10 knights move twice, 1 knight moves three times
14 knights move once, 8 knights move twice, 2 knights move three times
14:8:2 (Not symmetrical)
The above solutions show the position of the gap with the board numbered 1
(top left) to 25 (bottom right)
You can view Justin's code, or instead, download Justin's code. I'm grateful to Justin for allowing me to host his work on my site.
Here are Justin's 1200 minimal solutions. Note that the viewing page is 140Kb of HTML while the zipped up download is only 12Kb. View the 1200 solutions or download the 1200 solutions.
Wayne Kerr's Software Solution
Wayne Kerr was able to duplicate Justin's results and produce the same 1200 solutions using an independently developed program. I am fortunate to be able to present the source code for Wayne's
excellent software solution to this puzzle. As with all the source code on this page, it is a "use at your own risk" download. I include Wayne's email on how to use his program.
I can't recall for certain but I think these where the events:
1. Run knights.cpp with one set of START1 and START2 defines and fopen out set to outputa.txt
2. Run knights.cpp with the other set of START1 and START2 defines and fopen out set to outputb.txt
3. Run compare.cpp which will create outputc.txt
4. Run knightsa.cpp with one set of START1 and START2 defines and fopen out set to outputd.txt
5. Run knightsa.cpp with the other set of START1 and START2 defines and fopen out set to outpute.txt
6. Run match.cpp which will create outputf.txt - the set of all 1200 solutions.
Note the two sets of START1 and START2 defines are the starting board state and the ending board state. START1 is the bit state of the board pieces whilst STATE2 is the space position.
Outputa.txt and outputb.txt are all the enumerated board states at depth 18. The files will be about 25Mb each. Outputc.txt are the common enumerated board states between outputa.txt and
outputb.txt. Outputd.txt and outpute.txt are the two half list of all the moves (I think). Outputf.txt is the final list of all the moves. I suggest you might want to compile and run the above
steps to make sure that the above steps are correct. In particular you will probably want to change my hard coded directory paths.
You can view Wayne's code or instead, download Wayne's code.
QSCGZ's Contribution
QSCGZ contributed much to the discussion on this problem. What follows are some of his thoughts and insights into the puzzle. QSCGZ also posed an interesting related puzzle that I haven't had a
chance to look at yet.
(It is even possible to reduce the storage requirement to 3 bits, and add some cleverness on backtracking to decide on which choice to go for in the odd pair of cases.)
You could always calculate all possibilies of -say- 5 continous moves and mark the reached positions with the new iteration number. Whenever during this process , you reach a position marked with
an old iteration number, you can backtrack immediately. Or when you find an unexact (e.g. 20-24) iteration number, you can just calculate the exact one (e.g.23) by doing some moves on the fly. So
you only need 36/5 = 3 bits for the iteration-numbers but at the cost of speed.
Follow-Up-Puzzles: If we play the position as in chess, who wins? Does white end up with more knights than black? Can he capture all black knights? How many moves are required? What about using
other start-positions, other (fairy) chess-pieces?
Rich Grise's Contribution
Rich Grise was generous enough to offer me some tips on how to save the state of my program to disk that could be retrieved later. His ideas were based on a program he wrote to solve Hi-Q.
Well, actually, it's been awhile since I've done any actual programming, but I'd think, depending on how fast it goes, every hundred or 1000 iterations, check the keyboard. If there's a keystroke
waiting, check to see if it's the "stop and save" command, then if so, just dump the whole array to a file. Then, depending on OS, load up the next run from the command line, or (in for example
VB or probably VC) give a menu option to load the latest pattern.
In my Hi-Q program, I knew that at any level there are only a certain number of moves, and I wrote it recursive / reentrant. So it wasn't hard to pick up where it left off - at any given level,
there are a finite number of moves, and all the higher-level moves are in the array. So you'd go to the next move, record it, call the iterator, it'd check all its moves, calling the iterator
each time, and when you get to one peg, announce a win. Then, of course, return back up the iteration chain to try the rest of the possibilities. And of course, each iteration or level has its
own stack, where it keeps a record of its previous moves. Local variables, actually, but they're stored in the stack frame, at least in 8086 C.
The knight move thing would be a challenge because one would have to figure out a convenient way to order possible moves. Almost makes me wish I had some ambition. Maybe something like,
iteration() {
for each piece,
check available moves.
if move is available,
do the move.
check next iteration
end if
check if done, etc
end for
if (iteration count mod 1000 == 0)
print progress report
check for keystrokes;
if keystroke is waiting,
if it's the "save and exit" key, then
save the whole array
(presumably you'd need a static variable somewhere for this one)
exit the program;
Wrapping It Up
Some people may like to read the posts on this puzzle in their entirety. I provide a zip file of the entire Usenet thread of 44 posts here. The posts are in *.nws format viewable in most news
readers and have the numbers 01-44 at the end of the filenames so you can read them in order.
Alternatively you can browse your favourite news archive site like deja-news and search for the thread "Swapping Knights Puzzle On a 5X5 Chess Board". Enjoy! | {"url":"http://members.iinet.net.au/~tmorrow/mathematics/knights/knights.html","timestamp":"2014-04-16T07:13:40Z","content_type":null,"content_length":"42852","record_id":"<urn:uuid:103000e4-57a5-483a-98e1-1abb691e6d45>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
gBoost: A mathematical programming approach to graph classification and regression
Hiroto Saigo, Sebastian Nowozin, Tadashi Kadowaki, Taku Kudo and Koji Tsuda
Machine Learning Volume 75, , 2009.
Graph mining methods enumerate frequently appearing subgraph patterns, which can be used as features for subsequent classification or regression. However, frequent pat- terns are not necessarily
informative for the given learning problem. We propose a mathe- matical programming boosting method (gBoost) that progressively collects informative pat- terns. Compared to AdaBoost, gBoost can build
the prediction rule with fewer iterations. To apply the boosting method to graph data, a branch-and-bound pattern search algorithm is developed based on the DFS code tree. The constructed search
space is reused in later iterations to minimize the computation time. Our method can learn more efficiently than the simpler method based on frequent substructure mining, because the output labels
are used as an extra information source for pruning the search space. Furthermore, by engineering the mathematical program, a wide range of machine learning problems can be solved without modifying
the pattern search algorithm.
PDF - Requires Adobe Acrobat Reader or other PDF viewer.
EPrint Type: Article
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 4401
Deposited By: Koji Tsuda
Deposited On: 13 March 2009 | {"url":"http://eprints.pascal-network.org/archive/00004401/","timestamp":"2014-04-21T15:09:56Z","content_type":null,"content_length":"7728","record_id":"<urn:uuid:a37b0258-f00a-4458-9bfd-0a912645eff1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
The HELM project is meant to integrate the current tools for the automation of formal reasoning and the mechanization of mathematics (proof assistants and logical frameworks) with the most recent
technologies for the development of web applications and electronic publishing, eventually passing through the Extensible Markup Language. The final aim is the development of a suitable technology
for the creation and maintenance of a virtual, distributed, hypertextual library of formal mathematical knowledge.
In contrast with current digital libraries, where the real document is usually considered as a single ``text'' element, preventing any automatic elaboration of its content, as well as any form of
internal navigation, our purpose is that of modeling the inner structure of mathematical documents, at different levels of detail.
The broad goal of the project goes far beyond the trivial suggestion to adopt XML as a neutral specification language for the ``compiled'' versions of the libraries, or even the observation that in
this way we could take advantage of a lot of functionalities on XML-documents already offered by standard commercial tools.
First of all, having a common, application independent, meta-language for mathematical proofs, similar software tools could be applied to different logical dialects, regardless of their concrete
nature. This would be especially relevant for all those operations like searching, retrieving, displaying or authoring (just to mention a few of them) that are largely independent from the specific
logical system.
Moreover, if having a common representation layer is not the ultimate solution to all inter-operability problems between different applications, it is however a first and essential step in this
Finally, this ``standardization'' process naturally leads to a substantial simplification and re-organization of the current, ``monolithic'' architecture of logical frameworks. All the many different
and often loosely connected functionalities of these complex programs (proof checking, proof editing, proof displaying, search and consulting, program extraction, and so on) could be clearly split in
more or less autonomous tasks, possibly (and hopefully!) developed by different teams, in totally different languages.
This is the new, content-centric architectural design of future systems. | {"url":"http://helm.cs.unibo.it/","timestamp":"2014-04-20T10:46:29Z","content_type":null,"content_length":"8628","record_id":"<urn:uuid:b168bf93-f11d-4eec-bec9-783cc1b2fd2d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Square-1 shape probability distribution
09-04-2008 #11
Super-Duper Moderator
Join Date
Jul 2007
Where the rolling foothills rise
WCA Profile
What version of mathematica are you using, Lucas? I have access to 5 on the physics computers here, if you are using that... and will try this tomorrow. Otherwise, I'll have to find a way to get
an earlier version...
Edit: You can use Diag^n, as it is a diagonal matrix; it is a special case, which is why it is so useful.
Short example:
N^x = (P.D.P^-1)^x = P.D.P^-1.P.D.P.... = P.D^x.P^-1
Try Eigenvalues[N[]], and see if any of those are over 1... they should all be less than or = 1.
Yeah, I know about diagonalizing, but maybe I don't know enough about other eigenstuff. Anyhow, I think the issue is with Mathematica handling stuff, I'll try it again when I get version 6.
So, through dinner, I set my computer to compute the explicit, exact eigenvalues of the transition matrix. The results were so surprising, I actually laughed out loud at their beauty:
Of the 170 eigenvalues, exactly 90 are non-zero. That is certainly not a coincidence; there are 90 shapes, and hence 90 unique rows. (From minors and determinants, I can imagine that you get just
the right amount of zeros. but it's still pretty, and some time I'd like to fully understand why.)
Anyhow, the 90 non-zero eigenvalues are of three forms:
□ 1 (The 1 root of 1-x)
□ The 64 roots of - 4991442219 + 386948115252*x + 138762924658242*x^2 - 3103222624916364*x^3 - 877640411309612746*x^4 + 6767666047147902260*x^5 + 2012337578608379698422*x^6 -
8958764174373753804464*x^7 - 2105450969798079776924812*x^8 + 11046600222902203991043312*x^9 + 1123406093781088176157756896*x^10 - 8114779285022492769604795424*x^11 -
325472049644439264188809738432*x^12 + 2962546562996861776481954741760*x^13 + 56287388574936353280108941504768*x^14 - 629949208560303852773245517141504*x^15 -
6105099366584694343157749747552256*x^16 + 86874786230163907381580006692296704*x^17 + 406640998126424132369607623831846912*x^18 - 8273097635823613171991383746279940096*x^19 -
12847215092645959516368232004260052992*x^20 + 563542116540418623958067303673961512960*x^21 - 380249727800812172360337111385392087040*x^22 - 27876394040223235000410822110796320866304*x^23 +
70200985683043152366609324564260935958528*x^24 + 995061174735599595418926716008043747737600*x^25 - 4377190735910344574289281403129530741161984*x^26 -
24483749753436884219275245277429039028502528*x^27 + 172693480479595766346198778149152449118601216*x^28 + 344268199358502700321288117865500491022073856*x^29 -
4765625577836958680705390357889539456775487488*x^3 0 + 772858710665023800732181764002139803146518528*x^31 + 93400462329952268786337346115591965649619910656*x^ 32 -
173918077700241084337687739506139579202876211200*x ^33 - 1244927272170148943873781598261313236445492674560* x^34 + 4715876999443357796546772004551277481760550551552* x^35 +
9204457647103604719171194562707870353618733367296* x^36 - 74472880131178952274776792185662826546928673619968 *x^37 + 17443350194669512509483766280210174159010360459264 *x^38 +
75191391004923599087061417545533284299211717948211 2*x^39 - 13927791249227776078240490903762146831911538246287 36*x^40 - 43280920416000808833869800194972791652726169833308 16*x^41 +
18110807337525247976881146547787829180504891067465 728*x^42 + 26446806236562635365428496318374472606945724250193 92*x^43 - 12495775864656449852395485032743618177001077355210 3424*x^44 +
18148554882138983236427680894317657290624795908295 8848*x^45 + 40633112472130591883866628308152868033344685251074 4576*x^46 - 15481266283450907844717709600230446897185206707756 72832*x^47 +
54971332503775711093793716428432978484848488873145 1392*x^48 + 55522495833400005878653420117344785319027260232082 39104*x^49 - 10580714260899344013132135083176015760588301879893 557248*x^50 -
25105401045309405161267504284860436825811213099791 81056*x^51 + 35729814707763782643150469325553615343782749037989 462016*x^52 - 47192725438731643771622400943584186952215159805015 228416*x^53
- 14671327554181472737539318334576376054183452695928 504320*x^54 + 12167762673059436924667104845242709903144850159253 1140608*x^55 - 15352842715520548924920201552945956831797069825261
4369280*x^56 + 31072468697838310383691091990364643611438025213753 688064*x^57 + 16618470174861176831016701006962041571324603760502 8462592*x^58 -
27899649455767723350693763788559080462889847255743 2274944*x^59 + 24487369234780003959879339853438500289437687257896 0588800*x^60 - 13677849536791490721479177714902447832424297082721
0883072*x^61 + 49139518222010279228071785788096857437263861359797 338112*x^62 - 10462695095295681452359646651435512209985692421953 945600*x^63 +
10111597944446833081475094750380629249919058448062 87360*x^64
□ The 25 roots of -41864 + 3136179*x + 386613206*x^2 - 16868236200*x^3 - 402432346544*x^4 + 21212854848768*x^5 - 58429160850880*x^6 - 5476338956938432*x^7 + 47709591633416832*x^8 +
458983451386437120*x^9 - 6981888874709569536*x^10 - 1163160407514488832*x^11 + 392768286252547276800*x^12 - 1520451906669906886656*x^13 - 7565285380895954436096*x^14 +
68553662577750099099648*x^15 - 78233310032907038883840*x^16 - 913347325546567116521472*x^17 + 4118199373346723755720704*x^18 - 3435110820984016104062976*x^19 - 24441765912940143332818944*x^20
+ 99523299112333703179665408*x^21 - 183592589424988418224422912*x^22 + 191248395143239778961457152*x^23 - 108940123945387453071753216*x^24 + 26498949067796948044480512*x^25
Okay, someone explain that.
(Note: This makes 90 roots: 1^2+5^2+8^2. Why squares?)
There is a lot of fascinating math about Square-1.
There's those equations, and I have no idea about the coefficients yet.
And I still don't know why the 613 applies to the eventual distribution. Maybe there's something to do with shape symmetries?
garron.us | cubing.net | twisty.js | ACube.js | Mark 2 | Regs | Show people your algs: alg.cubing.net
Okay, I think I can explain the probability distribution. Apparently it's mostly what I suspected.
Each shape is weighted (probability/3678) by the its degree in the full graph divided by its AUF/ADF symmetries.
So, it perfectly corresponds to the 3678, and Bruce's explanation is half of it.
Consider each shape the Square-1 can have, such that it's currently /-twistable (there's a crack going around the entire middle, and the 16 edge and corner pieces are essentially four
half-sides). Each of these 3678 shapes has exactly the same probability. (If you want to consider the middle two pieces, divide the probability in half for its two possible states. But really, we
can ignore the middle for most of this.)
This distribution is stable. Consider 3678 Square-1s, each in one of those shapes. The distribution of the "170" shapes is given by the probabilities I found.
Now, imagine giving each of these a / twist. The 3678 shapes will be permuted randomly (2-cycles), but each of them is still represented and each of the "170" shapes still exist with the same
distribution. Performing a random AUF&ADF of each does not change the distribution, as each AUF&ADF of a shape are represented equally already, and will go to each other with equal probability.
With a little insight, it's intuitive that the distribution is stable, and it's not too hard believe that it's unique and all random walks will approximate it.
So, I think the "natural" shape distribution is resolved. But thanks to cuBerBruce and JCBM, without whom I probably wouldn't have found it, and qqwref, who specifically helped me think through
the final "solution."
So all that only (yeah right, "only") leaves me to wonder now why the 90-degree characteristic polynomial factors into 3 polynomials with 1, 25, an 64 roots. (Which root corresponds to which
shape? Does that even make sense to say?)
garron.us | cubing.net | twisty.js | ACube.js | Mark 2 | Regs | Show people your algs: alg.cubing.net
I'm having troubles getting it to diagonalize as well. It looks like it is just due to small decimal errors in mathematica though; P.Pinv isnt even giving me I, but some other matrix filled with
small floating point errors. The code I posted earlier at least works for the distribution problem Stefan posted in the WCA forum, so I don't think diagonalizing like this is a problem; either
something in the initialization is causing it to calculate approximately instead of exactly, or Mathematica just doesn't want to do it (I'm on a cruddy computer right now.. it took a few minutes
just to open mathematica :/). I wish the other lab were open now; but I give up and will accept approximations til I can get in there.
The probability distribution is quite interesting. I was able to generate that polynomial (well, more or less... its giving me an imaginary part right now too and won't let me Re[]). It therefore
also is having trouble factoring. As to why it ends up as 3 polynomials like that, I have no idea. It is certainly at least possible to match the shapes up with eigenvalues/roots, but as to if
certain shapes fall into a certain polynomial, I don't know if a pattern will emerge.
Cool visual implementation I found (not necessarily what we are looking for, though); but something like this could be done for probabilities of shape nodes: http://demonstrations.wolfram.com/
Also cool (of course!): http://demonstrations.wolfram.com/RubiksCube/
And again:
Got it working in 6; it was a stupid bug... Rotate[] is already reserved (defined for some graphics thing) and I hadn't replaced all instances of it.
Package to import is just <<'Combanitorica'... I'm not sure if the correction thing replaced something else too, though. Anyhow, it looks like its working.
Yet again:
There we go. I like my computer
After about an hour, I'm pausing this thing. I really wish there were some way to monitor the evaluation's progress, but it appears not. I almost wonder if I could do this by hand faster, since I
already know the eigenvalues. Will let it keep running when I go to bed.
And finally:
After it looks like 11 hours, I'm killing this. I don't know why its taking so long... all I did was Eigenvectors[adj].
Last edited by JBCM627; 09-06-2008 at 08:47 AM.
lucas you should start speedcubing with the square-1 again
unless you have been
So, we finally covered markov chains in class, after which I hit my head, and went back to Lucas's notebook.
Finding exact values is a bit easier than I initially thought - you only need to consider the eigenvalue 1 (which is now obvious to me as this is the only eigenvalue that will remain nonzero in
the limit when you raise the eigenvalues to an infinite power). This also serves to further clarify that the approximate numbers are correct:
p = NullSpace[IdentityMatrix[170] - Transpose[adj]]
This gives you relative values. So to find the exact probabilities, you will need to multiply by 1/S, where S is the Sum of your elements. Not being too familiar with mathematica, I did this in a
sort of roundabout way:
f = 1/Tr[p*IdentityMatrix[170]]
So, your probabilities are simply f*p.
{4/1839, 4/1839, 4/1839, 4/1839, 5/3678, 6/613, 6/613, 6/613, 8/613, \
8/613, 8/613, 6/613, 6/613, 6/613, 2/613, 4/613, 4/613, 4/613, \
16/1839, 16/1839, 16/1839, 4/613, 4/613, 4/613, 4/1839, 2/613, 2/613, \
2/613, 8/1839, 8/1839, 8/1839, 2/613, 2/613, 2/613, 2/1839, 6/613, \
4/613, 4/613, 4/613, 6/613, 6/613, 4/613, 4/613, 6/613, 2/613, 4/613, \
8/1839, 8/1839, 8/1839, 4/613, 4/613, 8/1839, 8/1839, 4/613, 4/1839, \
4/613, 8/1839, 8/1839, 8/1839, 4/613, 4/613, 8/1839, 8/1839, 4/613, \
4/1839, 4/613, 8/1839, 8/1839, 8/1839, 4/613, 4/613, 8/1839, 8/1839, \
4/613, 4/1839, 6/613, 4/613, 4/613, 4/613, 6/613, 6/613, 4/613, \
4/613, 6/613, 2/613, 6/613, 4/613, 4/613, 4/613, 6/613, 6/613, 4/613, \
4/613, 6/613, 2/613, 4/613, 8/1839, 8/1839, 8/1839, 4/613, 4/613, \
8/1839, 8/1839, 4/613, 4/1839, 4/613, 8/1839, 8/1839, 8/1839, 4/613, \
4/613, 8/1839, 8/1839, 4/613, 4/1839, 6/613, 4/613, 4/613, 4/613, \
6/613, 6/613, 4/613, 4/613, 6/613, 2/613, 2/613, 4/1839, 4/1839, \
4/1839, 2/613, 2/613, 4/1839, 4/1839, 2/613, 2/1839, 6/613, 4/613, \
2/613, 6/613, 4/613, 2/613, 6/613, 4/613, 2/613, 8/613, 16/1839, \
8/1839, 8/613, 16/1839, 8/1839, 8/613, 16/1839, 8/1839, 6/613, 4/613, \
2/613, 6/613, 4/613, 2/613, 6/613, 4/613, 2/613, 2/613, 4/1839, \
2/1839, 4/1839, 4/1839, 4/1839, 4/1839, 5/3678}
Digging this out for clarification requests...
Do I understand this thread correctly, that this is achieved by randomly picking a position from all with equal probability? So could one for example randomly permute the 16 pieces and then put
them in that order clockwise from the top front gap, then clockwise from the bottom front gap (just making sure it's actually two halves and starting over if not)? Would that represent the
many-random-moves idea?
Do the results change if we use a more 3x3x3-like definition like this?
Random sequence of moves of U, R and D without trivial cancellations.
In other words, don't treat U and D differently from R in the definition. Does this end up being the same definition as yours or having the same result as yours? And if not, which definition is
How does the current WCA scrambler work? Looks like random moves, but I don't understand the code and how it chooses the turns (one of our two definitions?).
Also: The WCA-scrambled state is always /-turnable. But should it be? I find that slightly unnatural.
the number of shapes if we consider rotations different (e.g. a square counts as 3 because it has three possible orientations)
Why 3? Shouldn't that be 2?
Ugh. I should have searched for a thread like this when I was working on my square-1 BLD method. I was thinking it would be nice to have probabilities of shapes so I could learn them in order of
likelihood of getting them, so I could be more likely to get the ones I had already memorized. But I guess I'm glad now that I didn't, since otherwise I might not have been as motivated to learn
them all. I guess I didn't remember this thread because it wasn't something that interested me that much back then.
Why not 4? For my memorization purposes, a square has 4 possible orientations. It makes getting to square consistently a little tricky because I need to make sure I always orient it the same from
the previous position. (I solved it by always using minimal movement - zero or 30 degrees.) Maybe I don't understand what "orientation" means in this context.
(And by the way, I hate the lack of ability to automatically get quote trees with the new site for cases like this. Stefan's "Why 3? Shouldn't that be 2?" quote is completely meaningless without
the preceding quote.)
My square-1 BLD method: http://skarrie.se/square1blind/
Do I understand this thread correctly, that this is achieved by randomly picking a position from all with equal probability? So could one for example randomly permute the 16 pieces and then put
them in that order clockwise from the top front gap, then clockwise from the bottom front gap (just making sure it's actually two halves and starting over if not)? Would that represent the
many-random-moves idea?
I would like to do this, but I don't know a reasonably efficient solving algorithm, except for Jaap's program which uses many MB of tables and doesn't have any graphical interface. Is there one?
Could someone make one? Even a program which uses many MB of tables could work fine if it could be prettied up in CubeExplorer fashion, because we already use one external program to make
scrambles and it wouldn't really be too much to ask to have another.
I don't know about results, but there is a fundamental difference between U/D and R in the Square-1 (as it is a dihedral puzzle, not a symmetrical one where all layers work equally) so it would
be inappropriate to treat the layers equally when scrambling. As for "better", well, see above - we should be trying to move towards a random-state scrambler.
I don't find this unnatural. You wouldn't provide a 3x3 scramble with the top layer turned 45 degrees and say that it's fine since two layers can be turned. Even though the Square-1 is bandaged,
I think the same reasoning makes sense: we can always use positions that allow all layers to be turned, so we should, because other positions are between the kind of real states that we would be
able to see during a non-canceling move sequence.
Computer cube PB averages of 12: [Clock: 5.72] [Pyraminx: 3.33] [Megaminx: 49.52]
[2x2: 2.66] [3x3: 8.45] [4x4: 29.06] [5x5: 52.69] [6x6: 1:34.78] [7x7: 2:20.34]
There's one here now:
Don't know how it works and how much resources it takes, though.
Why? For example, what is wrong with my above definition which does treat them equally?
I now think it does differ, at least compared to the WCA scrambler. I just tested the WCA scrambler a bit, did this three times:
- Generate 10 scrambles of 999 moves to get a lot of data
- Remove (0,0) from the ends cause I don't want to count that
- Count (0,y), (x,0) and (x,y)
The results:
First attempt: 1211, 1207, 4138
Second attempt: 1184, 1159, 4113
Third attempt: 1212, 1166, 4124
So (U,D) turns with U or D being 0 consistently occured significantly more than half the time.
Now consider my scramble definition: random non-canceling sequence in <U,R,D>. It can still be written in the current WCA notation, of course. Imagine we just had an R turn. Next must be a U or D
turn, let's say it's D. Then next must be a U or R turn. I see two options to weigh them:
- Don't weigh them, they have equal probability. Half the time it's the R and we get a (0,D) or (U,0) turn. So the (U,D) turns with U or D being 0 should occur half the time.
- Weigh them according to how many possibilities each layer has. R only has one, U probably has several. So the (U,D) turns with U or D being 0 should occur significantly *less* than half the
Since the WCA scrambler appears to have them significantly *more* than half the time, not half the time or significantly *less* than half the time, we seem to differ.
Absolutely. But my question is whether my scrambling-by-turning definition results in a different probability distribution, affecting how the random-state scrambler picks the random state.
Imagine someone who's not already biased by knowing how to use the puzzle. In other words, a non-cuber. I'm quite sure they align the 3x3x3 properly, but not necessarily the square-1! They might
very well end up with a U turn only resulting in a UF gap but not a UB gap.
Last edited by Stefan; 09-20-2010 at 11:43 AM.
There's one here now:
Don't know how it works and how much resources it takes, though.
Neat. But it's php based so there's no way to check whether it works (or how well it works) without hacking. Maybe he has some random-state code feeding into Jaap's solver and it's all run on a
server somewhere.
I now think it does differ, at least compared to the WCA scrambler. I just tested the WCA scrambler a bit, did this three times:
- Generate 10 scrambles of 999 moves to get a lot of data
- Remove (0,0) from the ends cause I don't want to count that
- Count (0,y), (x,0) and (x,y)
The results:
First attempt: 1211, 1207, 4138
Second attempt: 1184, 1159, 4113
Third attempt: 1212, 1166, 4124
So (U,D) turns with U or D being 0 consistently occured significantly more than half the time.
I see, the third number is the first two PLUS the (x,y) with nonzero x and y.
I looked at the code and it uses Jaap's scrambler which I do not think is the best one. It has some weird biases and the functioning is pretty much opaque. Of course it is pretty much impossible
to get any new scrambler past the WCA so that will have to do. The scrambler in qqtimer is better and functions as follows:
- Choose a random move of the form (x,y), with x and y chosen completely randomly. If x=y=0 (unless it's the first move) or the move is impossible, try again. If we don't have enough remaining
moves to do this, try again.
- After performing it, if we have at least one remaining move, do a /.
- Repeat.
Now consider my scramble definition: random non-canceling sequence in <U,R,D>. It can still be written in the current WCA notation, of course. Imagine we just had an R turn. Next must be a U or D
turn, let's say it's D. Then next must be a U or R turn. I see two options to weigh them:
- Don't weigh them, they have equal probability. Half the time it's the R and we get a (0,D) or (U,0) turn. So the (U,D) turns with U or D being 0 should occur half the time.
- Weigh them according to how many possibilities each layer has. R only has one, U probably has several. So the (U,D) turns with U or D being 0 should occur significantly *less* than half the
Since the WCA scrambler appears to have them significantly *more* than half the time, not half the time or significantly *less* than half the time, we seem to differ.
We differ because I initially thought the official scrambler used the good scrambling algorithm, which it doesn't. I also assumed that when you said the U/D/R layers would be treated equally you
were essentially forcing the first weighing. I think the second weighing is a good way to go, if we must generate the moves randomly.
I don't think we should ever go by what people who don't "know[...] how to use the puzzle" think. If you think we should, why not just disassemble the puzzles and put them back together? Or, even
worse, why not take off the stickers and reapply, for our scrambles? I prefer to go by the logical route of mixing up a puzzle as close to uniformly randomly as possible, without getting in the
way of its proper functioning. Having a Square-1 in a position where / is impossible does not seem any more reasonable than having it halfway through a / move, or having a 3x3 off by 45 degrees,
or having a 5x5 halfway through a V-lockup.
Computer cube PB averages of 12: [Clock: 5.72] [Pyraminx: 3.33] [Megaminx: 49.52]
[2x2: 2.66] [3x3: 8.45] [4x4: 29.06] [5x5: 52.69] [6x6: 1:34.78] [7x7: 2:20.34]
09-04-2008 #12
Super-Duper Moderator
Join Date
Jul 2007
Where the rolling foothills rise
WCA Profile
09-05-2008 #13
09-05-2008 #14
11-17-2008 #15
09-20-2010 #16
09-20-2010 #17
Super Moderator
09-20-2010 #18
09-20-2010 #19
09-20-2010 #20 | {"url":"http://www.speedsolving.com/forum/showthread.php?6165-Square-1-shape-probability-distribution/page2","timestamp":"2014-04-17T09:56:15Z","content_type":null,"content_length":"132906","record_id":"<urn:uuid:6a2aa957-b732-4f38-8c67-49f6737df773>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
339 threads found:
171. Pre-Algebra and Algebra
Links for building sailboats for a math project.
- Date of thread's last activity: 7 August 2006
172. Factoring pairs and factorization
Ways to incorporate disciplines other than math into the teaching of factoring
- Date of thread's last activity: 21 September 2006
173. Math/science fair
Where to find math related experiments for math and science fairs
- Date of thread's last activity: 26 September 2006
174. How to assign homework that uses the Web?
Web sites that can monitor students' math progress and also give them feedback
- Date of thread's last activity: 27 August 2006
175. Pumpkin Math
Examples of fun ways to use pumpkins in math
- Date of thread's last activity: 22 October 2001
176. Building bridges
Lessons involving constructing or building bridges
- Date of thread's last activity: 20 May 2000
177. Bulletin Boards
Effective use of bulletin boards in the high school mathematics classroom
- Date of thread's last activity: 23 January 2007
178. Decorating my classroom
Using mathematics themes to decorate a classroom
- Date of thread's last activity: 2 February 2009
179. Olympics
Using the Olympic Games as a context for teaching mathematics
- Date of thread's last activity: 9 January 2007
180. Writing math journals
What should students write about in math journals? What are some good topics?
- Date of thread's last activity: 10 August 2007 | {"url":"http://mathforum.org/t2t/browse/branch.taco?level_child=middle&start_at=171","timestamp":"2014-04-20T01:02:31Z","content_type":null,"content_length":"8104","record_id":"<urn:uuid:e4295ded-a90f-4221-9823-f7895273f9ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Function Linear-Regression-Minimal-Summaries
< n > < x > < y > < x2 > < y2 > < xy > )
Calculates the slope and intercept of the regression line. This function
differs from linear-regression-minimal' in that it takes summary statistics:<br> x' and y' are the sums of the independent variable and dependent variables,<br> respectively; x2' and y2'
are the sums of the squares of the independent<br> variable and dependent variables, respectively; and xy' is the sum of the
products of the independent and dependent variables.
You should first look at your data with a scatter plot to see if a linear model
is plausible. See the manual for a fuller explanation of linear regression | {"url":"http://common-lisp.net/project/cl-mathstats/documentation/cl-mathstats-package/function-linear--regression--minimal--summaries.html","timestamp":"2014-04-17T21:40:05Z","content_type":null,"content_length":"2029","record_id":"<urn:uuid:194fee4c-9221-4869-b86d-633a8e422cd2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating required pressure to maintain flowrate
I think you're confusing what is involved in the viscous and invicsous flow assumptions. Any value of viscocity is going to make a contribution to the shear forces at the wall due to viscosity of the
fluid. Making the assumption of inviscid flow depends on what area of the flow you are interested in. If you're after a value inside the boundary layer, close to the wall, then you might need
equations to describe viscid flow. Otherwise their contributions to the flow somewhere else is usually negligible in problems like this one.
It really depends on the Reynolds number of the flow. For example, Air at high speeds is going to have more of an effect on the flow than oil at low speeds, even though oil has a much higher value
than air. | {"url":"http://www.physicsforums.com/showthread.php?p=4155549","timestamp":"2014-04-17T03:53:22Z","content_type":null,"content_length":"37953","record_id":"<urn:uuid:249e77df-7525-447f-bb64-cdba0b1235a8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00093-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modeling of parasitic losses
When modeling a proportional or servovalve for subsequent simulation, output data for flow metering and leakage are steady state. As such, they are arranged similar to those that would be collected
during testing a real valve, such as an automated data acquisition system. Figure 1 contains the flow metering and land-to-land leakage data over the spool position range of ±100% for the valve model
from previous discussions.
The first thing to note in Figure 1 is that the valve is critically lapped. Looking at the flow metering curve, the slope of the flow metering curve does not increase or decrease as it goes through
zero. This is an example of a sort of “perfect” flow grinding result. This is, more or less, the aim of the flow grinding technician in a real valve. Such near-perfect performance is not always
achievable in a real valve because of the need for practical tolerances and cost constraints. However, it is simple to achieve in simulation, even though finding critical lap is a trial-and-error
Figure 1. Flow metering and land-to-land leakage characteristics of the valve as dictated by the input data are represented. This graph contains two of 14 simulation output variables.
The second thing to note is that the peak value of the flow is about 1.84 in.³/sec, whereas the specified input is 2.75 in.³/sec. This discrepancy is caused by unknowns existing at the outset of the
simulation. We do not know where the spool will be when the leakage maximizes, so we can only guess at what that might be.
The value of the peak leakage in the input data was set to the value of the two flow functions (hyperbola and straight line) at their point of transition. However, the point of transition is not
necessarily the point where the leakage peaks.
The results in Figure 1 depict a critically lapped spool. We do not always know from the actual test data if the valve is critically lapped. Therefore, a trial-and-error process is applied (multiple
successive program executions with different values of overlap) until critical lap is achieved in the output. The peak leakage then becomes whatever that degree of overlap delivers.
Pressure metering
Pressure metering, as explained in a previous edition, gives the information needed to understand how the valve brings the actuator to a stop. Figure 2 shows the pressure metering characteristics per
port. The data were all generated from the math model of the valve. It clearly shows how the metering is confined to the center region of spool travel. In fact, as shown, the graph is of little value
because the metering is so compressed.
Figure 2. Per port pressure metering shows how the work port deadhead pressures vary with spool position, and further, how all the metering is confined to the null region of the valve.
Nonetheless, the redeeming value is that a pressure metering rule-of-thumb is valid: Pressure metering is confined to a valve’s overlap plus about 5% of spool travel. This is certainly supported by
the graph in Figure 2. But the figure also points out the need to expand the null zone so that the metering is more fully displayed and with greater resolution. This is done by running the simulation
program again, but with the spool travel range substantially shortened, which will be done shortly.
Valve coefficient metering
The entire model development has been directed at synthesizing mathematical functions whose evaluations produce data that mimic the actual performance of the valve — more particularly, development of
mathematical functions that describe the variations in the flow coefficients in going from overlapped to underlapped and vice versa. It seems fitting, then, that the flow coefficient data be
displayed as well.
ISO 10770-1 mandates that the per land flow metering be presented for all four lands in a display similar to that of Figure 3, which shows the flow coefficient, not the flow. However, the
coefficients and the flows differ only by the square root of the constant pressure that was used in testing. Therefore, no difference exists in the shape nor the conclusions to be drawn therefrom.
Figure 3. The metering of the powered lands, PA and the return land, AT, show that at null when one opens the other closes, and vice versa: a desired result.
Figure 3 shows only two of the lands, not all four. The rationale for using only two is that for the input data, the other two lands are identical to the ones shown here. Therefore, the curves would
perfectly overlay those shown unless one uses the suggested ISO 10770-1 graphing format. There, cartesian quadrants III and IV are shown, the other two lands are “flipped” vertically and are, thus,
separated and displayed in the lower half of the graph.
ISO 10770-1 graphing format is important when there are significant amounts of asymmetry among the four lands. For our mathematically pure and totally symmetrical model, there is no new information
to be added, thus Figure 3 suffices.
Recall in the model development that two representative regions consist of the overlap (closed) and the underlap (open). Figure 3 shows the two regions and also that the transition from one function
to the next takes place right at the origin. The transition takes place sharply and over a very narrow range of spool travel — again, a desired result for a nominally “zero-lapped” valve.
This further supports the conclusion that the valve is critically lapped and represents the best in flow grinding that can be accomplished. In the context of simulation, the flow grinding is actually
analogous to the trial-and-error process of tweaking the input value of the overlap to get the best possible flow metering. In actual flow grinding — also a trial and error process — the overlap is
literally ground off the overlapped spool.
Handbook serves electrohydraulic system designers
The newly published fourth edition of the Designers’ Handbook for Electrohydraulic Servo and Proportional Systems contains even more useful information than its predecessor, the highly successful
third edition, which has become the defacto Bible for electrohydraulics technology.
Now you can learn even more about electrohydraulic systems and their design, including:
• how to calculate and control pressure losses in plumbing, subplates, and manifolds,
• how to analyze and control a variety of mechanical loads, including conveyors and belts and triangulated loads,
• valve dynamic properties and how to include them in your system,
• electronics, especially transducers and signal conditioning, and
• mobile equipment electrical systems, including batteries and charging systems.
There is no limit as to how electrohydraulics is going to revolutionize our industry, so order your copy to secure your career in this dynamic technology. And if certification in electrohydraulics is
your goal, the fourth edition of the Designers’ Handbook is essential to your preparation. Don’t risk being left behind in a world where the only constant is rapid change.
To order, visit our Bookstore. Print the PDF order form, fill it out, and mail, fax, or e-mail it to us. | {"url":"http://hydraulicspneumatics.com/hydraulic-valves/modeling-parasitic-losses","timestamp":"2014-04-19T05:26:08Z","content_type":null,"content_length":"79103","record_id":"<urn:uuid:24abbef9-40a0-4d26-981d-5f79dec4373e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples of Probability - Simple Events
Use These Examples of Probability To Guide You Through Calculating the Probability of Simple Events.
Probability is the chance or likelihood that an event will happen. It is the ratio of the number of ways an event can occur to the number of possible outcomes. We'll use the following model to help
calculate the probability of simple events.
As you can see, with this formula, we will write the probability of an event as a fraction. The numerator (in red) is the number of chances and the denominator (in blue) is the set of all possible
outcomes. This is also known as the sample space.
Let's take a look at a few examples of probability.
Example 1
Now let's take a look at a probability situation that involves marbles.
Example 2
Hopefully these two examples have helped you to apply the formula in order to calculate the probability for any simple event.
Now, it's your turn to try! Check out the spinner in the practice problem below.
Practice Problem
Answer Key
Great Job! You've got the basics, now you are ready to move on.
We would love to hear what you have to say about this page!
Probability Lessons
Subscribe To This Site | {"url":"http://www.algebra-class.com/examples-of-probability.html","timestamp":"2014-04-18T18:11:02Z","content_type":null,"content_length":"26906","record_id":"<urn:uuid:84eedd5d-2b4e-47c2-b7b9-38ec75dcce54>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Curved Space-time and Relative Velocity
This "demonstrably false" notion arises out of the fact that the singularities north and south poles have been chosen simultaneously.
There are no singularities on a sphere, the curvature is everywhere finite.
If you chop off either the north or the south pole from the sphere the demonstration provided by Dr Greg fails
No. Because of the symmetry you can do this from any point on the sphere, it is just easier to describe verbally from the poles.
In his example Dr Greg has used spatial geodesics!
So what? Spacelike paths are perfectly acceptable paths for parallel transport and need not even be geodesic.
Anamitra, you don't seem to understand the very basics of parallel transport and intrinsic curvature. The most important example of parallel transport is to transport a vector around a closed loop
back to its original position (NB a loop is generally a non-geodesic path). In a curved space the parallel transported vector will be rotated from the original vector by an amount which depends on
the area enclosed by the loop as well as the direction of the loop. The Riemann curvature tensor describes exactly this property of curved space in the limit of infinitesimal loops.
In parallel transport the covariant derivative which is zero is the covariant derivative of the transported vector, the path along which it is transported need not have a zero covariant derivative,
and need not even be smooth.
If you are so stuck on your preconcieved notions that you are not willing to learn these basic and fundamental geometric concepts then you may as well just stop even attempting to learn general
relativity as it will be completely futile. I would recommend that you view Leonard Susskind's lectures on General Relativity which are available on YouTube. | {"url":"http://www.physicsforums.com/showthread.php?t=423334&page=3","timestamp":"2014-04-20T14:10:06Z","content_type":null,"content_length":"86087","record_id":"<urn:uuid:3969020f-c515-46d3-b9c3-dbf29b94fbc8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
Remembering Socrates
When asked whether the square root of 2 can be written as an exact fraction, i.e., as the ratio of two integers, many people will say they don't know. This has always seemed interesting to me,
because of course they do know - which is to say, they are in possession of all the information and understanding necessary to know the answer. If the square root of 2 equals M/N then 2 = (M/N)^2,
and if they remember that numbers factor uniquely into primes, it's immediately obvious that 2N^2 = M^2 is impossible, because a square can't equal twice a square. (The exponent of the prime 2 in M^2
is even, whereas the exponent of 2 in 2N^2 is odd.)
Even without invoking unique factorization, Euclid described a simple proof "from scratch" that anyone can easily follow. If the square root of 2 equals M/N for some integers M,N, then 2 = (M/N)^2,
and we can assume that M,N are not both even, because if they were we could divide them both by 2 while preserving the ratio. Thus, at most one of M,N is even. Writing the equation in the form 2N^2 =
M^2, we see that M^2 is even, and we also know that the square of an odd number is odd, so M itself must be even, and therefore N must be odd. Now, since M is even, there is an integer m such that M
= 2m, so we can substitute this M into the equation 2N^2 = M^2 to give 2N^2 = 4m^2, which implies N^2 = 2m^2. But this shows that N^2, and therefore N, must be even, contradicting the fact that there
must be a solution with M,N not both even.
It's interesting that nearly this very same example was used by Socrates to illustrate the same point, i.e., how people know more than they think they do. In Plato's "Meno" he recounts a dialogue
between Socrates and a fellow Athenian on the subject of whether virtue can be taught. A recurring theme in Socrates' thought was that all the knowledge we are capable of possessing is already within
us, and the process of reasoning something out is really just an act of recollection, i.e., remembering things we already (in some sense) know.
To illustrate this, Socrates questions an un-schooled servant boy about a simple geometrical proposition. Socrates draws a square, and asks the boy how he would go about constructing a square twice
as large. Initially the boy says he doesn't know, but under further questioning he thinks the answer is to make the edges twice as long as the edges of the original square, making a figure like this:
But then Socrates asks him how much area this new square covers in relation to the original, and the boy correctly observes that it has four time the area. So Socrates re-iterates the question: how
would you construct a square with just twice (not four times) the area of the original? Obviously we need a square with half the area of the one just constructed. Socrates asks the boy if we can cut
each of the four squares in half by drawing a line connecting opposite corners, and the boy answers Yes. They draw these lines ("clever men call this the diagonal") and arrive at the figure below:
They agree that the four diagonals describe a square, and its area is twice the area of the original square.
Socrates: What do you think, Meno? Has he, in his answers, expressed any opinion that was not his own?
Meno: No, they were all his own.
Socrates: And yet, as we said a short time ago, he did not know?
Meno: That is true.
Socrates: So these opinions were in him, were they not?
Meno: Yes.
Socrates: So the man who does not know has within himself true opinions about the things that he does not know?
Meno: So it appears.
Socrates: These opinions have now just been stirred up like a dream, but if he were repeatedly asked these same questions in various ways, you know that his knowledge about these things would
be as accurate as anyone's.
Meno: It is likely.
Socrates: And he will know it without having been taught, but only questioned, and find the knowledge within himself?
Meno: Yes.
Socrates: And is not finding knowledge within oneself recollection?
Socrates then goes on to speculate on when or how the boy had acquired his true opinions about geometry, and suggests that it must not have been during his present life (since Meno assures Socrates
that the boy has had no instruction in geometry).
Of course, we might observe that what clever men call the line connecting opposite corners ("diagonal") was not one of the boy's own opinions. He was taught this by Socrates, so one could argue that
the boy has in fact been taught something which he did not know, and which he (presumably) could never "recollect" simply by examining his opinions in isolation. This highlights two different kinds
of knowledge, one that derives uniquely from first principles (the common notions about geometrical shapes) and the other that is accidental and arbitrary (terminology). More fundamentally, one could
argue that people DO learn and acquire common notions about spatial relations and proportions during their formative years, as they organize their primitive sense perceptions. On the other hand,
certain very basic aspects of our experience may be "hard-wired" into the biology of our brains and sense organs. This seems to be a mode of transmission for information that Socrates doesn't
To his credit, Socrates concludes
I do not insist that my argument is right in all respects, but I would content... that we will be better men, braver and less idle, if we believe that one must search for the things one does not
Incidentally, the very next section of the "Meno" dialogue contains another geometrical example that is raised by Socrates to make a point about whether knowledge is teachable. Unfortunately the
exact sense of his words is unclear, and the available translations are all slanted toward one particular interpretation or another. Thomas Heath says that this example is
much more difficult [than the previous example], and it has gathered round it a literature almost comparable in extent to the volumes that have been written to explain the Geometrical Number of the
Republic. C. Blass, writing in 1861, knew thirty different interpretations; and since then many more have appeared. Of recent years, Benecke's interpretation seems to have enjoyed the most
acceptance; nevertheless I think that it is not the right one...
Heath then goes on to give the interpretation that he thinks most closely fits the text (based on ideas of S. H. Butcher and E.F. August). However, it seems to me that Heath's proposed interpretation
is not much more persuasive than any of the others for exactly what Socrates (or Plato) had in mind.
The translation of Plato's text available from most sources today is based on Heath's interpretation. Here is how Heath thinks the passage should be read:
If we are asked whether a specific area can be inscribed in the form of a triangle within a given circle, [we] might say... if that area is such that when one has applied it as a rectangle to the
given straight line in the circle it is deficient by a figure similar to the very figure which is applied, then [we have our answer].
This is not abundantly clear. Heath gives a somewhat mundane reconstruction based on ordinary rectangles and triangles on the diameter of the circle, and his explanation is nominally plausible (under
the interpretation he provides). However, Socrates' peculiar description has always reminded me of something else entirely.
Recall that Plato became a pupil and friend of Socrates in 407 BC, and Socrates himself lived from 469 to 399 BC. One of the most striking geometrical results of Greek mathematics was the quadrature
of the lune, accomplished by Hipppocrates around 440 BC. This would have been one of the most talked-about results during the years when Socrates was beginning his teaching, because it was the first
time anyone was able to construct, by classical methods, the area of a region with a curved boundary.
Moreover, it connects directly to the simple example of "doubling the square" that is discussed earlier in Meno, as can be seen from the drawing below:
The key to Hippocrates' argument is that the quadrant of the main circle (consisting of the regions A and B) obviously has 1/4 the area of the main circle. Also, the smaller circle has a diameter
equal to 1/sqrt(2) of the larger circle, because it is what clever men call the diagonal of a square whose sides are half of the main circle's diameter. Consequently we know that the smaller circle
has exactly half the area of the larger circle, which implies that the smaller half-circle (the regions B and C) has exactly the same area as the larger quarter-circle (the regions A and B). Hence we
have A + B = B + C, and so A = C. In other words, the area of the "lune" (region C) equals the area of the inscribed triangle A.
In other words, the area of the lune is inscribed as a simple triangle in the circle if, when we construct a circle on the edge of that triangle, the region that is excluded from the main circle is
equal to the area of the inscribed triangle. Also, the triangle is deficient (relative to the quadrant of the circle) by the very same shape by which the smaller semi-circle exceeds the required
area. Recall Heath's translation of Plato's account of Socrates' dialogue
... a specific area can be inscribed in the form of a triangle within a given circle... if that area is such that when one has applied it ... to the given straight line in the circle it is deficient
by a figure similar to the very figure which is applied, then [we have our answer].
Here I've omitted the phrase "as a rectangle". Without going back to the ancient Greek text, it's difficult to say how each term and phrase was intended, and words like "similar" vs "equivalent" are
often up to the translator to decide based on his understanding of the context. Heath's translation was naturally slanted toward his own guess as to what mathematical construction Socrates was
describing, whereas other scholars have read and translated the same text differently. It would be interesting to return to the original Greek text, with quadrature of the lune in mind, to see if a
translation based on this interpretation is feasible.
Return to MathPages Main Menu | {"url":"http://mathpages.com/home/kmath180/kmath180.htm","timestamp":"2014-04-19T03:22:02Z","content_type":null,"content_length":"26042","record_id":"<urn:uuid:753bc406-2409-414f-b3e9-97c2f904be29>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
R script to calculate QIC for Generalized Estimating Equation (GEE) Model Selection
Generalized Estimating Equations (GEE) can be used to analyze longitudinal count data; that is, repeated counts taken from the same subject or site. This is often referred to as repeated measures
data, but longitudinal data often has more repeated observations. Longitudinal data arises from studies in virtually all branches of science. In psychology or medicine, repeated measurements are
taken on the same patients over time. In sociology, schools or other social distinct groups are observed over time. In my field, ecology, we frequently record data from the same plants or animals
repeated over time. Furthermore, the repeated measures don't have to be separated in time. A researcher could take multiple tissue samples from the same subject at a given time. I often repeatedly
visit the same field sites (e.g. same patch of forest) over time. If the data are discrete counts of things (e.g. number of red blood cells, number of acorns, number of frogs), the data will
generally follow a Poisson distribution.
Longitudinal count data, following a Poisson distribution, can be analyzed with
Generalized Linear Mixed Models (GLMM)
or with GEE. I won't get into the computational or philosophical differences between conditional, subject-specific estimates associated with GLMM and marginal, population-level estimates obtained by
GEE in this post. However, if you decide that GEE is right for you (I have a paper in preparation comparing GLMM and GEE), you may also want to compare multiple GEE models. Unlike GLMM, GEE does not
use full likelihood estimates, but rather, relies on a quasi-likelihood function. Therefore, the popular AIC approach to model selection don't apply to GEE models. Luckily,
Pan (2001)
developed an equivalent QIC for model comparison. Like AIC, it balances the model fit with model complexity to pick the most parsimonious model.
Unfortunately, there is currently no QIC package in R for GEE models.
is a popular R package for GEE analysis. So, I wrote the short R script below to calculate Pan's QIC statistic from the output of a GEE model run in geepack using the geese function. It currently
employs the Moore-Penrose Generalized Matrix Inverse through the
MASS package
. I left in my original code using the identity matrix but it is preceded by a pound sign so it doesn't run.
[edition: April 10, 2012] The input for the QIC function needs to come from the geeglm function (as opposed to "geese") within geepack.
I hope you find it useful. I'm still fairly new to R and this is one of my first custom functions, so let me know if you have problems using it or if there are places it can be improved. If you
decide to use this for analysis in a publication, please let me know just for my own curiosity (and ego boost!).
# QIC for GEE models
# Daniel J. Hocking
# 07 February 2012
# Refs:
# Pan (2001)
# Liang and Zeger (1986)
# Zeger and Liang (1986)
# Hardin and Hilbe (2003)
# Dornmann et al 2007
# # http://www.unc.edu/courses/2010spring/ecol/562/001/docs/lectures/lecture14.htm
# Poisson QIC for geese{geepack} output
# Ref: Pan (2001)
QIC.pois.geeglm <- function(model.R, model.indep) {
# Fitted and observed values for quasi likelihood
mu.R <- model.R$fitted.values
# alt: X <- model.matrix(model.R)
# names(model.R$coefficients) <- NULL
# beta.R <- model.R$coefficients
# mu.R <- exp(X %*% beta.R)
y <- model.R$y
# Quasi Likelihood for Poisson
quasi.R <- sum((y*log(mu.R)) - mu.R) # poisson()$dev.resids - scale and weights = 1
# Trace Term (penalty for model complexity)
AIinverse <- ginv(model.Indep$vbeta.naiv) # Omega-hat(I) via Moore-Penrose
generalized inverse of a matrix in MASS package
# Alt: AIinverse <- solve(model.Indep$vbeta.naiv) # solve via identity
Vr <- model.R$vbeta
trace.R <- sum(diag(AIinverse %*% Vr))
px <- length(mu.R) # number non-redunant columns in design matrix
# QIC
QIC <- (-2)*quasi.R + 2*trace.R
QICu <- (-2)*quasi.R + 2*px # Approximation assuming model structured correctly
output <- c(QIC, QICu, quasi.R, trace.R, px)
names(output) <- c('QIC', 'QICu', 'Quasi Lik', 'Trace', 'px')
2 comments:
1. This comment has been removed by the author.
2. This looks so useful, but I can't figure out how to actually use it. Could you provide an example of its use? Thanks so much. | {"url":"http://quantitativeecology.blogspot.com/2012/03/r-script-to-calculate-qic-for.html","timestamp":"2014-04-17T03:58:13Z","content_type":null,"content_length":"115895","record_id":"<urn:uuid:88725d5c-77ce-434b-adad-fb117eb57dd8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
National City, CA Calculus Tutor
Find a National City, CA Calculus Tutor
...By the time I graduated, I was a member of the International Thespians Society. During those four years, I became an honors member after completing over 600 cumulative hours of involvement in
the theater. I used to be a theater student in high school, and would regularly get on stage for my performances.
42 Subjects: including calculus, reading, English, Spanish
...I learned a great deal about our education system and its inequality during those sessions. I learned that every student is different, and I learned several methods by which to attack their
weaknesses and promote confidence in their abilities. Later, after transferring to North Carolina, I began more formal education training and continued tutoring.
26 Subjects: including calculus, Spanish, chemistry, physics
...My tutoring experience includes study sessions with fellow classmates who were struggling with the subject. So although it is not certified tutoring experience, my fellow classmates' consistent
appreciation and gratitude for my help makes me confident in my ability to help those who are struggling. My tutoring style is committed, patient, and thorough.
10 Subjects: including calculus, chemistry, algebra 1, algebra 2
...Moreover, I have worked both with big tutoring companies, learning their expensive secrets, and independently, so that I have the freedom to teach at the pace of each student--not a curriculum.
Over the last year, I have primarily tutored high school and college students, helping them prepare fo...
54 Subjects: including calculus, Spanish, English, chemistry
...Today I have hundreds of hours of experience, with the majority in Algebra and Statistics, and I would be comfortable well into college math. During the learning process, small knowledge gaps
from past courses tend to reappear as roadblocks down the line. By identifying and correcting these problems, I help students become effective independent learners for both current and future
14 Subjects: including calculus, physics, geometry, statistics | {"url":"http://www.purplemath.com/National_City_CA_Calculus_tutors.php","timestamp":"2014-04-18T03:43:24Z","content_type":null,"content_length":"24508","record_id":"<urn:uuid:0e29dd6e-3ec6-4a5e-b4bd-9be03af9320b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00338-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal of Econometrics 30 (1985) 171-201. North-Holland
EXPLICITLY INFINITE-DIMENSIONAL BAYESIAN ANALYSIS OF PRODUCTION TECHNOLOGIES
A. Ronald GALLANT and John F. MONAHAN
North Carolina State University, Raleigh, NC 27695-8203, USA
The firm's cost function is viewed as a point in a function space and data is viewed as following some probability law that has as its parameters various functionals evaluated at the firm's cost
function. The Fourier flexible form is used to represent a cost function as an infinite-dimensional vector whose elements are the parameters of the Fourier form. This representation is used to assign
a prior distribution to the function space. A procedure for numerical computation of the posterior distribution of an elasticity of substitution is set forth. The ideas are illustrated with an | {"url":"http://people.duke.edu/~arg/pubs/joe85b.html","timestamp":"2014-04-21T15:29:15Z","content_type":null,"content_length":"1524","record_id":"<urn:uuid:5647d71a-5f08-4278-ae3f-b73b9e47605e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem with calculations
Problem with calculations
This program should allow you to enter the percentage desired
at the end of a course, the total number of points in the
course, the points earned to date, and the total points to
date. The program should then calculate and display:
1) the number of the remaining points in the class needed to
earn the desired percentage
2) the percentage of the remaining points needed to earn the
desired percentage in the course.
Read the input in the order below: (Don't read anything else.)
1) Percentage desired
2) Total points in the course
3) Points earned to date
4) Total points to date
Heres is what I have got so far. I am not sure how to convert the formula over to C code
#include <stdio.h>
int main ()
float percent;
int totalpoints;
int earnedpoints;
int totaltodate;
printf("Enter Percentage Desired:");
scanf (" %d", & percent);
printf("Enter Total points in the course:");
scanf ("%d",& totalpoints);
printf("Enter Total points earned to date:");
scanf("%d ", &earnedpoints);
printf("Enter Total points to date:");
Welcome! Read the announcements, especially the part about using code tags.
The first step would be to write out the formulas using standard math. Then coding it in C is pretty simple.
Damn! Beaten.
I think it'd also be a better idea if the program calculated the % for you....
#include <stdio.h>
int main(){
float curPoints;
float maxPoints;
float percent;
printf("Enter your current points:\t");
printf("\nNow enter the max points possible:\t");
percent = (curPoints/maxPoints) * 100;
printf("\nHey, you have a %f percent, guy!",percent);
return 0;
Just use some good ol' math, honey. Also...it seems that you're entering a value for "total points"...twice?
totalpoints and totaltodate? | {"url":"http://cboard.cprogramming.com/c-programming/62583-problem-calculations-printable-thread.html","timestamp":"2014-04-19T08:58:00Z","content_type":null,"content_length":"8898","record_id":"<urn:uuid:b9bbb898-cc7c-4d58-9922-8583b841c8b8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Transcript of keynote video-conference address, August 23, 1999
Stephen Wolfram
Wolfram Research, Inc.
Note: This talk represents the state of Stephen Wolfram's thinking on these topics at the time of IMS '99. Since then he has made considerable further progress, which will be described (along with a
very great many other things) in his forthcoming book A New Kind of Science.
Well I'm very happy to be here today, if only in virtual form.
I guess when this conference was first being planned, I had hoped very much to be able to come also in physical form.
But as some of you probably know, for the last eight years I've been doing a very very big science project, and to get it finished, particularly recently, I've had to become more and more of a
In fact you would probably be surprised at how little I'm getting out now: in fact the very last conference I traveled any distance to was the IMS in Southampton four years ago.
So anyway I'm very much looking forward to having my science project finished, and being kind of free again.
And it's perhaps because I've been a recluse so long I've decided that here I would try to talk about some somewhat advanced and abstract things--about the foundations of mathematics, and their
relationship to my big science project, and their relationship to Mathematica.
Some of you probably know a little about what my science project is about--though I haven't talked much about it. What I'm doing is something very ambitious.
I'm basically trying to build a whole new kind of science, certainly as big as any of the existing sciences, like physics and chemistry and things, and perhaps in some ways bigger.
The basic point of the science I'm trying to build is this. If you look at the history of science for the last 300 years, it's probably fair to say the hard sciences have been following one basic
idea: to find mathematical equations that represent things in nature.
Well, that idea worked pretty well for Newton and friends in studying orbits of planets and things like that. But in lots of other cases--for example in biology--it's really never worked at all.
So the question is: what else can one do?
Well, I think it's reasonable to assume that nature follows definite laws, definite rules: otherwise we couldn't do science at all. But the question is: why should those rules be based on the
constructs that we happen to have come up with in traditional mathematics?
Why should they be about numbers, and derivatives, and all those kinds of things in mathematics?
Well, what I decided to try to do nearly 20 years ago now was to see what science would be like if one thought about using more general kinds of rules: the kinds of rules that can easily be embodied
in computer programs, but can't necessarily be represented easily in traditional mathematics.
Some of you probably know some of the things I did in the early 1980s on cellular automata, and getting the field of complex systems research started. And I'm pretty happy with the work I did then,
though I'm actually far from thrilled with the way the field developed in general terms afterwards.
I figured out quite a lot of stuff in the early 1980s: enough to convince me that the idea of generalizing the kinds of rules one uses to study nature isn't completely crazy.
But then in the mid-1980s I got kind of stuck--I didn't have very good tools to use for my computer experiments, and I seemed to be spending all my time writing little pieces of code and gluing them
together and so on. And it was about then that I had the idea that perhaps I should build a big software system that would be able to do all the things I needed, and might even be useful to some
other people as well.
And that was kind of one part of how Mathematica came to be. Of course since I'm a practical fellow, I tried to design Mathematica to be as useful as possible to as many other people as possible,
But a big part of my motivation for building Mathematica was that I wanted to use it myself.
And I'm happy to say that starting just after Version 2 came out, I was able to start doing that very seriously.
Well, I've discovered a huge huge amount of science with Mathematica. And I'm very much looking forward to telling everyone about it. But it's a big intellectual structure that I've been building,
and I'm not quite ready to talk about all of it yet.
I'll talk about a few pieces here. And I hope that when my book about all of this--it's called A New Kind of Science--is out you'll all have a chance to really read about it more completely.
Well, what I want to do today is actually to talk about a topic that's sort of at the intersection of my two most favorite kinds of topics: my new kind of science and Mathematica.
It turns out that at that intersection are questions about the foundations of mathematics, and there are some new things I've realized about the foundations of mathematics from working on my new
science, and also from working on Mathematica.
I'm particularly happy to be talking about this here because there are several people in the Mathematica community who I've really enjoyed interacting with on these topics--though they definitely
don't always agree with me--particularly including Bruno Buchberger, Dana Scott, Klaus Sutner, and Greg Chaitin.
OK, well what I'm going to say here may be somewhat abstract, but I hope and believe that with all of your background in Mathematica, you'll be almost uniquely in a position to really get something
out of what I'm trying to say. So let me try to get started.
I'll tell you a little bit about my new kind of science, then I'll tell you how it relates to the foundations of mathematics, and Mathematica.
There are some pretty big shifts in intuition involved in my new science, and I certainly won't be able to explain all of it here. But to understand the rest of what I'm going to say, I have to spend
a few minutes explaining some of what my science is about.
Well, as I said, what my science is based on is thinking about what arbitrary sets of rules do: in effect, what arbitrary simple computer programs do.
Now normally when one builds computer programs, one sets up the computer programs to do specific things. But what I'm interested in in my new science is what arbitrary computer programs--say ones one
might choose at random--do.
It's very hard--in a sense provably impossible--just to figure this out by pure thought.
But it's easy to do experiments. And what's actually particularly fun about these experiments is that they're so easy that they could even have been done probably thousands of years ago.
Maybe they were. I actually don't think so. Because I think if people had ever seen the results they give then science would have actually developed along a very different track than the one it's
actually followed.
OK, so let's take a look at these experiments, and let's look at what some simple programs do.
I'm going to talk first of all about some things called cellular automata, because they happened to be the first things I looked at in the early 1980s. But one of the things I've discovered in the
last eight years is that nothing I'm saying is really in the slightest bit specific to cellular automata.
OK, so what is a cellular automaton?
One can set one up by having a line of cells, each one let's say black or white.
And then one has a rule for evolving the thing down the page. So let me show you an example.
So this might be a rule for the cellular automaton. It says if you have a cell, then that cell will become black on the next step unless it and its immediate neighbors are all white.
So let's ask what happens if we take this particular rule and just start it off from a single black cell.
OK, so this is what we get: it's very straightforward. We start off with a single black cell at the top there, we apply this rule over and over again, and we just get a simple uniform pattern.
OK, well let's try changing that rule a little bit and see what happens to the kind of pattern we get. Here's a rule, very similar to the previous one, yet slightly different.
Let's see what that rule does.
So, what we see is a checkerboard pattern.
Well, at this point we might guess that there's some kind of theorem. And the theorem might say if you have a sufficiently simple rule, and you start it off with a sufficiently simple initial
condition, then what you'll get is a simple, let's say periodic or repetitive, pattern.
Well, let's try another rule. We can just change the rule we've been using a little bit.
Let's try this rule here. Let's see what pattern that produces.
Well, that produces a rather different pattern. What's this pattern going to do? Let's run it, let's say for 100 steps. Here's the result that we get.
And what we see here is that this particular cellular automata rule, when it starts off from a single black cell, instead of just making a repetitive pattern, it makes a nested self-similar pattern.
This is one of those Sierpinski gasket fractals.
Well, just to emphasize how simple a kind of program a cellular automaton actually is, we could for example just write a cellular automaton program in Mathematica. Let me show you what a step of
cellular automaton evolution looks like in a modern Mathematica. Here it is.
So here for example, a is the current state, and num is the rule number. So here for instance, this was rule number 90. Let me regenerate the picture that I had above, by just running this little
ListConvolve thing, which is the cellular automaton rule.
Looking at what we get here, with our simple cellular automaton rule, starting off from a simple initial condition, we again get a pattern that we can recognize as quite simple. It's a nested
pattern. So again we might guess that the true theorem is if you start off with a simple rule and a simple initial condition, then you either get a repetitive pattern or a nested pattern. But we
might guess--and certainly that was my original guess--that if one has a sufficiently simple setup like that then there's nothing more complicated that one can get.
OK, well let's try this particular creature here. Here's another cellular automaton rule, same kind of idea as the previous ones but different particular choices of outcomes for different
Let's see what this does.
Well this does something very strange. We're starting off from just a single black square here, but now instead of making something that looks like it's obviously a repetitive pattern or a nested
pattern, it looks like some more complicated mess. Well, let's go on. We can run that rule for a bit longer, see what it does.
It doesn't seem to do anything terribly simple. We can keep running it. Let's keep running it, let's say for 400 steps. Here's the pattern that we get.
Well you can see there's some regularity in all this. But much of it is pretty random. And actually I know rather well that it's random, because essentially we've been using the center column of this
pattern as the thing that makes integer random numbers in Mathematica for the last 11 years. And so a lot of people have tested it, and nobody's ever found any deviation from randomness.
But let's just look at this picture.
It's just nothing like we would expect.
I mean, from building things ourselves and doing engineering and so on, we're used to the idea that if we want to get something complicated we'd better have a complicated set of plans or rules--or
else we'll just get something very simple out. But here we have some simple rules, there they are, and we start off from this very simple initial condition, yet we get something very complicated out.
So the question is: what's going on here?
Well, I think what's going on is that we're basically seeing in a very direct form what I think is essentially what has been sort of the big secret of nature for a really long time.
You see, there's a funny thing. If you look at two objects: one's an artifact, one's something made by nature, it's a pretty good heuristic that the one that looks more complicated is the one that
nature made. And the one that looks simpler is the one that humans made.
And I suppose it's kind of that sort of thing that made people assume that there had to be some kind of superhuman supernatural intelligence behind the things that got built in nature.
But one of the things that's happened in the past few hundred years is that it's been pretty much figured out what a lot of the underlying rules in nature are. And actually they're really pretty
So that makes it even more bizarre that nature can produce all the complicated stuff we see.
And it's been a big mystery how that can possibly happen.
But I think--and actually now I've accumulated a huge amount of evidence for it--that this thing that I've seen in simple programs like the cellular automaton that's called rule 30, is the key to
what's going on.
See, the rules that we use for engineering are essentially special ones that we've set up to be able to do the tasks we want to do. And to make that work these rules have to produce only fairly
simple behavior--behavior whose outcome we can reasonably foresee.
Otherwise we wouldn't be able to use those rules to achieve particular tasks. But the point is that nature doesn't have any constraint like that. So it can effectively pick its rules much more at
And the big fact that I've discovered is that if one does that a lot of the rules one sees--even if they themselves are very simple indeed--can produce incredibly complicated behavior.
And I think that's basically what the sort of underlying secret of nature is. There are a huge number of questions in all sorts of areas of science that one can tackle for real once one knows this.
And that's part of what I've been doing for the past eight years.
But this phenomenon is also important for thinking about mathematics. In fact, I think the general approach I've been taking should lead not only to a new kind of science, but also to something
that's essentially a major generalization of mathematics. And that's a large part of what I want to talk about here.
Well let me explain why that is.
One can think of mathematics as being something that effectively tries to work out the consequences of abstract sets of rules.
The question is what kinds of rules one uses.
And I think what's happened is that mathematics has ended up using only rather special kinds of rules.
But in fact there's a vast universe of other rules out there, which mathematics has essentially never looked at.
And actually the character of what those rules do, and what one can do with those rules, is somewhat different from what happens with the ones that are usually dealt with in mathematics.
I should say that if one wants to, one can certainly write a cellular automaton like the one I was showing there, in sort of mathematical form. This is in fact its rule. If P, Q, and R are the colors
of neighboring sites, this is the rule for the color of the next site.
But it turns out that even though one might be able to state this as a mathematical rule, it's not a terribly natural thing to do. And the usual ideas of, for example, algebra that one might try to
apply to something like this, tell one almost nothing about the behavior that one gets from the rule.
So effectively what's going on here is something that ends up being quite disconnected from the kinds of things that we usually think of as mathematics.
OK, so one question is, if mathematics isn't dealing with genuinely arbitrary abstract rules, what exactly is it dealing with?
I think in the end to figure this out, one effectively has to look at history.
I think defining mathematics is a little like defining life. One knows definitions of life that say something is living if it can move itself around, or if it can reproduce itself, or if it eats
things, and so on. But as one goes on and looks at all these definitions, one realizes that there are lots of devices, physical processes, and so on, that satisfy all of these various definitions,
just like life does. And in the end the only definitions that actually work and correspond to what we intuitively think of as being a living system, are ones that involve the specifics of containing
DNA and things like that. And essentially involve the particular history that life has taken on Earth over the course of geological time.
And I think it's sort of the same with mathematics. One can come up with all sorts of abstractions for what mathematics might be, but in the end one basically has to look at history.
And essentially one has to go back to things that were happening in ancient Babylon.
What seems to have happened in Babylon is that two things were developed: essentially arithmetic, and geometry.
And I think that what's happened is that since then essentially all the mathematics that's been developed has somehow been derived from those original forms of arithmetic and geometry.
Actually if one looks up the definition of mathematics in most dictionaries, one will find a definition that basically agrees with this.
But it's not the definition that most professional pure mathematicians imagine is the appropriate one for mathematics. They think that somehow mathematics is about studying quite arbitrary abstract
And that's actually how I thought about mathematics when I named Mathematica "Mathematica".
But if you actually look as a practical matter at what mathematics as mathematicians actually do it is--how that's defined, it's somewhat different.
See for a long time in the history of the development of mathematics, people thought of mathematics as somehow being about describing the way our universe works. And that was the justification that
was used for the kinds of constructs that were developed in mathematics, the ones that should be considered, and the ones that should not be considered.
And that's what led for example historically to the fact that the idea of having more than three dimensions to space was a hard thing to introduce, and things like that.
And--particularly after Newton and friends did their stuff--it was kind of felt that for sure the generalizations of arithmetic and geometry that have been so successful in their efforts to kind of
put a formalism into science should be enough to describe everything that's relevant in our universe.
But one thing I can tell you pretty definitively from the science that I've done is that that's just not true.
And in fact it's that assumption that's made a lot of science get stuck.
A big part of what I've discovered is that the kinds of rules that nature really seems to follow are ones that are pretty easy to represent in simple computer programs, but almost impossible to
represent in traditional kind of arithmetic-and-geometry mathematics.
So the idea that one shouldn't make the rules in mathematics more general because that would mean going beyond anything that's in our actual universe is just wrong. That's not a reason to not think
about generalizing the kind of arithmetic and geometry-based rules in mathematics.
Of course, people in mathematics kind of thought they stopped worrying about making rules correspond to our universe about a century ago.
That was one of the big achievements of Cantor, and to some extent Galois, and friends. To start making constructs in mathematics that were purely abstract, and not intended to be linked to anything
in the actual universe.
But OK, without the constraint of linking things to what happens in the actual universe, how do people pick the constructs to use in mathematics?
Well that's an interesting story and it's sort of central to what mathematics is, and what it isn't.
And it turns out that what it all revolves around is the idea that mathematicians, at least in the last century or so, seem to have been proudest of: the idea of proof in mathematics.
I guess that mathematicians always feel that they want certainty; they want to be sure that they are figuring out things that are true.
And they have the idea that the way that one does this always has to do with proofs.
I guess that idea started with Euclid and friends. Euclid didn't want to rely on how some geometrical figure was drawn in the sand or whatever, and whether two angles that looked like they were the
same, were the same as far as he could measure them. He wanted to know precisely were these two angles the same, not just did they look the same as well as he could measure them.
He wanted a proof, some kind of logical proof, that these two angles were exactly the same.
So he got into the idea of essentially using logic to take some statement and see if it could logically be deduced from some set of axioms.
Now, to be fair, not all mathematics has historically been done that way. Particularly in areas like number theory, people--people like Gauss for example--quite often did experiments to see what was
But particularly as things got more hairy with infinitesimals, and things where everyday intuition seemed never to work, it got pretty much taken for granted that proof was the only way to move
forward in mathematics.
And in fact if you look at the Bourbaki books, for example, their opening words are: "Since the time of the Greeks, to say 'mathematics' has meant the same as to say 'proof'".
Well, I myself have never been a great fan of the idea of proof.
And perhaps that shows that I really am a scientist, not a mathematician. I like starting out without knowing what's true, and making observations and doing experiments, and then finding out what's
true. Sort of discovering things from scratch.
I don't like somehow having to guess what's true, then work backwards and try to find a proof of it.
And actually my experiences have kind of strongly conditioned me this way by now. Because in the science I've done I've discovered--by being I think a fairly careful experimenter--things that I'd
never possibly have guessed.
And actually I've discovered quite a few things that I at least had thought I had proved were impossible.
Of course, once I actually found out what's true, I could see that there were bugs in the proof.
But at least until we have things like Bruno's Theorema up and running there's no way at all to detect those bugs. It's not like Mathematica, where we can do all kinds of automated software quality
assurance and so on.
We have to rely on human reviewers in journals and things like that, which is a very unreliable basis I think for finding out what might be true.
Well even though I myself don't happen to be particularly thrilled with it, mathematics as an activity has taken proof pretty seriously.
In fact, it's taken it so seriously that it hasn't really looked much at areas where it seems like you can't do proofs easily.
And essentially I think that's why there's never been mathematics that's looked for example at my friend the rule 30 cellular automaton.
In fact, I think it's a pretty good model for how mathematics has grown--that there's a pretty good model that one can make that's pretty much this.
It all started sometime in ancient Babylon from plain arithmetic and geometry. And those in turn actually arose from practical applications in commerce and land surveying and so on.
And in these areas, some neat theorems were found, and some proofs were developed for those theorems.
And being extremely pleased with the theorems, mathematicians started looking for ways to get the most out of these theorems.
And to do this, what they tended to do was to generalize the constructs that these theorems dealt with in such a way that the theorems always stayed true.
So the idea was to make things as general possible, making that generalization by having the constraint that some theorem or another could still be proved, but then sort of take away as many pieces
as possible while still letting the theorem be proved.
So of course with this scheme one thing one can be sure of is that the systems that one was going to set up, were ones where the idea of proof still worked.
Well, one of the questions that's often asked is a question about whether mathematics is sort of invented or discovered.
I think it's pretty clear that the sets of rules that mathematics has actually ended up looking at were very much invented--in a definite human way, with particular constraints.
I suppose it's kind of weird: one usually thinks that mathematics is somehow more general and more abstract than, say, physics, or some other kind of natural science.
And the reason for that is that one thinks its rules are somehow more arbitrary.
It's somehow dealing with more arbitrary kinds of rules.
But actually the conclusion that I've come to is that that's not true, and that in fact the rules in physics--while there are many fewer of them--are chosen in a sense more arbitrarily--and are
probably much more representative of all possible rules than the ones that have typically ended up being studied in mathematics.
So, in a sense, my adventures in looking at what arbitrary programs do can be thought of as a big generalization of ordinary mathematics--in a sense a study of what arbitrary mathematicses in the
space of all possible mathematicses do.
Well as I'll talk about a little later, an arbitrary mathematics has some similarities to our particular human mathematics--the one that's developed historically--but it also has some pretty
substantial differences.
But before I go on and talk about that, let me say a little about how all this relates to Mathematica, and to thinking about what one can call the foundations of Mathematica.
Well, first let's sort of say what my kind of abstract view of Mathematica is.
I've kind of viewed my job in designing Mathematica to be to think about all possible computations that one might want to do, and then to identify definite chunks of computational work that would get
done over and over again in those computations.
And then what I've tried to do is give names to those chunks. And those are the primitive functions of Mathematica.
Well, one of the things I've learned from building my new kind of science is that I did a decent job in a certain way: the constructs that I put into Mathematica correspond pretty well with the
constructs that seem to be relevant for representing for example the rules in our universe and in the natural world.
That has a very practical consequence. If you look at my new book when it's out, you'll find the notes in the back are full of Mathematica programs. And those programs implement the various kinds of
programs that I talk about in the book.
But the point is that those implementations are mostly very very short. In other words, it doesn't take many Mathematica primitives to build the kinds of constructs one needs to represent the things
that I think are going on in our universe.
Here's an extreme example of that: one of the things I've been doing a bit of is taking what I've discovered, and trying to use it to finally come up with a truly fundamental theory of physics. In a
sense I've been trying to reduce physics to mathematics: to get a single abstract structure that is, exactly, everything in our universe.
That sounds like an extremely tall order, but for reasons I'm not going to go into here, I'm increasingly optimistic about my chances of success.
But the point is that one of the ways one can measure how well the primitives in Mathematica were chosen is how complicated the final program that is the universe needs to be, when it's written in
And actually, I had thought it was going to need to be rather long. But I noticed a year or so ago that in fact the basic system I was studying could actually be written as a surprisingly short
collection of somewhat bizarre rules for Mathematica patterns. The only problem was that they didn't run all that fast.
I suppose that's not surprising if you're trying to reproduce the whole universe, that it not run all that fast.
But in the next version of Mathematica, or perhaps the one after that, I will tell you now the slight secret that there'll be a few hidden optimizations that make the universe run a little faster in
Well, I'm pretty confident that the primitives in Mathematica are well chosen if one wants to represent fairly arbitrary computations of the kind that for example seem to get used in natural science.
But what if one wants to "do mathematics"? Are the primitives that one has the best possible primitives?
Well, obviously one can go a very long way with these primitives, as we've all done.
And actually that right there tells one something about what's involved in mathematics.
But obviously there are a lot of primitives in Mathematica--say things like Factor and Expand and so on--that are very specifically set up to fit into particular kinds of structures that people often
use in mathematics.
And then way down at the bottom level in Mathematica there are general transformation rules, which seem to be a good representation for arbitrary computations.
But the question is: what are the intermediate constructs that are not specific to particular areas of mathematics, but are constructs that support the general kinds of rules that are used in
Well, to answer this one has to know more about what mathematics actually is, and what special characteristics its rules have.
One might think that the question of "what is mathematics?" is just one of these abstract things only of interest to philosophers.
But actually if one's trying to design the most general stuff that will be useful in Mathematica, for mathematics, it becomes very important in practice.
And I should say immediately that I haven't figured the answers here out, though I think I'm now beginning to define at least some of the relevant questions.
I kind of like these kinds of problems; in fact, in a sense this is ultimately the kind of thing I've spent most of my adult life doing: Trying to kind of whittle things down to find the very
simplest most minimal constructs that one needs to do things. Whether that's cellular automata and models of nature, or whether that's constructs to support programming in Mathematica.
Of course it's the fate of someone like me who spends a huge amount of time figuring out things like this that all these things in the end come to seem obvious.
Actually I've often thought how terrific it would be if people would study a little more carefully why things in Mathematica, for example, are done the way they are. There's often a huge amount of
thought behind things that might seem obvious, and having that thought more widely understood would be really good for everyone.
Anyway, I'm getting off-topic.
I was talking about understanding what the essential features of the activity that we call "mathematics" really are.
So how does one start on a question like this? Well, like any natural scientist I think the place to start is to build a model.
By the way, just in case there are any official mathematical logic model theorists in this audience, I don't mean quite your kind of model. I mean the kind of model a physicist, for example, builds.
And the essence of that kind of model is that it's somehow an idealization of a system.
What one tries to do in making a model in natural science is to capture those aspects of a system that one cares about, and ignore all the others. A model is in the end an abstract thing. I mean, one
doesn't think a planet going around its orbit has a bunch of little cogs--or little Mathematicas--inside it solving differential equations.
Instead, the differential equation is an abstract representation of the effect of what happens.
Now, I must say that scientists--even physicists--regularly get extremely confused about this issue.
I mean, the number of times I've heard physicists say: but how can your cellular automata model be a model for such-and-such, because we know the thing itself is solving a differential equation?
Well of course that's not true, both the differential equation and the cellular automaton are just abstract representations of an idealization of what the system is doing.
The bottom line is that a model, as that term is used in natural science, is an abstract idealization of something.
OK, so what's a good model for mathematics?
Well what we need is something that captures the essential features for our purposes of mathematics, but leaves all the inessential details out.
Well, I think it's been fairly clear for a century or so that the first step is to think about mathematics in terms of operations on some kind of symbolic structures.
And there are areas of mathematics like category theory that go a certain distance in defining general operations on symbolic structures.
But I've never thought they go nearly far enough.
And when I was designing Mathematica what I ended up trying to do was to set things up so one could go much further . . . so one could set up absolutely arbitrary symbolic structures and do
essentially completely general transformations on them.
I originally came to this--actually when I was working on SMP around 1979--by thinking about how to capture and generalize what humans do when they're doing mathematics by hand.
I sort of imagined that one was always looking up books and finding rules that said that an expression like this gets transformed into an expression like that.
And then I kept on generalizing that idea, and ended up with the notion of transformation rules for patterns.
And, as we all know from Mathematica that idea is extremely successful in representing all kinds of computational and mathematical ideas. And gradually more and more people even seem to understand
that point.
Well, what I did with transformation rules in SMP and later Mathematica is a bit like the whole tradition of work in mathematical logic and in the theory of computation.
But it's different--at least in intent--in some subtle but rather important ways.
And to understand more about what people usually think of as being the discipline of mathematics I'll have to explain something about that.
Ultimately the big difference is between being interested in doing proofs and being interested in doing calculations.
But to see what's gone on I'll have to tell you again a little bit about the history of all this.
Well, I guess it was at the end of the 1800s, particularly following all the discoveries made by generalizing Euclid's axioms and so on, there got to be the idea that somehow mathematics could be
thought of as a study of the consequences of appropriately chosen sets of axioms.
And it turns out that axioms in this sense are very much like transformation rules for symbolic expressions.
These are Peano's axioms.
So this is just an example of an axiom system. These are Peano's axioms for arithmetic. And one can see that they're--I'm not going to go into this in great detail--one can think about these things
as essentially like transformation rules for symbolic expressions.
Well actually, in the way things are often set up, there are usually so-called "rules of inference" in addition to axioms. And it's these rules of inference that are the direct analog of
transformation rules.
But actually, as I'll perhaps explain later, one can just as well think about the axioms themselves as being like transformation rules.
And essentially what the axioms say is that one mathematical construct or statement can be transformed into another mathematical construct or statement.
OK. Well that sounds very much like transformation rules in Mathematica. But there's a crucial difference, and it's associated with the word "can".
You see, in Mathematica, one just takes an expression and then uses a sequence of transformation rules and looks at what comes out.
But in the axiom setup in mathematics, one is usually interested in asking whether one can find any sequence of transformation rules that will get one from one particular expression to another.
It's basically the difference between calculation and proof.
In a calculation one just wants to follow a procedure and get an answer.
In a proof one wants to know whether there's any path one can follow that goes from one statement to another.
And as a practical matter it's usually much easier to do calculations than to do proofs.
And that's sort of why Mathematica can work well, and why logic programming languages like Prolog and so on that are closer to emulating proofs never really work that well.
And that's why I've always tried to be very careful in designing Mathematica to set modest expectations with the names of functions like Simplify that have to do things that are more like proofs.
But anyway, the idea of thinking of mathematics as the study of the consequences of axiom systems did introduce the notion of transformation rules historically.
But so what were these transformation rules like?
Well, actually, some of them were actually formulated very much like the ones in Mathematica.
And in fact by the 1940s there were transformation rules being talked about that are very similar in spirit to the ones in Mathematica.
But at least when people thought about these being applied to mathematics, they normally thought about them as being used in a proof kind of way rather than in the kind of way they are used in
OK, so basically what we've learned here is that the kinds of transformation rules that are in Mathematica are decent models for the kinds of transformation rules that are in mathematics.
And of course in a sense the success of Mathematica already told us that.
But one question is, do we need all the complicated stuff that's in the transformation rules in Mathematica to be able to reproduce what are somehow the essential features of mathematics?
Well, I don't think so. Not at all in fact.
And actually that's analogous to what I've found over and over again in physics and biology and other natural sciences. The really important and general phenomena don't depend on very much, so one
can perfectly well capture them even with very simple models.
OK. So what then is the appropriate minimal model for mathematics?
What can we get rid of and still have the most important phenomena in mathematics?
Well, Mathematica patterns have already gotten rid of all sorts of hair that one usually sees in formulations of mathematics.
For example, there are no explicit types in Mathematica expressions. Only implicit ones from the structure of the expression tree or the names of heads.
I might say that when Mathematica was young, people often said: you can't do anything like mathematics without having an explicit notion of types.
But actually I think it's fairly clear that they were wrong. What we can do in Mathematica with symbolic expressions and their structures is actually much more general than what one can do with
ordinary types.
Let me make a brief historical digression, that some of you may find fun, about types.
Well, in an effort to build up all of mathematics from some kind of uniform set of primitives the first really serious efforts along these lines got made by Frege in the late 1800s and by Russell and
Whitehead in the early 1900s.
All of these folks had the idea that one should start from logic to try to build up all of mathematics.
For various reasons, I actually happen to think that the basic idea is fairly silly--that one wants to start actually from much more arbitrary symbolic structures, rather than from the particular
structure defined by logic.
But anyway, what they were trying to do was a little like what I'm talking about today in making minimal models for mathematics--or, for that matter, what I was trying to do in the underlying design
of Mathematica.
They wanted to be able to have a small set of primitives, and then assemble these to represent all the constructs of mathematics.
Well, they had a difficult time doing what they wanted to do. And of course they had many practical disadvantages compared to what we can do today. Like never being able to run the things they
But all in all I've actually always considered Principia Mathematica of Russell and Whitehead to be perhaps the worst example of language design of all time. I think some modern languages and systems
are pretty bad. But actually nothing compared to Principia Mathematica.
One of the most baroque pieces of Principia Mathematica in fact has to do with types--and later ramified types--which were originally introduced as a way to avoid various logical paradoxes and so on.
Well, anyway--it's all a long story, not all of which I even know--but this idea of types that kind of came from this detail about avoiding paradoxes and this rather baroque formalism of Principia
Mathematica somehow got assumed to be fundamental to all of the formalism of mathematics.
But I think Mathematica proves that it isn't really necessary to think about that kind of thing.
So, alright, let's assume Mathematica transformation rules are enough to represent mathematics. But what can we drop then in these transformation rules?
Well, another big feature of transformation rules is variables, and the ways variables are treated in scoping and things like that.
Another big feature of transformation rules, at least in Mathematica, is that they operate on expressions that can be thought of a bit like trees.
But let's for the time being assume we can just ignore both of these ideas.
Later, if there's time, I can talk a little bit about some rather abstract things called combinators that actually don't ignore these ideas, but still give very simple models of mathematics. But the
models are a bit harder to understand than the models that I was planning to talk about.
OK, so we're going to ignore variables and tree structures.
What's then left in Mathematica and its transformation rules and so on?
Basically what Mathematica is then doing is transforming sequences of symbols, or strings of symbols.
Well, in a practical use of Mathematica the appropriate strings tend to be pretty complicated.
In an effort to find the minimal stuff that's going on, let's see what happens if we work only with very simple strings.
OK. So let me try and show you some examples of what an absolutely minimal Mathematica would look like.
Well, let's formulate this in Mathematica. Let's have an object s that I'm going to say is Flat.
I'm going to define various rules for s of various sequences of elements. And then all I'm going to do at every step is to apply these rules to the sequence of elements I've got, just following
Mathematica's normal how-it-applies-rules scheme.
So let's take the rule s[1,0]->s[0,1,0]. This says "1,0 gets turned into 0,1,0".
OK? Let's see what that does.
Well, we can make a picture that corresponds to the outcome that I got here, and it says that at every step that rule just prepends a "0" onto the string that we have. OK?
Well so that's an extremely trivial example of what Mathematica can do, so to speak. Let's try looking at another rule. Let's look at this rule here. This rule has two pieces in it. But it's, again,
it's the same structure of rule.
Let's see what that does if we just start it off from some string. So here's what that does after a few steps.
So you can see it's doing something a little bit more complicated. And again what this is doing is it's just doing a sort of minimal version of what Mathematica does when it applies transformation
rules. Here's a slightly bigger version of what happens with that rule.
So again, just like our intuition might have suggested, if we have sufficiently simple rules here, we're doing the minimal version of what Mathematica does, the results are simple just like the rules
are simple.
Let's try another example. Here's another rule.
OK? It's slightly more complicated. Now there are three replacements in our list of Mathematica rules. Let's see what that one does.
A little bit more complicated. You can just see it going down the left. It's a little bit more complicated. You can see it's doing this sort of nested, kind of fractally thing. But again, it's
comparatively simple.
OK, let's try another example. This is another set of sort of minimal Mathematica rules. Pretty simple set of rules.
We could just write it down in Mathematica notation so to speak. In Mathematica notation, that rule would just be this thing here:
Let's see what that rule does if we apply it to some particular initial condition. Say we start off with "1 0 1".
OK, that's what this rule does. Pretty simple rule as I showed, pretty simple initial condition. But that's the result it generates. Let's try running it for a bit longer. So what I've done here by
the way is I've kind of folded this around so it goes down the first column then up to the top of the second column, and so on. Let's try running it, say just for a few hundred steps.
There's what we get. Again I've folded these columns around. So this is again an example, just like in my rule 30 cellular automaton, of a case where we have very simple rules, we start off with a
very simple initial condition, yet the behavior we get is something very complicated.
And notice that this is a minimal idealization of Mathematica. This is not something like a cellular automaton that we got from somewhere else. This is something that just comes from essentially
looking at an idealization of Mathematica, and in a sense an idealization of mathematics.
So what can we say about pictures like this.
Well, one pretty much immediate thing that one can say is that it can end up being undecidable what actually ends up happening in a picture like this.
I don't think I have time to explain lots about universal computation, and about a much stronger phenomenon that I call computational irreducibility.
But let's see what it might mean to say that something is undecidable in a system like this one.
Let me use a cellular automaton as an example. Let me pick a particular cellular automaton and let's start off running the cellular automaton with an initial condition that has just a single gray
square in it. Well it's very easy to decide what will happen to the cellular automaton. After a little while the activity will halt, and the pattern will die out.
But let's say we change the initial condition
a little bit. Instead of having just a single "1" [light gray], we have a "2 3" [dark-gray cell, black cell] in the initial conditions.
So that's what it looks like then. It's a lot less clear what's going to happen then. Let's run it for 300 steps for example.
Is it going to die out? Is it not going to die out? How can we tell? Let's try running it for say 1200 steps. OK, and here's the result we get.
Here's the result we get from that. . . scroll down. . . well we see that sometime before 1200 steps, after doing all kinds of crazy things, this pattern eventually died out.
Well let's take another initial condition, let's say a pair of ones [pair of light-gray cells].
Let's see what happens under those circumstances. Here's what we get then.
And actually, I don't know what happens in this case. I can follow it for perhaps a million steps or so and it's still just kind of globulating around, and it sort of hasn't made up its mind about
what it's going to do. It's not clear whether it's going to halt or not. And this is sort of a quintessential example of something where, the question of whether it ultimately halts or not, is
something that will be formally undecidable. Well, how can one think about this?
If one thinks about trying to work out what will happen in a system like this, what one does when one tries to predict the system is one tries to have some kind of device that will outrun the actual
evolution of the system. One can try to find some clever system that will be able to predict whatever the cellular automaton will do before the cellular automaton knows it.
But the best kind of predicting device will end up being a universal computer. It turns out that this cellular automaton itself is almost certainly a universal computer. For those of you who know
about these things, that's actually a rather surprising and stunning fact. But I won't go into that right now.
Anyway that means that one will never be able to have a predicting device that will systematically outrun the actual cellular automaton.
So one will never be able to tell for sure what will happen after an infinite time, except in effect by spending an infinite time figuring it out, just like the cellular automaton does.
And that's why one says that the halting problem of telling whether a pattern will ever die out is undecidable.
Well OK. So this undecidability thing also happens in the string rewriting systems that I talked about before.
And it also happens in dealing with transformation rules in mathematics. But the point is that one doesn't need all the hair associated with actual ordinary mathematics to get this phenomenon. It
already happens in our very simple idealization.
But back to the idealization.
This string rewriting thing that I've talked about is a decent idealization of Mathematica and of the calculational style of mathematics.
But what about of the proof style of mathematics?
Well, it turns out to be pretty easy to get an idealization of that too.
The whole thing is that in the string rewriting system that I did above, I always did things in sort of Mathematica style.
At each step, I just scanned the string, and applied the first rule that I could.
So at every step, there was a definite answer.
But in doing proof-style mathematics, one wants to do something different.
One wants to make what I call a multiway system, where one looks not just at a specific outcome from doing string rewriting in a particular way, but instead at all possible outcomes.
So let's see how that works. Here's an example of a multiway system. So the idea is that it's also a string rewriting system. This particular one uses these rules here--one square can get rewritten
to one square, or one square can get rewritten to a pair of squares.
But instead of just rewriting the string, instead of just keeping the first rewriting that we end up doing, what we do is to look at all possible rewritings of the strings. So that means every little
box here corresponds to one of the strings that can be generated, and there are arrows joining the boxes to show what can be derived from what by doing rewritings.
OK, so let me show you another example just to make clear what I'm doing here.
This one that I have here is a kind of Fibonacci-style string rewriting system, where white gets rewritten to white-black, and black gets rewritten to white. And so it's fairly easy to work out that
there will be
So what's the analog of a proof in this kind of system?
Well, it's actually pretty simple. A proof is like a path on this network. A proof is something that shows one that one particular string can be derived from another string. Remember that the axioms
were these transformation rules here, which showed how strings can be rewritten. And the point is that what one is trying to do when one derives theorems is essentially to make up sort of
super-transformation rules--things that for example say well if you have the string white-black-white, it can get rewritten to white-white-black-black-black down here.
And one could then add that as an axiom, to say that white-black-white can turn into this thing and then see what could be produced that way, and in the end the set of strings that one would generate
would be the same as the set of strings that one generates just by following these axioms, but one would be able to do it quicker.
So essentially adding theorems, deriving theorems and adding theorems allows one to collapse the network, and get new theorems more easily.
Well, what can one tell from looking at these multiway systems? One question is, what do they typically do? These ones that I've shown you here do pretty simple things. But let's see what the typical
multiway system does. One can sort of pick here essentially the axioms for the multiway systems at random and one can ask, what then is the resulting behavior of the network of theorems associated
with this multiway system. So here's an example of a rule for a multiway system, and let's see what that one does.
That one actually does something quite complicated. It's a pretty simple rule, yet again, like all of my things here. But what it does is quite complicated.
And, for example, I can ask many kinds of things about this system. For example, I could ask, starting off from this particular initial string, "when does the string that just consists of a single
square first occur?" And the answer is it takes quite a few steps before that short string first occurs. Actually if you look at all possible multiway systems, most either don't generate many strings
at all, or generate very rapidly increasing numbers of strings.
But it's not hard to find these weird and funky multiway systems that don't generate terribly many strings, but the number of strings they generate varies wildly with time. They don't have to have
terribly complicated rules, and so it's fairly clear, from the fact that you can get multiway systems like this, that it ends up being essentially impossible to tell how long a chain of strings one
will have to go through to get even from some fairly short string to some other short string.
And that's essentially directly analogous to the observation in mathematics that even pretty simple theorems can have very long proofs. But we're already seeing that phenomenon in our extremely
simplified model of mathematics.
And because we've got a nice simple model of mathematics, we can ask questions about what the distribution of lengths of proofs for propositions of certain lengths is. For example, this is a picture
that shows those strings which are eventually generated which correspond essentially to those propositions that turn out to be true, and that turn out to be provable. And it shows how far you have to
go before you can generate those things.
I might say by the way, let me just show you another example of a slightly long proof in one of these systems. This is a written out proof in one of these systems. This says you rewrite a white
square to black-white-black, you then rewrite the white square in the middle there to black-white-black, in this way, and you're kind of writing out a proof here, to show how you get from a white
square to a black square.
And in this particular system there's actually more than one way to get from a white square to a black square, actually these are all the possible shortest proofs that correspond to getting from a
white square to a black square.
OK. Let's go back to talking about phenomena in mathematics and how they relate to phenomena in this kind of simplified idealization of mathematics.
Well one very famous phenomenon in mathematics is Gödel's Theorem. And I thought I might tell you a little bit about how Gödel's Theorem is related to what I've been saying.
Well I have to admit something about the models I've been using so far. They are a bit different from what people consider usually as mathematics, because they don't have logic, they don't have a
notion of logic explicitly built into them. As I mentioned before, most of traditional and proof-based mathematics is ultimately based on logic.
I'm not sure that's the best thing, but that's the way it is.
The kinds of multiway systems that I've been showing you so far, are fine ways of representing relations in mathematics--but not quite theorems.
Actually some people may recognize my multiway systems as being things that are sometimes called semi-Thue systems.
And these are essentially semi-semigroups. You see, you can think of the transformations that I'm doing here as being like relations between words in an algebraic structure. I'm going to get slightly
technical here but it might be of interest to some people and help people to understand what's going on.
In my systems the transformations can be absolutely anything. But if every transformation can be applied in either direction then it corresponds to the relations of a semigroup. And if in addition
there are inverse elements allowed, one has a group. Now, being even more technical, I should say that the pictures I'm making you might think were Cayley diagrams, they're not. They're actually a
kind of lower level construct. They're pictures of sort of what's inside each node of a Cayley diagram--of all the words that are equivalent, say, to the identity.
OK. Well, some of you may know about the undecidability of the word problem for groups. That's essentially the same statement I'm making--about the paths that go from one word to another by applying
transformations being arbitrarily long.
Let me talk about a more familiar example. Let's think of the poor Simplify function in Mathematica. It works by applying various transformation rules to an expression. Let's say that we're trying to
determine whether some particular expression is zero. Well, some transformations may make the expression smaller, but some may make it bigger. And we can end up having to go on an arbitrarily long
path in the attempt to find out whether our expression is zero or not. And actually that's the same phenomenon I've shown you too.
OK. But back to the questions about theorems. I've said that these multiway systems of mine don't really have logic built into them. The underlying problem is that they don't have explicit notions of
True and False. They're just dealing with expressions, and transformations between expressions. If those expressions are supposed to be propositions--candidate theorems in some kind of
mathematics--they should presumably be either True or False.
So how do we set this up? Well, it's really very easy. We just need some operation on strings that's like negation, and that turns a True string into a False one and vice versa. And so as an example
the operation might just reverse the color of each element.
OK. So now how do we find all the True theorems? Well, we just start from the string that represents True, let's say a single black cell, and then we find all the strings that we can derive from it.
But. . . there are some nasty things that can go wrong. If our rules--our axioms--aren't sensible, we might be able to derive two strings that are related by negation. And that would mean that we
have some kind of inconsistency. There are two theorems we're saying are both true, but they are negations of each other.
And most mathematicians would then say that the axiom system one's picked is no good. But let's say we avoid this problem, and that we are sure nothing inconsistent like that happens. Then are all
axiom systems with this property decent axiom systems for mathematics?
Well, there's another problem. And that problem is completeness. Given a theorem, can we actually determine from the axioms whether the theorem is True or False? Well, in terms of our networks that's
an easy question to formulate.
We want to know whether anywhere on the network our particular proposition, our particular theorem will appear, starting from the string that corresponds to True. Well, it's pretty obvious there are
lots of axiom systems where tons of strings will never appear--axiom systems that are utterly incomplete. But what would a mathematician think of one of those axiom systems? They'd probably think for
many purposes that they were kind of stupid.
Let's say one had an axiom system that was supposed to have something to do with arithmetic. And it was all very nice. But it couldn't prove that addition is commutative. Mathematicians would
probably say that that was a dumb axiom system; that they need a more powerful axiom system.
OK, so here's the issue: is there any axiom system that's always powerful enough to be complete? Well, it's actually quite easy to find one.
Here is an example of an axiom system that is complete and the list of strings that it generates. The axiom system has two rules in it, like this,
it starts off from the string that represents True. And what one finds is that it generates exactly half of the possible strings of a given length. Let's see, I have to figure out what the
interpretation of this thing is--but anyway, it generates exactly half the strings of a given length, and you can apply negation to each of these strings and find out that it doesn't generate the
string that is the negation of that string.
OK. This axiom system is both consistent and complete, but it's fairly trivial. Well how about something that has all the richness of a real area of mathematics?
Well, Hilbert showed that Euclidean geometry is an example: something that is complete and consistent. And people thought in the early part of this century, that all other serious mathematical axiom
systems would be the same way.
But what happened in 1931 was that Gödel showed that that wasn't true. He showed that Peano arithmetic was incomplete--that there were propositions in Peano arithmetic that couldn't be proved in any
finite way.
Again, one might say, so what? I mean, in what's called Robinson arithmetic one has a fine set of axioms, but it so happens that it's fairly obvious that the system isn't complete. Because, for
example, you can't prove that addition is commutative. So why is it a big deal that Peano arithmetic isn't complete?
Well, it turns out that it's really a quantitative issue, not a qualitative one. The point is that Peano arithmetic is very very nearly complete. I mean, all the simple propositions are connected--or
their negations are connected--to the big network of all True propositions of arithmetic. But what Gödel showed is that there are some propositions that aren't connected to this big network of true
propositions of arithmetic. It's a network that might look something like this, except that it grows much more rapidly at every step.
It's a bit of a long story, but basically Gödel used the idea of universal computation to compile the statement "this statement is unprovable" into the very low-level machine code of Peano
arithmetic, and that's how he got his result. But the machine code of that particular expression--the proposition in sort of algebraic terms that's equivalent to "this statement is unprovable"-- is
incredibly long. So that's what was so surprising about Gödel's Theorem. That Peano arithmetic could seem almost complete--but not actually be complete.
Well, one might ask whether Gödel's statement is the shortest statement that isn't provable in Peano arithmetic. And it's been known for some time that it's not. There are some slightly simpler ones.
But they're still incredibly complicated. They're nothing like Fermat's Last Theorem, or some kind of question that mathematicians might seriously ask.
Well, as probably most of you know, Gödel's Theorem was a big deal when it came out, because it showed that Hilbert's ideas about mathematics, and about being able systematically to establish
everything from axioms, weren't going to work. But as a practical matter, in the past 68 or so years, nobody in practical mathematics has ended up paying all that much attention to Gödel's Theorem.
It's always seemed too remote from the kind of questions that mathematicians actually end up asking. And the reason for this is that in Peano arithmetic the simplest incomplete propositions--that are
known at least--are really complicated.
Well, I happen to think there are some somewhat simpler ones--I even have a potential candidate for one. But still, Peano arithmetic is basically a special almost-incompleteness-safe axiom system.
So. . . what does this mean about our efforts to find simple models of mathematics? Is this feature of Peano arithmetic something that's very hard to get? And that requires all the hair of Peano
arithmetic and that's important in having something that is a realistic model of what mathematics could be.
One might think so. But it's actually not true. Let me just say that if one searches through simple multiway systems, one can find other examples. Some systems are obviously inconsistent; some are
obviously incomplete.
But it's not hard to find ones where incompleteness happens sort of arbitrarily far away. So there's nothing special about Peano arithmetic in this respect either.
So, OK. What are all these axiom systems that seem to have all the familiar math properties? Well, you've never heard of any of them. They're just not things that have ever been reached in the
history of human mathematics. And there's a huge huge world of them out there. Some have easy interpretations to us humans. Some don't. They're sort of alternative mathematicses. So how does one
compare these mathematicses to the mathematicses we're familiar with?
I mean, each field of mathematics has a different character. Even within the mathematics that we are familiar with, different fields have different characters. How does that show up in the
idealization of mathematics that I've been talking about?
One can think of building this network of theorems for each field. Say for number theory. Or geometry. Or functional analysis, or something like that. And then one could for example start asking
simple questions like, what the statistical properties are. How connected it is? How many long paths there are in it?
Let me show you for instance for Euclid, Euclid's Elements. This is the dependency graph of the propositions of Euclid's Elements, in Book 1, Book 2, Book 3, etc.
You can unravel these to some extent. I haven't got the best unraveling yet. Euclid was a little bit optimistic in what needed to have what depend on. But anyway, this is the network that you get
from looking at the dependencies in Euclid's Elements.
I was showing you this rather messy dependency network. This is the picture of the particular theorems Euclid thought were interesting that you get from the multiway system that is geometry. It
reminds me of a quote from Gödel, when asked about mathematics, what the role of a mathematician was, his idea was that the role of the mathematician was to work out what theorems are interesting.
That a machine can generate all the possible theorems, but the role of a mathematician is to work out what theorems should be considered interesting.
Anyway, what do the properties of this kind of network mean in mathematics? When people do mathematics they talk about deep theorems, powerful theorems, easy lemmas, elegant theorems those kinds of
things. And it turns out that each of those kinds of things, one can start to try to define in a precise way in terms of the properties of a network like this.
And I think if one drew the networks for, say, number theory and say, algebra, one would be able to see by looking at them something about the very different characters of those fields.
So one question one might ask is, "What common features are there in the networks humans happen to have thought up for their mathematics?" I don't exactly know right now. That's something for which
one would require quite a bit of empirical data to be able to work out. But the critical question is whether it's all just historical, or whether some of it is a consequence of some general
principle--or perhaps some human limitation or human foible.
It's a little bit like asking about human languages, whether there are all possible forms that appear in human languages or not. The actual observation is that while there are incredibly many
exceptions, typical human languages that people actually understand are not that far from being context free.
Well, it's also interesting to think about what kind of mathematics . . . and thinking about whether the mathematics that we have is necessary, to think about what kind of mathematics the
extraterrestrials might have. People tend to assume that they'll think primes are important.
I think that's a fairly absurd possibility. I'd bet on the rule 30 cellular automaton over the primes any day. And actually I think from my studies of all possible simple programs, I can say a
certain amount about what's likely to certainly exist in the mathematics of any creature. But it's definitely not the primes, I think. I mean, even in human mathematical history, there are lots of
obvious twists and turns. If you talked to Pythagoras, for instance, he'd be much more excited about perfect numbers than about prime numbers. But it so happens that primes have won out in the
history of mathematics in the way it's developed. But I don't think that's in the slightest bit fundamental.
But OK. If we think about building Mathematica, Mathematica does reflect actual history. I mean, we have all those special functions that happen to have been identified in the nineteenth century, and
so on.
And they're very useful. There's nothing wrong with us pandering to the history of mathematics in designing Mathematica.
It's just that it would be nice to find constructs that are as general as possible to represent what happens in humanly chosen mathematicses.
So that's a challenge. Now, if one tries for example to do the proof thing in complete generality, and thinks that the idea of proof in general is the thing that is special about human mathematics,
then one will quickly fail.
I mean, let's say we try to make a function--call it say, FindRules--that takes a list of replacement rules, and instead of just having a single expression that it tries to reduce--it has two
expressions, and tries to find a sequence of replacements from the list that it's been given that will take one expression into the other.
Well, this function will be very unsatisfactory. It'll have to work by doing the same kind of thing that we have to do in one of these multiway systems. And it will have a very hard time trying to
find whether there happens to be a path from one of the expressions that we gave to the other expression that we gave.
We'd be getting tech support calls about this function all the time. And the reviewers would be doing benchmarks and complaining that the function is too slow, and so on. Because--and this is what
I've discovered from looking at random simple programs--much of the time the network of things one can get to from the replacements is a big mess, and there's no quick way to find out what's going
In a sense, people would be reporting the undecidability of the halting problem as a bug all the time. But the question is: can one do better?
And there are two ways we might be able to do better. One is by using essentially a directly historical approach, and basing everything on the kinds of rules that happen to have shown up in
arithmetic and geometry and all the things that have been derived from them.
But maybe, just maybe, one can do something more general. And to see if one can one has to understand what it really takes to be a human mathematics. I mean, if one looks at arbitrary mathematicses
that come out from the simple rules and simple programs I've investigated, one is usually thrown into undecidability and so on very quickly. And in fact the whole idea of doing proofs and things
pretty much disintegrates in that case.
But perhaps that isn't true for any set of rules that would come from axioms that might realistically be used in human mathematics. It's sort of the following question: if you see a random collection
of axioms, can you tell if they had a human origin?
It's again a little bit like asking about languages. If one has an approximate formal grammar for a language--a natural language or a computer language--can one tell if it was set up or used by
humans? Or whether it was just randomly chosen?
I'm not sure about the answer to this for mathematics, and it'll be interesting to see. One thing I am sure about is that if one doesn't put on whatever constraints are associated with human
mathematics, there's an awful lot of other mathematicses out there to investigate. Mostly they can't be investigated by the same methods as the ones that have been used so far in history in human
mathematics. Mostly I think the proof idea for example won't work. And instead one's left with--just as in most fields of science--is mostly finding out what's true by doing experiments, not by
making proofs about what has to be true. And of course in doing those experiments that's something that Mathematica can do very well in its present form.
But even though there's a lot of other mathematicses out there that one might investigate--with all sorts of weird and wonderful properties--that doesn't mean there's no more to do with our regular
old human mathematics. There certainly is. And I hope it can be done very well with Mathematica.
But as one tries to see how to do it, one has to understand what one is doing, and the fact that the mathematics that one is investigating is I think very much a historical, cultural, artifact. It
happens to be a very wonderful artifact. Intellectually I think it's much deeper than for example systems of human laws, or our schemes for art or architecture and things like that. But nevertheless
an artifact.
And that means that one cannot somehow expect it to be typical of all the possible mathematicses, or of what might happen in nature.
But that's what my new kind of science is trying to deal with: working out what all those possibilities are, what all possible mathematicses might do. And that's what I've spent the past eight years
or so investigating, and in finding out how the mathematicses that are sampled by the natural world--what those are like and what their properties typically are.
Well, I should probably stop here. As is usual I've probably gone way over time. And I'm afraid this has gotten very abstract, but perhaps I've managed to communicate at least a little bit about a
few of the things I've been thinking about. So I thank you all very much.
Copyright © 2001 Wolfram Media, Inc. All rights reserved. | {"url":"http://www.mathematica-journal.com/issue/v8i2/features/foundations/contents/html/index.html","timestamp":"2014-04-19T19:34:01Z","content_type":null,"content_length":"105742","record_id":"<urn:uuid:0c6e6f02-6d0e-4787-a86c-76ae8ed8b6ec>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Biblical value of pi
From Iron Chariots Wiki
The Bible seems to claim that the value of π (pi) is 3.
1 Kings 7:23 says:
"And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about."
Atheists sometimes use this passage to demonstrate that the Bible contains a mathematical error, despite the fact that it is supposedly divinely inspired and inerrant. Since the circumference of a
circle is π times its diameter, a circular sea could only be ten cubits across and thirty cubits around if π = 30 cubits ÷ 10 cubits = 3, rather than the true value 3.14159265....
Apologetic response
Unfortunately, this claim is easily refuted in a few different ways.
1. The Bible doesn't claim that the sea was a perfect circle, only that it was "round"; it could have been slightly ellipical and 10 cubits was its longer dimension.
2. The 30 cubit measurement may have been the interior circumference while 10 cubits was the diameter from one outside edge to the other. That is, the thickness of the "brim" accounts for the
3. The passage only implies the wrong value for π if you assume (probably unwisely) that the numbers given are accurate to more than two significant figures (i.e., that they equal 10.0 and 30.0,
respectively, when rounded to the nearest tenth). Otherwise, there is quite a large range of possible values implied. If the numbers are only accurate to the nearest unit — surely an acceptable
assumption — the implied value could be anything from 2.81 (≈29.5/10.499) to 3.21 (≈30.499/9.5), a range that clearly contains the true value of π. (In other words, the measurements can both be
correct, and the shape perfectly circular, if the numbers are simply being reported to the nearest unit.)
While some atheists like to cite this as a demonstration against strict Biblical literalists, as we could certainly expect greater precision if the words of the Bible come directly from a god, the
argument tends to be viewed as trivial. This argument is certainly vastly overshadowed by the wealth of other errors, contradictions, ambiguities and atrocities contained in the Bible.
It is also worth noting that the cubit itself was an inherently ambiguous unit, being based on the length of the human forearm. Our ancient friends simply did not possess the accuracy of measurement
that we do today.
v · d Arguments against the existence of god
Existential arguments Argument from nonbelief · Who created God? · Turtles all the way down · Problem of non-God objects · Argument from incompatible attributes · No-reason argument · Santa Claus
Arguments from the Failed Prophecy · Biblical contradictions
Reasonableness Occam's Razor · Outsider test · Argument from locality · Argument from inconsistent revelations
Moral arguments Euthyphro dilemma · Problem of evil · Problem of evil (evidential) · Moral argument
Other arguments Emotional pleas against the existence of God | {"url":"http://wiki.ironchariots.org/index.php?title=Biblical_value_of_pi&oldid=15125","timestamp":"2014-04-20T08:59:25Z","content_type":null,"content_length":"22205","record_id":"<urn:uuid:9bbf155b-c021-4f8a-a13e-8f75b1866b8d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
November 17th 2006, 11:47 AM #1
MHF Contributor
Oct 2005
Ok as some of you know, this number:
has two prime factors and RSA will give $30,000 to the person/team who can find them.
Who here knows some basic number theory on this problem and why it is so hard to factor?
Ok as some of you know, this number:
has two prime factors and RSA will give $30,000 to the person/team who can find them.
Who here knows some basic number theory on this problem and why it is so hard to factor?
It's not divisible by 2, 3, or 5. Okay, I've done my part.
It's hard because it doesn't fit in most calculators
Ummm, thanks guys. I suppose.
Obviously this number is semi-prime, and it's only factors besides 1 and itself are $p_1$ and $p_2$. So I think it's safe to say 2,3, and 5 are out, as well as any other prime less than 100
Any ideas?
At least you don't have to look farther than the sqrt of the number, which in this case is 8.60450832 × 10^105
Well I had hoped to get some serious thoughts on this, but c'est la vie. Thread closed.
Umm, I will first try Fermat's Factorization Method, find the smallest square exceeding this and keep subtracting this number. (But this is only useful when the two factors are adjacent).
You can also try the Pollard pho primality test. But I am not too familar with it, in fact, I am not familar too well with primality testing.
November 17th 2006, 12:57 PM #2
November 17th 2006, 01:18 PM #3
November 17th 2006, 01:23 PM #4
MHF Contributor
Oct 2005
November 17th 2006, 04:07 PM #5
November 18th 2006, 11:11 AM #6
MHF Contributor
Oct 2005
November 18th 2006, 01:45 PM #7
Global Moderator
Nov 2005
New York City | {"url":"http://mathhelpforum.com/number-theory/7702-rsa-704-a.html","timestamp":"2014-04-25T01:19:42Z","content_type":null,"content_length":"47682","record_id":"<urn:uuid:923e1023-6c0b-4cb7-8176-f5f8870b16b8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perfect squares of n and n+99
July 7th 2010, 09:47 AM
Perfect squares of n and n+99
How many positive integer solutions are there n such that both n and n+99 are perfect squares?
I believe there is only 1 solution.
Any tips on how to tackle this? I haven't really come up with a method or a formula to calculate it, simply gone through the first 20 squares, and I doubt there will be anything after that :o
Please help out, thanks
July 7th 2010, 09:56 AM
i tried the first couple on paper only the number 1 is working. so i guess that it
July 7th 2010, 10:11 AM
How many positive integer solutions are there n such that both n and n+99 are perfect squares?
I believe there is only 1 solution.
Any tips on how to tackle this? I haven't really come up with a method or a formula to calculate it, simply gone through the first 20 squares, and I doubt there will be anything after that :o
Please help out, thanks
I can't provide you with an equation or algorithm but I used an Excel table and have found: $<br /> \begin{array}{cccc}nr&sqr&NR&SQR\\1& 1& 10& 100\\<br /> 15& 225& 18& 324\\<br /> 49& 2401& 50&
Now the difference between the squared numbers is 1. So there can't be any other pair of numbers which satisfy the given conditions.
July 7th 2010, 10:22 AM
How does 49 satisfy the conditions?
You must have misread my question
take n as 1
1^2 is a perfect square
(1+99)^2 = 100 = 10^2
49 does not suit this
15 does not suit this
July 7th 2010, 10:25 AM
I can't provide you with an equation or algorithm but I used an Excel table and have found: $<br /> \begin{array}{cccc}nr&sqr&NR&SQR\\1& 1& 10& 100\\<br /> 15& 225& 18& 324\\<br /> 49& 2401& 50&
Now the difference between the squared numbers is 1. So there can't be any other pair of numbers which satisfy the given conditions.
$n \in \{1, 225, 2401\}$ is correct. The answer can be obtained with a Diophantine equation.
$n = k^2$
$n+99 = m^2$
$k^2-m^2+99 = 0$
See Dario Alpern's quadratic Diophantine solver and plug in $a=1, c=-1, f=99$ and select "step-by-step".
Edit: Actually, the steps shown for this particular problem aren't as illuminating as they often are. Not sure the best way to work it out on paper.
Edit 2: See Soroban's post.. seems so obvious now.
July 7th 2010, 10:28 AM
Well, as usual, I've made your question a bit to difficult.
So take
$225 +99 = 324 = 18^2$
$2401+99 = 2500 = 50^2$
July 7th 2010, 10:36 AM
that's taking it as n^2+99 but the actual question is n+99
July 7th 2010, 10:39 AM
By the way, I worked it out by brute force initially as well using PARI/GP which is a great tool (and hopefully you won't get spoiled by it). I wrote
Could also have used
which ought to be a bit more efficient.
July 7th 2010, 10:42 AM
July 7th 2010, 10:55 AM
Hello, Mukilab!
How many positive integers $n$ are there
such that both $n \text{ and }n+99$ are perfect squares?
The answer is 3.
We have two squares that differ by 99: . $a^2 - b^2 \:=\:99$
So we have: . $(a+b)(a-b) \:=\:\begin{Bmatrix}99\cdot1 \\ 33\cdot 3 \\ 11\cdot 9 \end{Bmatrix}$
Solve the three systems of equations:
. . $\begin{array}{ccc}a+b &=& 99 \\ a-b &=& 1 \end{array} \quad\Rightarrow\quad (a,b) \:=\:(50,49) \quad\Rightarrow\quad n = 2401$
. . $\begin{array}{ccc}a+b &=& 33 \\ a-b &=& 3\end{array} \quad\Rightarrow\quad (a,b) = (18,15) \quad\Rightarrow\quad n = 225$
. . $\begin{array}{ccc}a+b &=& 11 \\ a-b &=& 9 \end{array} \quad\Rightarrow\quad (a,b) = (10,1) \quad\Rightarrow\quad n = 1$ | {"url":"http://mathhelpforum.com/algebra/150308-perfect-squares-n-n-99-a-print.html","timestamp":"2014-04-25T01:04:39Z","content_type":null,"content_length":"16590","record_id":"<urn:uuid:a21a2798-f3e9-49f3-814e-12fa6ea838ab>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Kim on Friday, February 15, 2008 at 5:58pm.
I need serious help. Tried to work it out but I just couldn't get the correct answer.
Benjamin and olivia are putting a new floor in their kitchen. To get the floor up to the desired height, they need to add 1 /8ft to subfloor. They can do this in one of two ways. They can put 1/2in
sheet on top of 5/8in board (note that the total would be 9/8ft or 1 /8ft). They could also put 3/8in board on top of 3/4in sheet.
here is a table that gives the price for each sheet of plywood
1/8in $9.15
1/4in $13.05
3/8in $14.99
1/2in $17.88
5/8in $19.13
3/4in $21.36
7/8in $25.23
1in $28.49
1. what is the combined price for a 1/2in sheet and a 5/8 sheet?
2. what is the combined price for a 3/8in sheet and 3/4in sheet?
3. what other combination of sheets of ply wood yields the need 1 1/8in thickness?
4. of the four combinations, which is most economical?
5. the kitchen is to be 12ftx12ft. Find the total cost of the plywood you have suggested in qusetion 4
• Math - Ms. Sue, Friday, February 15, 2008 at 6:27pm
1. $17.88 + $19.14 = ?
2. $14.99 + $21.36 = ?
3. 1 inch + 1/8 inch
and 7/8 inch + 1/4 inch
This gets you started. If you post your answers, we'll be glad to check them.
• Math - Damon, Friday, February 15, 2008 at 6:43pm
I am trying to reverse engineer your statement.
Do you really mean you need to raise the floor 1 1/8 INCH?
I will have to assume that is what you mean because you said that 1/2 inch + 5/8 inch = 9/8 inch was right
1. surely you can add 17.88 and 19.13 = 37.01
2. I am also sure you can add 14.99 and 21.36 = 36.35
3. What else here adds to 9/8?
we could use 1/8+1 (9.15+28.49=37.64)
we could use 7/8+1/4 (25.23+13.05=38.28)
Of course we could use more than two sheets, but the wording of the question implies do not consider the next two:
we could use 9 of 1/8 (9*9.15=82.35)
we could use 3 of 3/8 (3*14.99=44.97)
4. Now the minimum here was 36.35 for a sheet of 3/8 + a sheet of 3/4
5. 12*12 = 144 ft^2
a sheet of plywood is 4*8 = 32 ft^2
so we need 4.5 of these.
If we could get this by the half sheet, the price would be
36.35*4.5 = 163.58 which would be fine for a mathematician but not for a lumber yard which is likely to insist that you buy full sheets, in other words five of each size:
36.35*5 = 181.75
• Math - Anonymous, Monday, September 14, 2009 at 12:56am
1 1/15+3 3/10-2 4/5=
Related Questions
Math/Fractions - I really need help with this problem. I have NO IDEA how to get...
math - The floor tile is one square foot. The kitchen floor was covered with 100...
Algebra - I need help figuring out the rest of this problem. A kitchen pantry ...
math - each kitchen floor tile is 12 inches by 12 inches.a kitchen floor needing...
Science - You walk into the kitchen and see a broken egg on the floor. Which of ...
math - The Rossi family wants to install new tiles on the floor on their kitchen...
psychology - After a serious injury,Lusy has lost her sensitivity to hot an cold...
algebra - I have to find the square footage of my kitchen floor with the length ...
math;) - a kitchen is shaped like a rectangle with dimensions of 11 1/2 ft. by 9...
Math - Mrs Thompson is putting new tile on her bathroom floor. Each tile ... | {"url":"http://www.jiskha.com/display.cgi?id=1203116324","timestamp":"2014-04-17T20:13:47Z","content_type":null,"content_length":"10769","record_id":"<urn:uuid:6d6894c4-de3c-428a-967e-9b948b14d9e1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Central Issues in Foundations 2
Harvey Friedman friedman at math.ohio-state.edu
Thu Sep 9 10:25:34 EDT 1999
This is a continuation of the Central Issues in Foundations series.
Previous ones include
#1. 4:18AM 9/1/99.
In #1, I announced that I was posting such a series on the FOM, and gave a
brief discussion of a very well known and obviously central issue in
foundations of computer science, which I will refer to simply as focc
(foundations of computational complexity). We will write cc for
computational complexity, which is the mathematical development spawned by
focc. I discussed this foundational topic in the way that I did in order to
set an example. Not of
*some arcane technicalities of interest only to a small number of insulated
but rather a topic of great general intellectual interest and of obvious
foundational character. And I chose focc because it is a foundational topic
of a mathematical character, which is not properly construed as a problem
in the foundations of mathematics. There are great foundational issues that
are not issues in the foundations of mathematics.
Even a casual glance at this topic shows that focc is of a fundamentally
different character than the preponderance of typical current research in
mathematical logic, or mathematics generally. It looks more like, say, the
inception of recursion theory, than any current practice of recursion
theory. The proper analogy is that cc looks like mathematical logic today,
but with significant differences. E.g., the biggest open problems in cc are
directly connected with focc.
In fact, when focc originated - say in the 1960's - many voices in the
mathematics community did not really regard it as properly belonging to the
mathematics community. It was all too easy for people to say "I want to
prove theorems, and not study how hard it is to construct something via an
algorithm. Once it is constructed, I move on to the next real mathematical
problem, rather than dwell on side issues." However, the great general
intellectual interest of focc was widely recognized by more reflective
people, and both focc and cc became very well known, and are respected
components of the computer science community.
In fact, a case can be made that the P = NP problem (and close variants) is
currently regarded as the most significant open mathematical question by
the scientific community - although it must be recognized that large
portions of the scientific community probably do not recognize the
significance of mathematical questions.
And how did the P = NP problem and its close variants get its present
status? Through its great general intellectual interest.
DIGRESSION: In fact, focc/cc really got on the map when a kind of "reverse
complexity" was introduced. The analogy with "reverse mathematics" is quite
instructive. Suppose you show that a lot of problems are in NP (i.e., can
be solved in nondeterministic polynomial time on the asymptotic Turing
machine model). You want to know if you can "reverse" these problems. You
show that a surprising number of them are NP complete. This can be viewed
as showing that you can simulate the effect of any NP computation using
that problem as an oracle. Similarly with regard to many other natural
complexity classes. [I leave it to someone else to polish up this rough
statement of an analogy between reverse mathematics and this essential
aspect of focc]. In any case, the "reverse" idea is at the heart of a
variety of mathematical topics of great importance, including the early
stages of recursion theory, focc/cc. END.
Now of course virtually any subject periodically needs to renew itself,
often by returning to the original issues that generated it. Cc has often
renewed itself in this way, and spawned additional areas of investigation
with their own additional foundational issues. E.g., circuit complexity,
computational geometry, cryptography, etcetera. I don't intend to get into
a thorough discussion of foundations of computer science here, but let me
say two things:
1. Many topics in the foundations of computer science form a model of
foundationally driven mathematical work that people in mathematical logic
and many other areas of mathematics could learn from. The volume of
foundationally driven current work in the pure mathematics community is
much lower.
2. Not all topics in theoretical computer science form such a model of
foundationally driven mathematical work. In fact, is to be expected that
over the years, many specialized topics in theoretical computer science
have emerged, with their
*own arcane technicalities of interest only to a small number of insulated
a phrase I have used above.
3. But there is plenty of stuff in 1 to concentrate on, rather than dwell
on 2.
Enough about foundations of computer science in this posting.
A constant critical examination of mathematical constructions and projects
in light of well defined purposes and goals of general intellectual
interest. E.g., this kind of reflection led to the placing of resource
bounds on the action of Turing machines, where one moves from recursive
sets to, e.g., polynomial time recognizable, nondeterministic polynomial
time recognizable, and polynomial space recognizable sets of finite
strings. Nobody knows for sure whether these classes are all the same or
whether they are all different. People really care - because of the general
intellectual interest.
As I have stressed many times in previous postings, the current subject
called mathematical logic, with its fairly representative division into
four subareas, arose out of work on specific mathematical problems
emanating out of the consideration of issues in the foundations of
mathematics. Mathematical logic, through its four subareas, has developed
over the years with steadily less connections to issues in the foundations
of mathematics. This is a trend that has been going on for many decades,
and by now, the connections with f.o.m. are typically remote to nonexistent.
In particular, there have been several open challenges on the FOM to the
effect of asking for a clear statement as to specifically what foundational
issues are being addressed by specifically what developments in
mathematical logic? I think that no one has attempted to meet these
challenges to date. This challenge has been most frequently made with
respect to recursion theory. However, this challenge is also appropriate
for the other three subareas - set theory, model theory, and proof theory.
Now let me say right off the bat that all four of these subareas share the
following feature: that the preponderance of the current work is not
foundationally driven. I.e., not motivated by issues in the foundations of
mathematics. And that the evaluation of research is not made with reference
to issues in the foundations of mathematics.
In fact, the tendency is to evaluate whatever foundationally driven work
there is in mathematical logic solely with regard to its purely technical
content, divorced from its foundational purposes. For example, this is like
dismissing the original work on the verification that many different models
of computation lead to the same class of partial recursive functions as
"simply routine" without regard to the crucial foundational issue being
addressed. Or dismissing the original work formulating the standard axioms
and rules of inference for intuitionistic predicate calculus as "trivial"
because there is no nontrivial theorem involved in this formulation. Or
dismissing the Godel 2nd incompleteness theorem as "good but not
outstanding" because the technical complexity of the proof is significant
but not great compared with outstanding results from number theory. Another
way of dismissing such foundationally driven work is to cite the lack of
applications of these developments to the ongoing practice of core
The previous paragraph illustrates a principal way in which the
foundational aspects of mathematical logic get expunged from the practice
of mathematical logic. What gifted person is going to spend their career
pursuing foundationally driven research when, in the evaluation process,
the foundational component is going to stripped clean in favor of the
technical component (or the applied component)? In foundationally driven
research, the technical component is there simply to serve the foundational
Having said that, the situations in these four subareas definitely differ
in detail. For all four subareas, there are central issues in foundations
of mathematics that are closely connected to that subarea, at least in one
or more of these senses:
A. There are many mathematical problems generated by issues in foundations
of mathematics for which the practitioners of that subarea are best
qualified, or at least highly qualified, to work on them.
B. The issues in foundations of mathematics directly impinge on the
suitability of the key constructions that underly the subarea.
The consequence of A is that there may be a large accessible set of
specific problems which are of more general intellectual interest than what
is being commonly being worked on. The consequence of B can be put in a
positive and a negative way. The positive is that there may be an
opportunity to overhaul the basic setup, using experience with the existing
framework, to increase the general intellectual interest of work in that
subarea. In the vernacular, this is called "reinventing oneself." The
negative is that the subarea may become obsolete in favor of a new, related
area of greater general intellectual interest.
Here are some general remarks about the branches of mathematical logic as
currently practiced.
PLEASE NOTE: These general remarks are fluid, and I expect to modify and/or
amplify on them from time to time as I receive comments from people. I
welcome your comments. In particular, if you feel that I have not properly
taken into account certain work in mathematical logic, please let me know.
1. Recursion theory. As for A, there is one topic especially which is
absolutely perfect for recursion theory, and has a growing - but still
limited - following among recursion theorists. As for B, there are some
central foundational issues which are quite difficult, and may not be
accessible to techniques from recursion theory. They are lagely ignored.
They seem to require refined intellectual instincts of a sort that haven't
played any apparent role in the subarea since the initial development of
the subarea. Some of them go right to the heart of the appropriateness of
the objects being intensively studied. The later is a crucial issue for the
subarea, since most of the research concentrates on the detailed structure
of these objects, with a forty plus year massive concentrated technical
effort. There is a recent use of work from the 1960's to differential
geometry by geometers, which has not yet been assimilated by the subarea.
There appears to be no reliable discussion by recursion theorists of this
recent work.
2. Set theory. This subarea was comparatively close to foundations of
mathematics through the sixties and perhaps seventies. At that time, the
obvious technical problems and the obvious foundational issues were
naturally closely connected, and its contact with foundations of
mathematics did not require imaginative reflection to maintain. This is no
longer the case. Currently, there are some parts of the subarea that are at
least partly driven by a foundational outlook. However, this outlook has
the appearance of being rather dogmatic and restricted, and not well
exposited. I have asked a principal practitioner with a foundational
outlook to exposit some key points of this outlook on the FOM, but he has
delayed doing this on account of "the technical complexity of an accurate
exposition of these matters." There is also an unrelated ongoing project
which is quite well conceived, with plenty of points of contact with
classical analysis. But the absolutely crucial issue for the future of this
field is whether or not the set theoretic axioms play a significant role in
normal mathematical contexts - where the objects are relatively concrete -
and the extent and nature of this role. This is all the more crucial and
pressing because the general view of the mathematical community - both at
the conscious and the subconscious level - is that the set theoretic axioms
are completely irrelevant in normal mathematical contexts. In summary,
there is plenty of B to consider, but it is largely ignored. And there is
some A that is currently well under way and thriving.
3. Model theory. This area experienced a major rethinking, and by the 80's
it had transformed itself from a focus on set theoretic contexts to a focus
on standard contexts from concrete mathematics. Some of the earlier work
developed in the set theoretic contexts proved useful in the new concrete
contexts. However, there was a cost associated with this internally
generated reform. A principal driving motive for several (not all) of the
key insiders who transformed the focus of this subarea is the largely
indiscriminate minimization of all forms of mathematical research that are
not closely associated with current core mathematics - either literally or
in spirit. This minimization includes not only the preponderance of current
research in mathematical logic outisde model theory, but also the
preponderance of current research in foundations of mathematics and
foundations of computer science. On the constructive side, this serves as a
kind of antidote for some of the worst features of the other three
subareas. Having said this, we believe that one main thread in this subarea
can be productively recast as a project in the foundations of mathematics
of general intellectual interest. This recasting leads to problems of type
A that are beginning to have a following. It is expected that other parts
of the subarea can be so recast with a similarly productive outcome. As for
B, there are foundational topics even at the most rudimentary level of the
subarea that can be productively rethought from the foundational
perspective. At this time, such foundational topics are ignored because of
the predominant dismissal of foundational thinking by the practitioners.
4. Proof theory. This subarea is much larger and more varied in Europe than
in the U.S., and I need to get a more complete understanding of how it
looks over there. This subarea is certainly the closest to foundations of
mathematics, at least in terms of the perspective of the researchers.
However, it has drifted away from issues in the foundations of mathematics,
having to some extent been caught up in the non foundational culture of
mathematical logic as a whole. The proofs of some key results are in an
unattractive state, at least to outsiders, and a project is well underway
to systematically remedy this by introducing new methods that are more
robust. As for A, there seem to be many opportunities for building on the
known intimate connections with combinatorics, and bounds in core
mathematics. There are also substantial interactions with topics in
theoretical computer science, which are well underway, particularly in
Europe. As for B, there are major issues connected with appropriate
formalizations of mathematics, and the search for significant features of
actual mathematical proofs. The latter topic is underdeveloped, at least in
the U.S.
In the next Central Issues posting, I will substantially expand on each of
these four paragraphs with specificity.
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1999-September/003357.html","timestamp":"2014-04-16T07:22:57Z","content_type":null,"content_length":"17979","record_id":"<urn:uuid:0d13d52e-11fe-40f6-b2a2-e17fc38a9962>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
* * * * * * * VERY IMPORTANT * * * * * *
If you download the MPP software from this Web Page you will receive it in "zipped" form as MPP.ZIP. Once you move MPP.ZIP to your PC, it must be installed by being "unzipped" using PKUNZIP.EXE.
Install MPP on your PC by typing the following:
pkunzip -d mpp
The -D creates all the needed subdirectories that allow MPP to work properly.
MPP (Math Plotting Package) plots functions; shows roots, integrals, slopes being found; plots terms or sum of infinite series; a superb learning and teaching tool for mathematics.
MPP was written by Professor Howard Lewis Penn, with help from Jim Buchanan, Frank Pittelli, et. al. of the U.S. Naval Academy. The information below was excerpted from the file MPP.DOC for version
3.80 of the software:
Mathematics Plotting Package (MPP) contains nine modules written by Howard Lewis Penn with help from Jim Buchanan and Frank Pittelli of the U.S. Naval Academy. The modules are intended to be used in
conjunction with Calculus. This program was written in Turbo Pascal, version 5.5. The program has been compiled and, therefore, Turbo Pascal is not needed to run it. MPP can be run on any IBM
compatible computer with at least 512K of memory and a CGA, EGA, VGA, Hercules board or a board compatible with one of these. The program makes heavy use of color and therefore the authors recommend
the use of a color monitor and an EGA or VGA graphics board.
Since this program was produced by government personnel, no charge can be requested for this disk. You are free to copy this disk and distribute it to other people at your institution as long as
there is no charge other than the cost of the blank disk onto which the program is copied. We do request that anyone at another institution sends two blank, formatted disks to:
Howard Lewis Penn, Mathematics Department U.S. Naval Academy, Annapolis, MD 21402
The ten modules are: 1. MPP (Mathematics Plotting Program) 2. Root (Root finding by Newton, bisection or secant) 3. Integral (Evaluation with trapezoidal, Simpson or Riemann sums) 4. Slope
(Definition of derivative illustration) 5. Implicit (Plots implicit function & tangent lines) 6. Infinite Series (plots terms or partial sums of infinite series) 7. Contour (Plots up to 15 contour
lines) 8. Vector Fields (Plots vector fields) 9. Double Integrals (Rec. or polar coordinates) A. Triple Integrals (Rec., cylindrical or spherical coordinates)
MPP has the ability to save and recall stored files (e.g. MPP(Program) files have the .mp1 and .mp2 extensions, while Root files have the .rt1 extension). It can also print graphs on four types of
printer: most dot matrix, HP Laser Jet and compatibles, Postscript laser, and Okidata dot matrix printers that do not emulate Epson. MPP is a versatile and worthwhile software package. | {"url":"http://uhaweb.hartford.edu/eetsw/mpp.html","timestamp":"2014-04-18T20:45:43Z","content_type":null,"content_length":"3439","record_id":"<urn:uuid:c128dd20-1af9-4721-b86c-f8d2631c0a7c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topology on extensions of topological groups
up vote 5 down vote favorite
Let $G$ and $H$ be two topological groups and let $\mathcal{E}:0 \to G \to E \to H \to 0$ be an extension of abstract groups.
Is there a way to introduce a topology on $E$ such that $\mathcal{E}$ becomes an extension of topological groups? If there is a way, is it unique?
Similarly, let $G$ and $H$ be Lie groups and let $\mathcal{E}:0 \to G \to E \to H \to 0$ be an extension of topological groups.
Is there a way to introduce a smooth structure on $E$ such that $\mathcal{E}$ becomes an extension of Lie groups? If there is a way, is it unique?
Thank you all in advance.
topological-groups lie-groups
add comment
2 Answers
active oldest votes
The answer to the first question is no. In general, the automorphism group of $G$ as an abstract group will be bigger than its continuous automorphism group. For instance, if we take $G$ to
be the additive group of $\mathbb{R}$, we can pick a basis for $\mathbb{R}$ as a $\mathbb{Q}$-vector space and permute basis elements to get many discontinuous automorphisms of $G$. Then
the semidirect product of $\operatorname{Aut}(G)$ by $G$ cannot be made into a topological extension, as conjugation would not be continuous.
up vote 2
down vote I think the answer will still be no even if we restrict to central extensions, although I do not know a counterexample off the top of my head.
@Evan: Thanks for your comment.. – jap Jul 13 '11 at 5:13
add comment
You can describe abstract group extensions of H with G by 2-cocycles of group cohomology. If you have an extension E, you get an induced H-operation on G, by conjugating in E (take any
set-theoretic section of $E\to G$). The extensions of G with H with this H-operation on G are classified up to isomorphism, by the second group cohomology. You get a 2-cocycle corresponding
to E by taking any set-theoretic section $s : G\to E$ of $E\to G$ that maps 1 to 1 and write down the 2-cocycle $c : H \times H \to G$ by $c(h,h'):=s(h)s(h')s(hh')^{-1}$. Then equip the set
$G\times H$ with the multiplication $(g,h)(g',h') := (g+h.g'+c(h,h'),hh')$. More explicitly, this is $(g,h)(g',h') = (g+s(h)g's(h)^{-1}+s(h)s(h')s(hh')^{-1},hh')$. This is again a group
extension and it is isomorphic to E.
One can show that all extensions are of the type I just constructed for a given cocycle, up to isomorphism. The extensions in the same isomorphism class differ only by a coboundary. A good
reference would be Weibel's homological algebra book.
up vote
2 down To have an extension with a topological group structure implies that the corresponding cocycle is continuous and in general, this can not be expected. Observe that continuity doesn't follow
vote from the axioms for 2-cocycles and depends on the topological structures of G and H, which you don't want to change.
So I think, it's wrong in general as well as for central extensions, which would be the case of a trivial H-action on G, where you still don't get any continuity for free. At the same time,
I don't know of any trivial counter-example. You might take any non-continuous map from the real numbers times real numbers to the real numbers and form the corresponding "twisted"
semi-direct product as sketched above. Then you can not get a topological group structure on the extension such that the extension is in the category of topological groups.
As for the smooth case, the same idea applies, where one would need the cocycle to be smooth as well.
Is it evident that "to have an extension with a topological group structure implies that the corresponding cocycle is continuous"? After all, the section $s$ used to produce the cocycle
might be discontinuous. I can see that the cohomology class will contain some cocycles that are continuous in certain places and other cocycles that are continuous in other places, but I
don't see how to get one cocycle that's continuous globally. (Am I just being dense?) – Andreas Blass Jul 13 '11 at 1:19
Thanks all for your comments. Could someone provide some example. – jap Jul 13 '11 at 5:14
add comment
Not the answer you're looking for? Browse other questions tagged topological-groups lie-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/70139/topology-on-extensions-of-topological-groups?sort=newest","timestamp":"2014-04-18T19:02:38Z","content_type":null,"content_length":"57801","record_id":"<urn:uuid:5be9c303-eeea-4e58-8446-f2a73f78e298>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
The topic that we will be examining in this chapter is that of Limits. This is the first of three major topics that we will be covering in this course. While we will be spending the least amount of
time on limits in comparison to the other two topics limits are very important in the study of Calculus. We will be seeing limits in a variety of places once we move out of this chapter. In
particular we will see that limits are part of the formal definition of the other two major topics.
Here is a quick listing of the material that will be covered in this chapter.
Tangent Lines and Rates of Change In this section we will take a look at two problems that we will see time and again in this course. These problems will be used to introduce the topic of limits.
The Limit Here we will take a conceptual look at limits and try to get a grasp on just what they are and what they can tell us.
One-Sided Limits A brief introduction to one-sided limits.
Limit Properties Properties of limits that we’ll need to use in computing limits. We will also compute some basic limits in this section
Computing Limits Many of the limits we’ll be asked to compute will not be “simple” limits. In other words, we won’t be able to just apply the properties and be done. In this section we will look at
several types of limits that require some work before we can use the limit properties to compute them.
Infinite Limits Here we will take a look at limits that have a value of infinity or negative infinity. We’ll also take a brief look at vertical asymptotes.
Limits At Infinity, Part I In this section we’ll look at limits at infinity. In other words, limits in which the variable gets very large in either the positive or negative sense. We’ll also take a
brief look at horizontal asymptotes in this section. We’ll be concentrating on polynomials and rational expression involving polynomials in this section.
Limits At Infinity, Part II We’ll continue to look at limits at infinity in this section, but this time we’ll be looking at exponential, logarithms and inverse tangents.
Continuity In this section we will introduce the concept of continuity and how it relates to limits. We will also see the Mean Value Theorem in this section.
The Definition of the Limit We will give the exact definition of several of the limits covered in this section. We’ll also give the exact definition of continuity. | {"url":"http://tutorial.math.lamar.edu/Classes/CalcI/limitsIntro.aspx","timestamp":"2014-04-17T12:30:12Z","content_type":null,"content_length":"41915","record_id":"<urn:uuid:839d84a5-658f-406b-b65e-767eeacf79c8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
Work Example II
A box with a mass of 1.0 kg is moving to the right. Its initial speed is 2 m/s. After traveling 3 m, its speed is 4 m/s. In additional to the usual forces (gravity, normal, friction) there are two
other forces acting on the box. One is a 5 N force that has an upward vertical component of 4 N and a component to the left of 3 N. The other force is 8 N to the right.
(a) What is the normal force?
(b) What is the coefficient of kinetic friction?
A good free-body diagram is needed here. | {"url":"http://physics.bu.edu/~duffy/semester1/c9_workexample2.html","timestamp":"2014-04-18T08:21:11Z","content_type":null,"content_length":"4976","record_id":"<urn:uuid:5ccf67bb-c5ab-4ecd-8c35-838dc4308ccc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00662-ip-10-147-4-33.ec2.internal.warc.gz"} |
West University Place, TX
Missouri City, TX 77459
Doctor Nick. I tutor all freshman and sophomore mathematics.
...Air Force Academy for 7 years. In addition I taught at the following universities: San Antonio College, University of Maryland, University of Colorado, Auburn University at Montgomery, AL. In
addition, I tutored the Air Force Academy football team in calculus...
Offering 10+ subjects including algebra 1 | {"url":"http://www.wyzant.com/geo_West_University_Place_TX_algebra_1_tutors.aspx?d=20&pagesize=5&pagenum=4","timestamp":"2014-04-24T14:00:06Z","content_type":null,"content_length":"61753","record_id":"<urn:uuid:fc3d1946-083a-4060-8fa3-333ba8155c68>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
<b>Euler's number (e)</b>
Euler's number (e)
Among mathematicians, e is considered to be one of the most important numbers in mathematics, along with pi (p), i (Ö-1), 0 and 1, all of which are linked to the famous and mysterious Euler Identity,
eip + 1 = 0. Moreover, e appears at the core of several important areas of modern mathematics including calculus.
The numerical value of e truncated to 50 decimal places is 2.7182818284590452 3536028747135266 249775724 709369995…..
Scottish theologian and mathematician John Napier (1550-1617), while trying to simplify multiplication by finding a model which transforms multiplication into addition, came up with the idea of
logarithm. The model is almost equivalent to what we know as logarithm today.
Napier created first table of logarithms in 1614 and used a number close to 1/e as the base, although Napier’s definition did not use bases or algebraic equations. Algebra was not advanced enough in
Napier’s time to allow such definition. Logarithmic tables were constructed; even tables very close to natural logarithmic tables, but the base, ‘e’ did not make a direct appearance till about a
hundred years later. Gottfried Leibniz (1646-1716), in his work on calculus, identified ‘e’ as a constant, but labelled it ‘b’.
As with many other concepts, it was Leonhard Euler (1707-1783) who gave the constant its letter designation, ‘e’, and discovered many of its remarkable properties. Euler’s discoveries cast new light
on the previous work, bringing out e’s relevance to a host of result and applications.
During sixteenth century it was also noticed that the expression (1+1/n)n appearing in the formula for compound interest tends to a certain limit – about 2.71828 as n increases.
The value (1+1/n)n approaches ‘e’ as n gets bigger and bigger:
The first 10-digit prime in ‘e’ is 7427466391, which starts as late as at the 99th digit. | {"url":"http://organiser.org/Encyc/2012/8/1/-b-Euler-s-number-(e)--b-.aspx?NB=&lang=4&m1=&m2=&p1=&p2=&p3=&p4=&PageType=N","timestamp":"2014-04-18T23:29:11Z","content_type":null,"content_length":"30547","record_id":"<urn:uuid:0dc425f0-3e40-4d5a-b280-d69c957f7933>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Generalized Sign Test
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Generalized Sign Test
From Steven Samuels <samplerx@earthlink.net>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Generalized Sign Test
Date Fri, 10 Oct 2008 12:21:32 -0400
Tony Lachenbruch pointed out to me that you could have been asking
about a paired data problem. I don't really understand your problem,
so I ask again: What are your data and what do you mean by
calculating "the null value from the estimation period"?
What are your data
On Oct 7, 2008, at 6:50 PM, Steven Samuels wrote:
> -
> I should add that "sign test" and "generalized sign test" are not
> proper terms for what Mai wants to do. Mai wants to test the
> hypothesis in binomial data that the true proportion P = P0, a
> specified value, against H1: P P0. As I stated, Stata's -bitest-
> is designed to do this. I should have added that -ci- will provide
> a confidence interval for the proportion, which would be a useful
> complement to a p-value.
> The sign test is a test for location with continuous, not
> categorical, data; it happens to use the binomial hypothesis test
> for inference. For example, the sign test may be used to test that
> the median of a distribution is equal to a certain value. It
> counts the number of observations which exceed the hypothesized
> median and ignores ties; thus, in contrast to Mai's problem, the
> test sample size may be less than the number of observations. The
> sign test can also test the equality of distributions for paired
> (X,Y) data, by testing the hypothesis that P(X>Y) = 1/2; form Z = X
> - Y and count the number of times Z exceeds 0. This version also
> ignores ties. The sign test is relatively simple to do because of
> the connection to the binomial distribution. However the same
> hypotheses can be tested more powerfully with Wilcoxon's signed
> rank sum test. See: P. Armitage: Statistical Methods in Medical
> Research, Wiley, 1971, pp 395-397.
> Different questions: What are Mai's data and how is a null value to
> be "calculated from the estimation period"?
> -Steve
> The sign test is a nonparametric test applied to continuous data
> -bitest-
> On Oct 7, 2008, at 3:15 PM, mai7777 wrote:
>> Hi,
>> Is there a way in Stata to perform a generalized sign test which
>> allows the null hypothesis to be different from 0.5. I am using it
>> for
>> an event study and I would like the null to be calculated from the
>> estimation period rather than a standard 0.5.
>> Thanks
>> *
>> * For searches and help try:
>> * http://www.stata.com/help.cgi?search
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
Steven Samuels
18 Cantine's Island
Saugerties, NY 12477
EFax: 208-498-7441
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-10/msg00577.html","timestamp":"2014-04-21T07:18:57Z","content_type":null,"content_length":"7793","record_id":"<urn:uuid:e84e6925-31a9-4d2c-9506-2473aa0821c7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pullies and Tension
September 18th 2009, 01:51 PM #1
Junior Member
Mar 2009
Pullies and Tension
In earlier days, horses pulled barges down canals in the manner shown in Fig. 5-47. Suppose the horse pulls on the rope with a force of 8000 N at an angle of θ = 25° to the direction of motion of
the barge, which is headed straight along the positive direction of an x axis. The mass of the barge is 9600 kg, and the magnitude of its acceleration is 0.15 m/s2. What are (a) the magnitude and
(b) the direction of the force on the barge from the water? (Measure the direction clockwise from the direction of motion, and give your angle to the nearest degree.)
Fig. 5-47
Problem 48.
In earlier days, horses pulled barges down canals in the manner shown in Fig. 5-47. Suppose the horse pulls on the rope with a force of 8000 N at an angle of θ = 25° to the direction of motion of
the barge, which is headed straight along the positive direction of an x axis. The mass of the barge is 9600 kg, and the magnitude of its acceleration is 0.15 m/s2. What are (a) the magnitude and
(b) the direction of the force on the barge from the water? (Measure the direction clockwise from the direction of motion, and give your angle to the nearest degree.)
Fig. 5-47
Problem 48.
You seem to have posted all your homework questions on this topic. MHF does provide that sort of service. Please show some effort with these questions. What have you tried? Where do you get
September 18th 2009, 03:24 PM #2 | {"url":"http://mathhelpforum.com/math-topics/103001-pullies-tension.html","timestamp":"2014-04-23T19:33:05Z","content_type":null,"content_length":"34152","record_id":"<urn:uuid:0971879e-0be0-4683-b80d-9bce2f188c66>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Testing the order of Markov dependence in DNA sequences
Pardo Llorente, Leandro and Menéndez Calleja, María Luisa and Pardo Llorente, María del Carmen and Zografos, Konstantinos (2011) Testing the order of Markov dependence in DNA sequences. Methodology
and computing in applied probability, 13 (1). pp. 59-74. ISSN 1387-5841
Restricted to Repository staff only until 31 December 2020.
Official URL: http://www.springerlink.com/content/q2j600m165xg7r2n/fulltext.pdf
DNA or protein sequences are usually modeled as probabilistic phenomena. The simplest model is created on the assumption that the nucleotides at the various sites are independently distributed.
Usually the type of nucleotide at some site depends on the type at another site and therefore the DNA sequence is modeled as a Markov chain of random variables taking on the values A, G, C and T
corresponding to the four nucleotides. First order or higher order Markov models provide better fit to a DNA sequence. Based on this remark, the aim of this paper is to present and study a family of
test statistics for testing order Markov dependence in DNA sequences. This new family includes as a particular case the classical likelihood ratio test. A simulation study is presented in order to
find test statistics, in this family, with a better behaviour than the likelihood ratio test.
Item Type: Article
Uncontrolled DNA sequence; Markov dependence; Likelihood ratio test; Phi-divergence test statistics; Divergence; Chain
Subjects: Sciences > Mathematics > Applied statistics
ID Code: 17330
References: Avery PJ, Henderson DA (1999) Fitting Markov chain models to discrete state series such as DNA sequences. Appl Stat 48:53–61
Bejerano G, Friedman N, Tishhy N (2004) Efficient exact p-value computation for small sample, sparse and surprising categorical data. J Comput Biol 11:867–886
Bell GI, Sánchez-Pescador R, Laybourn PJ, Najarian RC (1983) Exon duplication and divergence in the human preproglucagon gene. Nature 304:368–371
Billingsley P (1961a) Statistical methods in Markov chains. Ann Math Stat 32:13–39
Billingsley P (1961b) Statistical inference for Markov processes. The University of Chicago Press, Chicago
Ewens WJ, Grant GR (2005) Statistical methods in bioinformatics (2nd edn). Springer, New York.
Hoel PG (1954) A test for Markov chains. Biometrika 14:430–433
Menéndez ML, Pardo JA, Pardo L (2001) Csiszar’s ϕ-divergences for testing the order in a Markov chain. Stat Pap 42:313–328
Menéndez ML, Pardo JA, Pardo L, Zografos K (2006) On tests of independence based on minimum φ-divergence estimator with constraints: an application to modeling DNA. Comput Stat
Data Anal 51(2):1100–1118
Patel NR (2003) An exact test for homogeneity of a Markov chain. www.cytel.com
Pardo L (2006) Statistical inference based on divergence measures. Chapman & Hall/CRC, New York
Pardo L,Morales D, Salicrú M, MenéndezML (1993) The ϕ-divergence statistic in bivariate multinomial populations including stratification. Metrika 40:223–235
Read TRC, Cressie NAC (1988) Goodness-of-fit statistics for discrete multivariate data. Springer, New York
Reinert G, Schbath S, Waterman MS (2000) Probabilistic and statistical properties of words: and overview. J Comput Biol 7:1–46
Zografos K (1993) Asymptotic properties of φ-divergence statistic and applications in contingency tables. Int J Math Stat Sci 2:5–21
Deposited On: 05 Dec 2012 09:20
Last Modified: 07 Feb 2014 09:45
Repository Staff Only: item control page | {"url":"http://eprints.ucm.es/17330/","timestamp":"2014-04-19T02:22:19Z","content_type":null,"content_length":"33396","record_id":"<urn:uuid:12938e3e-55ed-4e85-a34a-464b3706fa1a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rate of Change
May 1st 2008, 03:56 PM #1
Mar 2008
Rate of Change
Can someone assist me with this one:
A person obeserves an airplane directly overhead which moves at 500ft/sec at an altitue of 8000ft. The plane maintains constant altitude and horizontal velocity. What is the rate of change of the
distance between the obeserver and the airplane after 12 seconds?
Did you draw a triangle. We can use Pythagoras.
dx/dt=500, y=8000, dy/dt=0 because it remains constant.
After 12 seconds, the plane will have travelled 6000 feet.
By Pythagoras, $D=\sqrt{8000^{2}+6000^{2}}=10,000$
Solve for dD/dt.
May 1st 2008, 04:05 PM #2 | {"url":"http://mathhelpforum.com/calculus/36827-rate-change.html","timestamp":"2014-04-16T14:19:19Z","content_type":null,"content_length":"33097","record_id":"<urn:uuid:d2fc0082-25f4-47e5-9239-2ad64215bd07>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hard Derivative.
October 22nd 2009, 09:00 PM
Hard Derivative.
find y'
how would you go about doing this one?
im studying for my "mastery test" on monday and i have never seen anything raise to a number - x
could someone show me how to do this please?
thank you
October 22nd 2009, 09:02 PM
Chris L T521
Note that $\sqrt{10^{5-x}}=10^{\frac{1}{2}(5-x)}$. Then use the fact that $\frac{\,d}{\,dx}a^u=a^u\ln a\cdot\frac{\,du}{\,dx}$.
Can you take it from here?
October 22nd 2009, 09:07 PM
yeah i can. but i seem to get a different answer then the answer of my review sheet.
i get -10^(.5(5-x)) * ln(10)
the answer on the sheet is -10^(.5(5-x)) / 2
October 22nd 2009, 09:08 PM
Use logarithmic differentiation.
sqrt of 10^(5-x) is the same as 10^[(5-x)/2]
Take the natural log of both sides.
ln y = ln 10^[(5-x)/2]
ln y = [(5-x)/2] ln 10 because lnx^a = a lnx
then differentiate:
(1/y)(dy/dx) = (-1/2)ln10
solve for dy/dx:
dy/dx = (-1/2)(ln10)(10^[(5-x)/2])
edit: The answer sheet is wrong? :P
October 22nd 2009, 09:18 PM
haha ok thank you very much =) | {"url":"http://mathhelpforum.com/calculus/109816-hard-derivative-print.html","timestamp":"2014-04-18T08:20:42Z","content_type":null,"content_length":"6120","record_id":"<urn:uuid:10d68702-1384-49f9-b8ae-5be2656ebcfb>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US4243956 - Automatic equalizer for a synchronous digital transmission signal
The invention comes within the field of transmission. Equalization is the compensation of linear distortions due to a transmission channel. Such compensation is provided by means of corrector
circuits called equalizers which are inserted in the transmission channel and which fulfil the transfer functions such that the overall response obtained has a flat amplitude and a linear phase in
the frequency band occupied by the transmission signals. An equalizer is referred to as being automatic when it has a variable transfer function adjusted from the equalized signal which allows it to
be adapted to the characteristics of an imperfectly known transmission channel as is the case for example with a transmission channel in a switched telephone or telegraph network.
The present invention relates more particularly to automatic equalizers which have a transfer function depending only on one parameter whose value is determined from characteristics of the equalized
Automatic equalizers of this type are used in digitial transmission systems which use cables with repeaters to equalize variable lengths of cable. In particular, the equalizer described in published
French Pat. No. 2,128,152 can be mentioned, said equalizer including a variable equalization network adjusted by means of a DC voltage which depends on the peak value of the equalized signal.
The present invention provides an automatic equalizer for digital transmission of constant unit time interval, said equalizer including a variable equalization network equipped with a feed back
control loop, which includes a sign coincidence autocorrelator which correlates the polarities of two versions of the equalized signal one of which versions is delayed in relation to the other
version by an integer multiple of the unit time interval.
The automatic equalizer preferably further includes a predistortion filter disposed at the input of the sign coincidence autocorrelator.
Two embodiments of the invention are described by way of example with reference to the accompanying drawings in which:
FIGS. 1 and 2 are block diagrams of two automatic equalizers in accordance with the invention; and
FIGS. 3 and 4 are graphs which illustrate the operation of the equalizers illustrated in FIGS. 1 and 2.
FIGS. 1 and 2 each show an automatic equalizer 1, 1', having a signal input 2, 2' connected to the output of a synchronous digital transmission channel 3 represented by a dashed line and having a
signal output 4, 4' from which the equalized transmission signal is available. Each automatic equalizer 1, 1' includes a variable equalization network 10, 10' equipped with a feed back control loop.
The variable equalization networks 10, 10' have signal inputs and outputs which coincide with those 2, 2', 4, 4' of the automatic equalizers 1, 1' as well as inputs 5, 5' which adjust their transfer
They are of known type and are determined as a function of the type of channel in question. Their variable transfer function can be adjusted to that of the transmission channel actually used by
adjusting an adjustment parameter whose value is a function of the distortions of the equalized signal. A detailed example thereof is given in French Pat. No. 2,128,152. They will not be further
considered in the following part of the description, since they are not a part of the present invention. It will simply be stated that an increase in the value of the adjustment parameter causes an
increase in the band width of the equalized signal and vice-versa and that consequently, the value of the control signal must increase with the distortions which affect the equalized signal.
The control loops which each connect the output of a variable equalization network 10, 10' to its adjustment input include a pre-distortion filter 20, 20' followed by a sign coincidence
autocorrelator 30, 30' and by a correction circuit 19, 19' which ensures control stability. They differ essentially in the structures of their sign coincidence autocorrelators which, however, deliver
the same output signal.
The sign coincidence autocorrelator 30 of the automatic equalizer 1 illustrated in FIG. 1 includes:
an absolute limiter 12;
an adder 14 with two inputs, each of which is connected to the output of the absolute limiter 12, one directly and the other via a delay circuit 13;
the delay circuit 13;
two integrators 15 and 16 connected to the output of the adder 14, one directly and the other via a logic inverter circuit 17;
the logic inverter circuit 17; and
a differential amplifier 18 whose inputs are connected to the outputs of the integrators 15 and 16.
The signal s(t) applied to the input of the sign coincidence autocorrelator is received by the absolute limiter 12 which delivers in response a logic signal u[1] (t) whose level is, by definition, 1
if the input signal s(t) is positive and 0 in the contrary case. The delay circuit 13 receives the signal u[1] (t) and delays it by one period τ. The adder performs the "exclusive OR" logic function.
One of its inputs receives the signal u[1] (t) which comes from the absolute limiter 12 and its other input receives the same signal delayed by one period τ by the delay circuit 13. Its output
delivers a signal q(t) applied to the integrator 15 whose integration constant is t[1] and whose output signal Q(t) can be expressed by the equation: ##EQU1##
The signal q(t) is also complemented and applied to the integrator 16. The integrator 16 has the same integration constant t[1] as the integrator 15 and delivers an output Q(t) whose form is: ##EQU2#
The signals Q(t) and Q(t) are related to each other by equation:
The output of the differential amplifier 18 supplies a signal r(t) equal to:
r(t)=Q(t)-Q(t)=1-2Q(t) (1)
The sign coincidence autocorrelator 30' of the automatic equalizer illustrated in FIG. 2 includes:
an absolute limiter 21 disposed at the input;
a multiplier 22 with two inputs each connected to the output of the absolute limiter, one directly, the other via a delay circuit 23;
the delay circuit 23; and
an integrator disposed connected to the output of the multiplier 22.
A signal s(t) applied to the input of the sign coincidence autocorrelator is received by the absolute limiter 21 whose output delivers a signal u[2] (t) which is, by definition, a binary signal equal
to +1 if s(t) is positive and to -1 in the contrary case. The signal u[2] (t) is applied without delay to one input of the multiplier 22 and with a delay of τ to the other input. This generates at
the output of the multiplier 22 a signal p(t) which is related to the signal q(t) of the logic "exclusive OR" gate 14 of the preceding circuit by the equation:
The output of the integrator 25 whose integration time constant is t[1] delivers a signal P(t) related to the signal p(t) by the equation: ##EQU3##
The signals p(t) and Q(t) are related to each other by the same equation as p(t) and q(t):
It is deduced from equation (1) that the sign coincidence autocorrelators of the automatic equalizers illustrated in FIGS. 1 and 2 have the same output signal P(t).
The delay circuits 13 and 23 which process only binary signals can be formed by means of shift registers which have n stages and operate at a frequency of n/τ, n being an integer chosen so as to
obtain an acceptable compromise between the cost of the registers and the precision of the autocorrelators.
The integrators 15, 16 and 25 can be formed by means of low-pass filters with a time constant t[1].
The predistortion filters 20, 20' used in the automatic equalizers 1, 1' illustrated in FIGS. 1 and 2 must be such that the distortion which they cause can be at least partially corrected by the
variable equalization networks 10, 10'. Advantageously, they simulate a given length of the transmission channel used. In the case of a transmission channel which operates like a low-pass filter,
they can be constituted, as will be seen further on, by low-pass filters which have a cut-off frequency equal to 1/4T (where T is the unit time interval of the synchronous digital transmission in
The correction circuits 19, 19' which stabilize the control means can be formed by means of low-pass filters.
During experiments, it has been observed that the automatic equalizers described with reference to FIGS. 1 and 2 were particularly adaptable when the delay τ of the delay circuits 13 and 23 was
chosen to be equal to an integer multiple of the unit time interval of the synchronous digital transmission in question and that the integration period t[1] was chosen to be long with respect to the
unit time interval.
This property can be explained by the fact that the signal supplied by the control means for adjusting the variable equalization networks 10, 10' is a much more exact representation of the linear
distortion which affects the equalized signal than the signals used for the same purpose in automatic equalizers of the prior art.
To describe the operation of the control means of the automatic equalizers described with reference to FIGS. 1 and 2, it will be shown that due to their sign coincidence autocorrelators, their output
signals are representative of the differences between the period of a time interval which separates the consecutive zero passes of the output signal s(t) of the predistortion filters 20, 20' from the
period T, then, by means of a simple example, that these differences are the first to be affected by the linear distortions undergone by a synchronous digital transmission signal.
Take a signal s(t) which includes zero passes separated by a time interval T'. An example of a signal of this type applied to the input of the absolute limiter 21 of the block diagram of FIG. 2 can
be, if the origin of the periods chosen is a zero pass of the signal: ##EQU4##
The signal μ[2] (t) at the output of the absolute limiter 21 is expressed as: ##EQU5##
Taking the delay τ caused by the circuit 23 as equal to T, the two signals applied to the multiplier 22 are: ##EQU6##
The output signal p(t) of the multiplier 22 is therefore ##EQU7##
On examining the preceding expression, it will be seen that the parenthesis remains negative when T is equal to T' except for particular values of t such as ##EQU8## where it is zero.
Besides these particular values of t, we have:
therefore the average value p'(t) of the signal p(t) over any period is equal to -1.
Similar reasoning to that used for T/T'=1 shows that where T/T'=0 and T/T'=2, the averge value p'(t) of the signal p(t) is equal to +1.
Where T/T is not an integer, p(t) is a periodic function of T', and the average value p'(t) of p(t) can therefore be calculated over a period which is an integer multiple of T' and in particular over
a period T'. For values of T/T' which are not integer values and lie between the intervals (0,1) and (1,2), the parenthesis of the expression of p(t) is positive for a part of the time and p'(t) is
greater than -1. It can be shown that p'(t) varies linearly from -1 to +1 when the ratio T/T' varies from 1 to 0 and from 1 to 2.
The preceding calculation still applies in the case where the time intervals which separate the consecutive zeros of the input signal s(t) are all equal to the value T' over the integration time t
[1], which is supposed to be large with respect to the unit interval T. Therefore, in this new case, the signal P(t) has the same variations in relation to T/T' as those found for the signal p'(t) in
the preceding case.
FIG. 3 shows either the variation of the signal p'(t) as a function of the ratio T/T' in the case where T' is considered as an interval of time which separates two consecutive zeros of the input
signal s(t), or the variation of the signal P(t) as a function of the ratio T/T' in the case where T' is considered as the value of each time interval which separates the consecutive zeros of the
input signal s(t) over a time period t[1], said time intervals being supposed identical.
It is deduced from FIG. 3 that the signal p'(t) is at its minimum level only when T is equal to T' and that its value is independent from the sign of the difference between T and T'. Since the
integration period is long with respect to the unit time interval T, these properties are also those of the signal P(t), which is therefore at its minimum value only when the time intervals T' which
separate the consecutive zeros of the input signal s(t) are each directly equal to T. The difference between the signal P(t) and its minimum value is representative of the average value of the
differences, taken in absolute value, between the time intervals T' and the time intervals T or, more simply, it is representative of the regularity of the zero passes of the input signal s(t).
The periods of the time intervals between the consecutive zeros of a synchronous digital transmission signal are the first to be affected by the linear distortions. This can be shown by calculation
in the simplified case where the transmission channel and the equalization network are likened to an ideal low-pass filter with a rectangular spectrum whose cut-off frequency is 1/2T" and where it is
supposed that the emission signal f(t) is an isolated pulse with a rectangular spectrum whose width is 1/2T.
The pulse response h(t) of the transmission channel and of the equalization network has the form: ##EQU9##
The emission signal f(t) has the form: ##EQU10##
The signal g(t) obtained in response at the output of the equalization network is equal to the convolution of the pulse response h(t) by the signal f(t). ##EQU11## whence ##EQU12##
The preceding expression shows that the received signal g(t) is identical to the emitted signal if T" is less than or equal to T. In that case, emitted signal has not undergone any distortion and on
reception, still has zero passes separated by the unit time interval T. In contrast, if T" is greater than T, the emitted signal undergoes distortion since it loses a part of the higher frequencies
of its spectrum. The spacing of its zero passes is modified and becomes T".
Still considering the preceding case and omitting the predistortion filters 20, 20' a signal P(t) would be provided at the output of the sign coincidence autocorrelators 30, 30', the signal p(t)
remaining at its minimum value for as long as the cut-off frequency 1/2T" remains greater than 1/2T and tending linearly towards a maximum value +1 which is reached when there are no more zero passes
in the integration period. FIG. 4 is a graph which shows the variation of the signal P(t) as a function of the cut off frequency 1/2T" evaluated with respect to the time interval T.
The preceding calculation therefore shows, for an emission signal formed by an isolated pulse with a rectangular spectrum and an equalized transmission channel which can be likened to an ideal
low-pass filter, that the linear distortion due to the transmission channel affects the time intervals between the successive zeros of the received signal. Experiment and simulation on a computer
confirm that the result remains the same when the transmission channel is a real filter and the emission signal is a synchronous digital signal with a unit time interval T, constituted by a random
succession of elementary pulses which can have any spectrum. They also show that the linear distortions have a cumulative effect on the differences, taken in absolute value, between the unit time
interval T and the time intervals which separate the consecutive zeros of the received signal. The same applies when the effect of the transmission channel is not an amplitude distortion but a
distortion of the group propagation time which affects a part of the frequency spectrum of the emitted signal.
The predistortion filter 20, 20' overcomes the difficulty due to control on the basis of the extreme values of the voltage P(t) and allows a control signal to be obtained which changes sign when the
automatic equalizer 1, 1" moves away from its optimum adjustment. When the automatic equalizer compensates the transmission channel exactly; its function is to cause modifications in the time
intervals which separate the successive zeros of the signal applied to the sign coincidence autocorrelator which lead to a zero value of the signal P(t). As shown in FIG. 4, it can be effected by
means of a low-pass filter which has a cut-out frequency equal to 1/4T. For the signal P(t) to be able to vary on either side of its zero value, the variable equalization network 10, 10' must be able
to equalize at least partially the predistortion filter 20 and 20', which must not cut out too suddenly.
Without going beyond the scope of the invention, some dispositions can be modified or some means can be replaced by equivalent means. In particular, delay circuits 13, 23 whose delay is τ which is an
integer multiple of the unit time interval of order greater than 1 can be used in sign coincidence autocorrelators. | {"url":"http://www.google.com/patents/US4243956?dq=7,752,326","timestamp":"2014-04-24T06:16:34Z","content_type":null,"content_length":"66725","record_id":"<urn:uuid:c3419902-0430-47e4-bc1c-8898c57052f7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability: Overview
When an official tosses a coin in the air at the beginning of a football game and one of the team captains calls “heads” or “tails,” what is the probability of the flipped coin coming up heads?
Probability can be defined as “the likelihood that an event will occur.” An event is a specific outcome. In this case, the event is the coin landing on heads. The reason a coin is tossed before
football games, for example, is because it allows a fair result, with two equally likely outcomes, heads or tails. The team captain will either win or lose the toss. The probability that the coin
will come up heads is 1 out of 2—one outcome, heads, out of two possible outcomes, heads or tails. This probability can be expressed as a ratio, P(heads) =
To be able to determine the probability of an event, it is necessary to know all the possible outcomes for the event.
One way to identify all the possible outcomes of an event is to list all possible choices in an organized way. For example, if there are 4 flavors of ice cream—chocolate, vanilla, strawberry, and
mint—and 2 types of cones, sugar and waffle, how many different choices of ice-cream cones are possible?
List of Ice-Cream Cone Choices
Sugar cone, chocolate ice cream
Sugar cone, vanilla ice cream
Sugar cone, strawberry ice cream
Sugar cone, mint ice cream
Waffle cone, chocolate ice cream
Waffle cone, vanilla ice cream
Waffle cone, strawberry ice cream
Waffle cone, mint ice cream
When making a list to find all possible choices, students need to be careful not to duplicate choices.
A tree diagram is another way to show all possible choices.
The tree diagram shows us more clearly the total number of choices. Two cones and 4 flavors result in 8 choices. Multiplication is another way to find the total number of choices: 2 x 4 = 8. Two
choices in the first group (cones) and 4 choices in the second group (flavors) give 8 choices of cones and flavors. Multiplying the choices in each group will always give us the total number of
possible choices.
Continuing with the ice-cream example, suppose you selected an ice-cream cone from a box without looking. What is the probability that you would select a waffle cone with chocolate ice cream?
Students should recognize that the result of the tree diagram, organized list, or multiplication gives them the total number of possible choices (8). The probability that you will choose a waffle
cone with chocolate ice cream is 1 out of 8, or
When you describe an event, you can be specific about the likelihood of its occurring. Spin the spinner pictured below. Note that it has eight sections of equal size.
The probability of spinning and landing on a number greater than 13 is 0, because it is impossible to land on a number that doesn't appear on this spinner. That event has a probability of 0. A
probability of 1 means the event is certain to happen, such as spinning and landing on a number less than 9. The closer a probability is to 1, the more likely an event is going to occur. So the
probability that an event will happen ranges from 0 to 1. On this spinner, landing on a number between 1 and 8 is more likely to happen than landing on a 3.
You can show the probability of an event with a number line.
Using the above spinner you can locate the probability of an event on the number line. The probability of spinning and landing on 5 is 1 out of 8, or
When we discuss probability we are describing the likelihood that an event will occur. We can always express this likelihood as a fraction, since we are talking about a ratio of the number of
favorable outcomes to the number of possible outcomes.
Using a deck of 48 number cards containing 4 each of cards numbered 1-12, what is the probability of picking a 9 at random? Since all 48 cards are of equal size, there are 48 possible outcomes of
choosing 1 card. There are four 9s in the deck. The probability can be written like this.
The probability of choosing a 9 is written as the fraction
Students will already be familiar with rolling a number cube, and you can use this model to reinforce probability concepts. When Carl rolls a number cube numbered 1, 2, 3, 4, 5, and 6, what is the
probability he will roll a 5? Rolling the cube has 6 equally likely outcomes. Having the 5 land faceup is one of 6 possible outcomes. The probability of rolling a 5 is 1 out of 6, or
Probability is often used to make predictions about possible outcomes based on data. Students need to be able to apply what they learn about probability to real-life situations and problems.
Example: The science club was sponsoring an after-school dance and wanted to sell snow cones to raise money. They could only afford to order two flavors of syrup and wanted to predict which two
flavors would sell the best. They conducted a lunchroom survey of 50 students and compiled the following data.
Flavor Students' Choices
Cherry 18
Lime 10
Grape 3
Raspberry 4
Orange 15
If the science club orders cherry and orange syrup for the dance, they predict that the probability of having a favorite flavor will be 33 out of 50, or | {"url":"http://www.eduplace.com/math/mw/background/5/06b/te_5_06b_overview.html","timestamp":"2014-04-20T03:34:36Z","content_type":null,"content_length":"11756","record_id":"<urn:uuid:2dc2c37b-b94e-40e0-875e-22dd5b571e90>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alpine, NJ ACT Tutor
Find an Alpine, NJ ACT Tutor
...I have successfully tutored a number of Praxis students using techniques that I have developed to help SAT students raise their scores by several hundred points! I work with students to
develop a custom study plan that attacks their weaknesses and enhances their strengths. Students that hone these techniques over considerable practice have had great success!
34 Subjects: including ACT Math, calculus, geometry, writing
...No two students are alike, and I've failed with some kids before - in the classes we were teaching in Brooklyn, hard as I tried, some kids would lose focus and made little progress. Overall, I
prefer a non-stressful approach to establish a baseline from which to go on with each individual student. Preparation is also key and having the right material to work on is very important.
9 Subjects: including ACT Math, algebra 1, algebra 2, precalculus
I offer tutoring for standardized tests and academic coursework. I am a former Princeton Review teacher (both group lessons and tutoring) and have tutored privately as well. I received my BA from
the University of Pennsylvania and MA from Georgetown University.
20 Subjects: including ACT Math, English, algebra 2, grammar
Hello my name is Andres. I was a language teacher in my native country teaching English as a second language for native students and Spanish as a second language for foreign students. I am
currently finishing my second major in engineering science.
9 Subjects: including ACT Math, Spanish, calculus, geometry
...I crafted a pathway to academic success for myself and I want to share my know-how with the next generation of enthusiastic scholars. As a mentor, I can have tough conversations with tutees
when necessary, while also providing the inspiration to reach new heights of academic success. In addition to being a tutor, I am an attorney and a certified teacher.
52 Subjects: including ACT Math, reading, English, writing
Related Alpine, NJ Tutors
Alpine, NJ Accounting Tutors
Alpine, NJ ACT Tutors
Alpine, NJ Algebra Tutors
Alpine, NJ Algebra 2 Tutors
Alpine, NJ Calculus Tutors
Alpine, NJ Geometry Tutors
Alpine, NJ Math Tutors
Alpine, NJ Prealgebra Tutors
Alpine, NJ Precalculus Tutors
Alpine, NJ SAT Tutors
Alpine, NJ SAT Math Tutors
Alpine, NJ Science Tutors
Alpine, NJ Statistics Tutors
Alpine, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Alpine_NJ_ACT_tutors.php","timestamp":"2014-04-21T02:32:42Z","content_type":null,"content_length":"23772","record_id":"<urn:uuid:1be6652c-90c8-4f9f-9b6a-36bbdb8007dc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ricci flow with surgery in dimension 2
up vote 8 down vote favorite
Is it possible to define the Ricci flow with surgery in dimension 2 and use it to classify the surfaces?
I know this is overkill, there are simpler ways to classify surfaces, but I would like to understand the Ricci flow with surgery in dimension 3 and perhaps that this is simpler in dimension 2.
dg.differential-geometry ricci-flow reference-request
7 I'm definitely not an expert, but as far as I know you never need surgery in dimension 2. If you start with a compact (probably oriented, as well) 2-manifold, the Ricci flow always has long-time
existence and convergence to a constant curvature metric. This does indeed give (I believe) the classification of compact oriented surfaces. – Spiro Karigiannis Dec 2 '10 at 2:36
4 You can find this in the book of Ben Chow and Dan Knopf, by the way. – Spiro Karigiannis Dec 2 '10 at 2:36
3 To add to what Spiro said, the long-time existence result is for the volume-renormalized Ricci flow. Otherwise the sphere is a shrinking soliton and there is no convergence of the flow. See also
arxiv.org/abs/math/0505163 – Willie Wong Dec 2 '10 at 3:28
While one doesn't need surgery to prove the uniformization theorem, there is some issue classifying the solitons in dimension 2. My understanding (also not an expert) is that for a long time the
proofs that the only soliton metric on $S^2$ was the round sphere used the uniformization theorem. Recently, Chen, Lu and Tian proved this, though I don't know about other topologies. – Rbega Dec
2 '10 at 3:32
@RBega: that's the arxiv paper I linked to above :-p. – Willie Wong Dec 2 '10 at 3:43
add comment
3 Answers
active oldest votes
(1) The normalized Ricci flow (NRF) on compact surfaces always exists for all time and does not have singularities. Moreover, NRF fixes the conformal class of the metric.
(2) Let $r$ be the integral of the curvature, which is constant under any flow. Hamilton and Osgood-Phillips-Sarnak (independently, I think) showed that if $r\leq 0$, then the NRF
converges to a metric of constant curvature. Hamilton also proved that if the curvature is positive everywhere, then NRF converges to a metric of constant curvature. This part of the
argument unfortunately assumes the uniformization theorem.
(3) Chow showed that if $r>0$, then eventually the curvature will become positive everywhere and then Hamilton's argument applies.
up vote 15 down (4) Much later, Chen, Lu, and Tian wrote a 2 page paper explaining how to remove the uniformization theorem from Hamilton's argument. http://arxiv.org/abs/math/0505163
vote accepted
(5) Putting it all together, Ricci flow gives an new proof of uniformization, i.e. every metric on a compact surface is conformal to a metric of constant curvature. Since the only
constant curvature metrics are quotients of the standard sphere, Euclidean space, and hyperbolic space, one can then deduce the classification of surfaces.
As mentioned by Spiro, this whole story, except for (4), is told in the book by Chow and Knopf.
Note: A lot of this information is already in comments by others, but not in answer form. I'm writing this mainly because the accepted answer does not seem complete to me.
Thank you, this is rather this kind of answer I wanted. – Guillaume Brunerie Dec 9 '10 at 19:06
add comment
The Ricci flow in dimension two is (in essence) the gradient flow of the "Polyakov action" (renormalized $\log \det \Delta$). B. Osgood, R. Phillips, and P. Sarnak proved in the late
eighties (using Polyakov's trace formula) that $\log \det \Delta$ is convex on conformal classes, the critical points are metrics of constant curvature, the function is proper (which also
up vote shows that isospectral sets of metrics are compact -- the most celebrated corollary of their result at the time), and hence the uniformization theorem follows. As pointed out in the
13 down previous comments, no surgery is necessary in dimension two.
Where is this documented? Is it in one of Ben Chow's books? I confess I didn't know this. – Deane Yang Dec 2 '10 at 4:40
I assume that the only part you don't know is that Ricci flow is the gradient flow of log det. For this, you can look at a (one page) paper by Kokotov and Korotkin Letters in
Mathematical Physics (2005) 71:241–242 though the observation was known to Sarnak (and Hamilton, I would imagine) long before. I haven't read Chow's books, but I don't believe it is
there (do correct me if I am wrong...) – Igor Rivin Dec 2 '10 at 6:42
@Deane: you may also be interested in the recent work of Albin, Aldana, and Rochon extending the analysis to the non-compact case. That the Laplacian has continuous spectrum and the
2 volume can be infinite gives rise to some interesting technical restrictions. See math.jussieu.fr/~albin (I'd link to arXiv, but I think arXiv is broken at the moment: I cannot get any
abstract to open up.) At the very least, their paper also contains references to the facts Igor Rivin mentioned. – Willie Wong Dec 2 '10 at 12:00
add comment
Regarding (3) in Dan Lee's answer, you don't need my work in the Ricci flow approach to the differential geometric version of the uniformization theorem. The reason is as follows. Take any
metric $h$ on a closed orientable surface $M$ with $\chi (M) > 0$. Let $r$ denote the average scalar curvature of $h$, which is positive since $\chi$ is. Solve the equation $\Delta_h u = R_h
- r$, where $R_h$ denotes the scalar curvature of $h$. This is possible since the integral of $R_h - r$ with respect to the measure induced by $h$ is zero. Define the pointwise conformally
equivalent metric $g=e^uh$. Then we have $R_g=e^{-u}(-\Delta_h u + R_h) = e^{-u}r>0$. We can now use $g$ as an initial data for the normalized Ricci flow and apply Hamilton's theorem to
obtain exponential convergence in any $C^k$-norm to a constant curvature metric in the same conformal class as $h$.
By now there are many approaches to the Ricci flow on closed surfaces, such as the Aleksandrov reflection method employed by Bartz--Struwe--Ye (inspired by Schoen's work on Yamabe (constant
scalar curvature) metrics; in PDE see Serrin and Gidas--Ni--Nirenberg, etc.), Hamilton's isoperimetric monotonicity, application of Perelman's entropy formula, Andrews--Bryan's isoperimetric
profile monotonicity, etc.
Regarding Igor Rivin's answer, it's been a long time since I've looked at this, but this is what I remember (please correct me if I am wrong). Osgood--Phillips--Sarnak looked at the Polykov
action from the spectral point of view and also from the variational point of view. Aside: Ray--Singer's work on analytic torsion predates their work (I should also mention McKean, etal.).
up vote However they did not explicitly make the connection to Ricci flow, although they may have known this. They also did not reprove the differential geometric version of the uniformization in the
11 down positive Euler characteristic case since their proof of sequential convergence assumed conformality to the standard $S^2$. In my 2-sphere paper, I actually used the Polykov energy and the
vote fact that it is bounded from below (I believe due to Onofri), which I learned from Osgood--Phillips--Sarnak (I am indebted to Richard Melrose for asking that I read this paper when I first
arrived at MIT). I used the Polykov energy to control Hamilton' entropy in the variable signed curvature case. However, later in my entropy on 2-orbifolds paper, I found a way to avoid
Polykov entropy to control the modified Hamilton's entropy. Yet another proof of the entropy bound, adapting the original Hamilton's contradiction argument, was in my paper with Lang-Fang Wu
on 2-orbifolds with variable signed curvature.
Regarding some sort of convexity of the functional, a fancy way to interpret the energy functional is in terms of Bott--Chern secondary characteristic classes and this was originally used in
Donaldson's work on Hermitian-Einstein metrics/Hermitian Yang-Mills connections on (semi-)stable vector bundles over algebraic surfaces (and later algebraic manifolds); Uhlenbeck--Yau had a
different approach. In the case of closed Riemannian surfaces, the formula is: $$\ln \det \Delta_g - \ln \det \Delta_h = -\frac{1}{48\pi} E_h(g),$$ where the relative energy of two pointwise
conformal metrics is defined by: $$E_h(g) = \int_M \ln (g/h)(R_g d\mu_g + R_h d\mu_h).$$ The Bott--Chern class is in effect the term $\ln (g/h)$ since $\partial \bar{\partial}$ of it is
essentially the difference in the first Chern classes (Gauss--Bonnet integrands in this case) of $g$ and $h$.
1 Welcome to MO ! – BS. Oct 13 '13 at 19:36
add comment
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry ricci-flow reference-request or ask your own question. | {"url":"https://mathoverflow.net/questions/47981/ricci-flow-with-surgery-in-dimension-2/47999","timestamp":"2014-04-24T12:20:36Z","content_type":null,"content_length":"73617","record_id":"<urn:uuid:82cb3c7d-078d-4b7f-b603-7223df18f734>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stan Devitt
Research and Innovation for Healthcare
Global Architecture and Design Group
Agfa Healthcare
455 Phillip Street, Waterloo, Ontario, Canada, N2L 3X2
Phone: 1.519.746.6210 x 3124
Stan is a Principal Investigator with the Adaptable Clinical Workflow (ACW) working group at Agfa Healthcare.
Stan's current research is focussed on automating semantic inference and markup in order to support automated reasoning and proof technologies. His responsibilities include the design and
implementation of prototypes for decision support and the direct interaction with our external collabators and research partners. The work makes extensive use of the emerging semantic web
General Background
• An Invited expert - W3C MathML Working Group, and one of the main authors of the W3C MathML Recommendation.
• An Invited member - of the Ontario Research Centre for Computer Algebra.
• One of the original developers of the Maple Computer Algebra System responsible for numerous mathematical packages addressing a broad range of learning and advanced research topics.
• Extensive teaching and publishing experience as a tenured faculty member, and currently lecturing for the School of Computer Science at the Univesity of Waterloo.
• XML Consulting for academic publishers developing single source publising systems for teaching and testing mathematics.
• Ph.D. Combinatorics and Optimization, 1981, University of Waterloo, specialising Combinatorial Applications of Formal Language Theory, the Analysis of Algorithms, and Computational Complexity.
• MSc., Mathematics, the University of Calgary, 1975, specialising in Computational Number Theory.
• Honours BSc. in Pure Mathematics, University of Calgary, 1974 | {"url":"http://www.agfa.com/w3c/sdevitt/","timestamp":"2014-04-18T10:43:33Z","content_type":null,"content_length":"2864","record_id":"<urn:uuid:4d8aa67b-0e44-4d1c-8bfe-1ac38ccc6d2c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Table of Contents
The total benefit (revenue plus consumer savings) is 14.1 mils per kW-hr for electricity which is sold to Americans. For an SSPS that has a capacity of 10 GW and an assumed utilization of 95 percent,
this produces annual benefits of $1.173 billion. Power sold to foreigners yields 13.6 mils of benefits per kW-hr which amounts to $1.132 billion annually for each SSPS. All the power produced in the
first 2 years after the initial terrestrial SSPS is built is sold to the U.S. Afterwards one-third of the power produced is sold abroad. An SSPS is assumed to begin to produce power the year after it
is completed. As an example, table 6-12 shows that in year 20, 5 terrestrial SSPS's are producing power. The benefits obtained are therefore $5.783 billion.
Subject to certain qualifications discussed below a project should be undertaken if and only if the value of its benefits exceeds the value of its costs (see ref.9). It is important to include all
benefits and all costs, even those which are not normally expressed in monetary terms, such as the value of any damage done to the environment. In our society there is usually a positive interest
rate. This is a reflection of the fact that society values the consumption of a commodity today at a higher value than the consumption of the same commodity in the future. This fact must be taken
into consideration when the value of benefits and amount of costs are determined. To do this the benefits and costs which occur in the future must be discounted. For example, if a project pays as
benefits or has costs amounting to $B in every one of n + 1 consecutive years, then the value of the benefits or what is technically called the present value of the benefits is equal to:
where the benefits and costs are measured in real dollars (that is, dollars of constant purchasing power) and where r is the real discount rate.
Under certain idealized conditions the real discount rate is the same as the real rate of interest. The latter is essentially the rate of interest observed in the marketplace less the rate of
inflation. Empirically the idealized conditions needed to make r equivalent to the real rate interest do not hold, resulting in a considerable divergence between these two parameters. The size of
this divergence and hence the appropriate value of r is the subject of an extensive, unresolved debate among economists. The value of r which is currently used by the Office of Management and Budget
is 10 percent. This is considered by most economists to be reasonable if not conservative.
Having introduced several concepts, it is now possible to be precise about what is meant by the benefit-to-cost ratio. It is the present value of the stream of benefits divided by the present value
of the stream of costs. When this ratio is greater than one, then the project, subject to certain qualifications, is worthwhile. It is worth noting that if a benefit-to-cost ratio is, for example,
1.2, then if the costs in every year of the program were increased by as much as a factor of 1.2, the project would break even in the sense that the benefits would equal the costs where the costs
include a real rate of interest equal to the real discount rate.
There is no reason why a project cannot be of infinite length having an infinite stream of benefits and costs. Normally, the present value of these streams and hence the benefit-to-cost ratio is
finite. In the space colonization program a 70-year period is selected, not because a finite period is needed but for other reasons. In particular, if one goes too far into the future, various
assumptions begin to break down. For instance, a 5 percent growth rate in electrical power cannot continue forever, especially since much of this growth rate is due to a substitution of electricity
for other forms of energy. An additional consideration is that when employing a real discount rate of 10 percent, whatever happens after 70 years has little impact on the benefit cost ratio.
The term payback has been applied to a number of differing concepts. The most common form of usage is adopted for this study; namely, that payback occurs when the principal of the original investment
has been repaid. | {"url":"http://nss.org/settlement/nasa/75SummerStudy/6appendG.html","timestamp":"2014-04-18T06:07:47Z","content_type":null,"content_length":"4984","record_id":"<urn:uuid:cec845d7-c6a4-45c5-b982-18c487af4c24>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
ODE Problem, am I stupid?
I have been reading Ordinary Differential Equations (Pollard) from Dover.
The chapter I am in, is called Problems Leading to Differential Equations of The First Order - Geometric Problems.
Problem :
Find the family of curves with the property that the area of the region bounded by the x axis , the tangent line drawn at a point P(x,y) of a curve of the family and the projection of the tangent
line on the x axis has a constante value A.
image hosting png
In the solution, they say the equation of the tangent line is y / (x - a) = y'
They then solve, for a:
a = x - (y/y')
Afterwards, they obtain the distance QR = y/y'
Therefore they have the area of the triangle. They integrate, bla blabla.
Now, when I first looked this, it seemed pretty simple and straighforward. I understood every step. It was an elementary problem.
But, today I gave it a second look, and now I just don't agree with the solution.
Well, my question is y = mx + b;
but m = y'.
so, y = y' x + b.
I don't agree with this since y defines the equation of the tangent line BUT y' defines the derivative of THE CURVE. therefore in my viewing, when they, in the solution, reach to QR = y/y', and then
integrate they are mixing a fuction and a derivative of a diferent fuction.
So, where is my reasoning wrong?
Perhaps I should sleep more. ;D
Thanks for all the explanations! | {"url":"http://www.physicsforums.com/showthread.php?p=3792617","timestamp":"2014-04-19T12:31:39Z","content_type":null,"content_length":"30472","record_id":"<urn:uuid:418bc15d-3613-448c-a7ab-7f02baa7617f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numerical solution of partial differential equations
Quick description
Partial differential equations are considerably more complex to solve than ordinary differential equations, and there are a substantial number of special techniques developed to handle them. All
involve some sort of "grid" that subdivides space into small packets. The different ways in which the packets are used gives the different methods.
Multivariate calculus, basic numerical analysis.
Example 1: Galerkin
The most common (especially from a mathematical point of view) is the Galerkin method. The basic idea is that we set up the weak formulation of the partial differential equation. For a (linear)
operator equation , where is an operator ( is the dual space to ), we take a basis of a finite dimensional subspace , with .
Then we choose where
for . If is an elliptic partial differential operator, then the linear system generated by the Galerkin method is positive definite, and the linear system can be solved.
This method is also related to the Rayleigh-Ritz method, and the more general Petrov-Galerkin method.
The Galerkin method is commonly known as the finite element method (FEM), at least under common choices of the basis functions.
Example 2: Finite difference
Login or register to post comments | {"url":"http://www.tricki.org/article/Numerical_solution_of_partial_differential_equations","timestamp":"2014-04-20T13:21:58Z","content_type":null,"content_length":"21091","record_id":"<urn:uuid:94a8d2c8-59f7-423c-ad2d-3ea2a94ee096>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for:
Heritage of European Mathematics
2013; 289 pp; hardcover
Volume: 7
ISBN-10: 3-03719-113-9
ISBN-13: 978-3-03719-113-2
List Price: US$98
Member Price: US$78.40
Order Code: EMSHEM/7
The 20th century was a time of great upheaval and great progress in mathematics. In order to get the overall picture of trends, developments, and results, it is illuminating to examine their
manifestations locally, in the personal lives and work of mathematicians who were active during this time. The university archives of Göttingen harbor a wealth of papers, letters, and manuscripts
from several generations of mathematicians--documents which tell the story of the historic developments from a local point of view.
This book offers a number of essays based on documents from Göttingen and elsewhere--essays which have not yet been included in the author's collected works. These essays, independent from each
other, are meant as contributions to the imposing mosaic of the history of number theory. They are written for mathematicians, but there are no special background requirements.
The essays discuss the works of Abraham Adrian Albert, Cahit Arf, Emil Artin, Richard Brauer, Otto Grün, Helmut Hasse, Klaus Hoechsmann, Robert Langlands, Heinrich-Wolfgang Leopoldt, Emmy Noether,
Abraham Robinson, Ernst Steinitz, Hermann Weyl, and others.
A publication of the European Mathematical Society (EMS). Distributed within the Americas by the American Mathematical Society.
Graduate students and research mathematicians interested in number theory.
• The Brauer-Hasse-Noether theorem
• The remarkable career of Otto Grün
• At Emmy Noether's funeral
• Emmy Noether and Hermann Weyl
• Emmy Noether: The testimonials
• Abraham Robinson and his infinitesimals
• Cahit Arf and his invariant
• Hasse-Arf-Langlands
• Ernst Steinitz and abstract field theory
• Heinrich-Wolfgang Leopoldt
• On Hoechsmann's theorem
• Acknowledgements
• Bibliography
• Name index
• Subject index | {"url":"http://cust-serv@ams.org/bookstore?fn=20&arg1=whatsnew&ikey=EMSHEM-7","timestamp":"2014-04-17T14:29:21Z","content_type":null,"content_length":"15187","record_id":"<urn:uuid:1185cd50-6475-472a-b21d-c9c047be910d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00262-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: T-Junctions re-visited
This may be a result of an oversight I made. For small triangles the algorithm might find a false hit. Try replacing
# O (N_Edges N_Points)
for I,J in Edges
for K in Points
if (abs(DIJ-(DIK+DJK))<TOL)
# K is on the line between I & J
by (here RTOL should be something like 1e-4, adjust to taste)
# O (N_Edges N_Points)
for I,J in Edges
for K in Points
if ((abs(DIJ/(DIK+DJK)-1.0)<RTOL) && (abs(DIJ-(DIK+DJK))<TOL))# CHANGED TO INCLUDE A RELATIVE CHECK TOO
# K is on the line between I & J
Edited 4 time(s). Last edit at 2013-03-14 03:44PM by Tim Gould. | {"url":"http://forums.ldraw.org/read.php?19,8595,8639","timestamp":"2014-04-20T08:27:48Z","content_type":null,"content_length":"76192","record_id":"<urn:uuid:6fca63f3-18c9-4c85-ab72-a268e01a47ad>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from Kuba Ober
Performance questions, -inline, ...
Date: -- (:)
From: Kuba Ober <ober.14@o...>
Subject: Performance questions, -inline, ...
I haven't looked at assembly output yet, but I've run into some unexpected
behavior in my benchmarks.
This was compiled by ocamlopt -inline 100 -unsafe, the results and code are
below (MIPS is obtained by dividing 50 million iterations by (Unix.times
()) . Unix.tms_utime it took to run). I haven't included the timing etc. code
(it's part of a larger benchmark).
What I wonder is why vector-to-vector add is so much faster than (constant)
scalar to vector add. Vectors are preinitialized each time with a 1.0000,
1.0001, ... sequence.
Also, the very bad performance from generic vector-to-vector *with* inlining
is another puzzler, whereas generic add of scalar-to-scalar performs
similarly to straight-coded one.
Cheers, Kuba
* add1: add scalar to scalar 120 MIPS
* add3: add scalar to vector 250 MIPS
* add5: add vector to vector 320 MIPS
* add2: generic add scalar to scalar 100 MIPS
* add4: generic add vector to vector 38 MIPS
let start = 1.3
(* generic scalar operation *)
let op1 op const nloop =
let accum = ref start in
for i = 1 to nloop do
accum := op !accum const
(* generic vector operation *)
let op2 op const a b (nloop : int) =
let len = Array.length a in
for j = 0 to len-1 do
for i = 0 to len-1 do
b.(i) <- op a.(i) b.(i)
(** addition **)
let add1 nloop =
let accum = ref start in
for i = 1 to nloop do
accum := !accum +. addconst
let add2 = op1 ( +. ) addconst
let add3 a b nloop =
let len = Array.length a in
for j = 0 to len-1 do
for i = 0 to len-1 do
b.(i) <- a.(i) +. addconst
let add4 = op2 ( +. ) addconst
let add5 a b nloop =
let len = Array.length a in
for j = 0 to len-1 do
for i = 0 to len-1 do
b.(i) <- a.(i) +. b.(i) | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2008/01/9d176aa7c654a973f89a05a242624d32.en.html","timestamp":"2014-04-17T06:20:10Z","content_type":null,"content_length":"10725","record_id":"<urn:uuid:6fde8ba9-38f1-479f-8ea4-6de3498c42f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tempe Algebra 2 Tutor
Find a Tempe Algebra 2 Tutor
...My specific area of interest is Algebraic Number theory. Along with 2 year of experience teaching Algebra to students. With a bachelor's degree in mathematics, I have the qualifications in the
state of Arizona to teach mathematics, if I were to go through the certification process.
15 Subjects: including algebra 2, calculus, geometry, algebra 1
...Each student requires a unique approach and I carefully cultivate methods for each pupil to ensure their success. I have worked with a variety of students ranging from those with learning
disabilities all the way to brilliant students seeking a challenge. Simply put, I love to teach and I thoroughly enjoy the one-on-one interaction tutoring provides.
10 Subjects: including algebra 2, chemistry, calculus, geometry
...D. in Applied Mathematics for the Life and Social Sciences from Arizona State University. Languages: I speak Spanish and English. Ethnicity: I was born and raised in Puerto Rico.I used MATLAB
on a daily basis and I have been using it since five years, so I am very familiar with its programming.
11 Subjects: including algebra 2, calculus, algebra 1, SAT math
...If we find that I cannot assist you effectively in whatever topic you are trying to learn, I will not charge you for that meeting. I provide flexible meeting times, and I offer one free
consultation meeting. Please let me know how I'm doing!
10 Subjects: including algebra 2, chemistry, statistics, calculus
Hello, my name is Yujie, and I am a sophomore at ASU majoring in physics. I tutored at the math tutoring center during the Fall semester and also tutored physics privately on the side. I took
phy150/151/241, classical mechanics (phy310), quantum mechanics, and mathematical methods in physics (phy2...
10 Subjects: including algebra 2, physics, SAT math, calculus | {"url":"http://www.purplemath.com/Tempe_algebra_2_tutors.php","timestamp":"2014-04-18T13:32:43Z","content_type":null,"content_length":"23749","record_id":"<urn:uuid:c75e38d5-e5de-498f-9f0d-36c81877066f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00345-ip-10-147-4-33.ec2.internal.warc.gz"} |
e trellis complexity
Results 1 - 10 of 11
- IEEE Trans. Inform. Theory , 1998
"... Abstract — We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by
a channel code and the encoded data is split into � streams that are simultaneously transmitted using � tr ..."
Cited by 1225 (25 self)
Add to MetaCart
Abstract — We consider the design of channel codes for improving the data rate and/or the reliability of communications over fading channels using multiple transmit antennas. Data is encoded by a
channel code and the encoded data is split into � streams that are simultaneously transmitted using � transmit antennas. The received signal at each receive antenna is a linear superposition of the �
transmitted signals perturbed by noise. We derive performance criteria for designing such codes under the assumption that the fading is slow and frequency nonselective. Performance is shown to be
determined by matrices constructed from pairs of distinct code sequences. The minimum rank among these matrices quantifies the diversity gain, while the minimum determinant of these matrices
quantifies the coding gain. The results are then extended to fast fading channels. The design criteria are used to design trellis codes for high data rate wireless communication. The encoding/
decoding complexity of these codes is comparable to trellis codes employed in practice over Gaussian channels. The codes constructed here provide the best tradeoff between data rate, diversity
advantage, and trellis complexity. Simulation results are provided for 4 and 8 PSK signal sets with data rates of 2 and 3 bits/symbol, demonstrating excellent performance that is within 2–3 dB of the
outage capacity for these channels using only 64 state encoders.
, 1995
"... The problem of minimizing the vertex count at a given time index in the trellis for a general (nonlinear) code is shown to be NPcomplete. Examples are provided that show that 1) the minimal
trellis for a nonlinear code may not be observable, i.e., some codewords may be represented by more than one p ..."
Cited by 56 (7 self)
Add to MetaCart
The problem of minimizing the vertex count at a given time index in the trellis for a general (nonlinear) code is shown to be NPcomplete. Examples are provided that show that 1) the minimal trellis
for a nonlinear code may not be observable, i.e., some codewords may be represented by more than one path through the trellis and 2) minimizing the vertex count at one time index may be incompatible
with minimizing the vertex count at another time index. A trellis product is defined and used to construct trellises for sum codes. Minimal trellises for linear codes are obtained by forming the
product of elementary trellises corresponding to the one-dimensional subcodes generated by atomic codewords. The structure of the resulting trellis is determined solely by the spans of the atomic
codewords. A correspondence between minimal linear block code trellises and configurations of non-attacking rooks on a triangular chess board is established and used to show that the number of
distinct minimal li...
, 1997
"... We start with an overview of algorithmiccomplexity problems in coding theory We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the
corresponding decision problem is NP-complete. This constitutes a proof of the conjecture Bedekamp, McEliece, van T ..."
Cited by 34 (2 self)
Add to MetaCart
We start with an overview of algorithmiccomplexity problems in coding theory We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding
decision problem is NP-complete. This constitutes a proof of the conjecture Bedekamp, McEliece, van Tilborg, dating back to 1978. Extensions and applications of this result to other problems in
coding theory are discussed.
- IEEE Trans. Inform. Theory , 1995
"... In this paper, we present a new soft-decision decoding algorithm for Reed-Muller codes. It is based on the GMC decoding algorithm proposed by Schnabl and Bossert [1] which interprets Reed-Muller
codes as generalized multiple concatenated codes. We extend the GMC algorithm to list-decoding (L-GMC). A ..."
Cited by 13 (1 self)
Add to MetaCart
In this paper, we present a new soft-decision decoding algorithm for Reed-Muller codes. It is based on the GMC decoding algorithm proposed by Schnabl and Bossert [1] which interprets Reed-Muller
codes as generalized multiple concatenated codes. We extend the GMC algorithm to list-decoding (L-GMC). As a result, a SDML decoding algorithm for the first order Reed-Muller codes is obtained.
Moreover, the performance achieved with L-GMC for Reed-Muller codes of higher order is considerably better compared to GMC. In particular, for the Reed-Muller codes of length ¢¡¤ £ , quasi SDML
decoding performance is obtained at a computational complexity that is by far less than optimum decoding using the syndrome trellis [2]. Simulations will also show that for Reed-Muller codes up to a
length 1024, the performance of L-GMC decoding is more than 1dB superior to conventional GMC decoding. 1
- IEEE Trans. Inform. Theory , 1996
"... ..."
, 1996
"... This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a
constraint on the structural complexity of the trellis in terms of the maximum number of states at any parti ..."
Cited by 3 (0 self)
Add to MetaCart
This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a
constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An
upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of
the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (R.M) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC
implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state
connec-tivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2)
complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5)
, 1996
"... We consider various computational techniques in algebraic coding theory along two lines of work. First we investigate optimization of non-linear codes by relaxing minimum distance constraints,
developing, in the process, two algorithms for improving a given non-linear code and a method of visualizin ..."
Cited by 3 (1 self)
Add to MetaCart
We consider various computational techniques in algebraic coding theory along two lines of work. First we investigate optimization of non-linear codes by relaxing minimum distance constraints,
developing, in the process, two algorithms for improving a given non-linear code and a method of visualizing algebraic codes in three dimensions. Secondly, we study the Generalized Lexicographic
Construction, and show that it produces as special cases the lexicodes and derivatives with properties such as trellis-orientation, trellis-state boundedness, and local optimality. We implement
algorithms for generating these families of codes and, in the process, improve upon work by Conway and Sloane, Brualdi and Pless, Kschischang and Horn, and Zhang.
- IEEE Transactions on Computers , 1999
"... Ordered binary decision diagrams (OBDDs) are graph-based data structures for representing Boolean functions. They have found widespread use in computer-aided design and in formal verification of
digital circuits. Minimal trellises are graphical representations of error-correcting codes that play a p ..."
Cited by 2 (1 self)
Add to MetaCart
Ordered binary decision diagrams (OBDDs) are graph-based data structures for representing Boolean functions. They have found widespread use in computer-aided design and in formal verification of
digital circuits. Minimal trellises are graphical representations of error-correcting codes that play a prominent role in coding theory. This paper establishes a close connection between these two
graphical models, as follows. Let C be a binary code of length n, and let f C (x 1 ; : : : ; x n ) be the Boolean function that takes the value 0 at x 1 ; : : : ; x n if and only if (x 1 ; : : : ; x
n ) 2 C . Given this natural oneto -one correspondence between Boolean functions and binary codes, we prove that the minimal proper trellis for a code C with minimum distance d ? 1 is isomorphic to
the single-terminal OBDD for its Boolean indicator function f C (x 1 ; : : : ; x n ). Prior to this result, the extensive research during the past decade on binary decision diagrams -- in computer
engineering -...
, 1996
"... This paper continues the work of Lafourcade and Vardy [18], tabulated on ..."
, 805
"... ABSTRACT. A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear
“local constraint ” codes to be associated with the edges and vertices, respectively, of the graph. The κ-co ..."
Add to MetaCart
ABSTRACT. A graphical realization of a linear code C consists of an assignment of the coordinates of C to the vertices of a graph, along with a specification of linear state spaces and linear “local
constraint ” codes to be associated with the edges and vertices, respectively, of the graph. The κ-complexity of a graphical realization is defined to be the largest dimension of any of its local
constraint codes. κ-complexity is a reasonable measure of the computational complexity of a sumproduct decoding algorithm specified by a graphical realization. The main focus of this paper is on the
following problem: given a linear code C and a graph G, how small can the κ-complexity of a realization of C on G be? As useful tools for attacking this problem, we introduce the Vertex-Cut Bound,
and the notion of “vc-treewidth ” for a graph, which is closely related to the well-known graphtheoretic notion of treewidth. Using these tools, we derive tight lower bounds on the κ-complexity of
any realization of C on G. Our bounds enable us to conclude that good error-correcting codes can have low-complexity realizations only on graphs with large vc-treewidth. Along the way, we also prove
the interesting result that the ratio of the κ-complexity of the best conventional trellis realization of a length-n code C to the κ-complexity of the best cycle-free realization of C grows at most
logarithmically with codelength n. Such a logarithmic growth rate is, in fact, achievable. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=234393","timestamp":"2014-04-16T11:36:48Z","content_type":null,"content_length":"37630","record_id":"<urn:uuid:eacba467-9278-43a5-aaec-fd02436a4fce>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
divide by zero
Archive of Mr Excel Message Board
Back to Forms in Excel VBA archive index
Back to archive home
divide by zero
Posted by gordon on July 30, 2001 5:04 PM
How do I trick the computer into dividing by zero. I need the zero because it has a value in my program.
I tried to replace #DIV/0! with 0 using the find and replace but Excel won't let me.
Check out our Excel Resources
Re: divide by zero - use IF & ISERROR
Posted by anno on July 30, 2001 5:20 PM
Not sure if this is what you want, but a quick and dirty way around this is to replace the #DIV/0! with a zero by using ISERROR(). For example, if the operation you have in cell A1 returns #DIV/0!,
change the contents of A1 to "=IF(ISERROR(*),0)", where * is the operation you currently have in A1.
If you have a whole bunch of these to change in a non contiguous range this method will be a bit tedious, but like i said it's a quick and dirty way. over to the experts.
Re: divide by zero
Posted by Aladin Akyurek on July 30, 2001 5:21 PM
You can't as far as I know.
:I need the zero because it has a value in my program.
Look at the formulas that return #DIV/0 and modify them to return 0. For example,
=IF(B1, A1/B1,0)
This will return 0 instead #DIV/0 when B1=0.
This archive is from the original message board at www.MrExcel.com.
All contents © 1998-2004 MrExcel.com.
Visit our
online store
to buy searchable CD's with thousands of VBA and Excel answers.
Microsoft Excel is a registered trademark of the Microsoft Corporation.
MrExcel is a registered trademark of Tickling Keys, Inc. | {"url":"http://www.mrexcel.com/archive/Formulas/25027.html","timestamp":"2014-04-19T06:52:51Z","content_type":null,"content_length":"4118","record_id":"<urn:uuid:c49764e8-5d7a-4396-9408-fca91119343c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compressed Fiberglass R-Value
How many of you have heard the statement, "compressing fiberglass insulation reduces it's r-value"? I'm going to assume everyone has, myself included. Well, the statement is correct, but the
interpretation or application of it often leads to a wrong conclusion, for example: Stuffing extra fiberglass insulation into a wall cavity will reduce the walls r-value. That statement is most often
wrong, here's why.
I found it confusing that compressing fiberglass insulation would reduce it's r-value, yet we could buy high density fiberglass at a higher r-value. So what is the difference between high density
fiberglass and regular density that we compress into a tighter space? It turns out to be, not as much as we thought. The distinction is between the r-value of the fiberglass batt and its r-value per
inch. Take for example, r-19 batts. They are nominally 6 1/4 " in thickness, thus an r-value of 3.04 per inch. Now, that's not the r-value our books give us, but it is what the numbers say and what
the pink panther says. Now, install it in the normal 5 1/2" cavity and what do you get, 3.04 X 5 1/2" = 16.72, no. The panther says we get r-18, so what happened to the r per inch. It went up to
So compressing the batts increased their r per inch, while reducing the total number of inches, resulting in a lower total r-value, and although the total went down, we can see that a limited amount
of compression is not necessarily a bad thing. Where the turning point is, I haven't determined and obviously we can't squash the fiberglass flat and expect it to continue to increase its r per inch.
But a little stuffing here and a little there can be a good thing.
Here is the panther's web page with the numbers I used and it does show that their high density for a a 5.5 inch cavity is actually R-21. Whether that's accomplished because they know how to increase
the density better or because we need to add more insulation than the 6.25 in the example I started with, I don't know.
I however do know I now understand the original statement better, I just hope my explanation helped you as well. Corrections welcome if I am wrong. | {"url":"http://energyauditortalk.org/index.php?topic=248.0","timestamp":"2014-04-19T01:48:32Z","content_type":null,"content_length":"57103","record_id":"<urn:uuid:a6f4ee95-d204-46ed-8c7e-4143c1d8a856>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonlinear Pharmacokinetics Ppt Presentation
PowerPoint Presentation:
In some cases, the rate process of drug’s ADME are dependent upon carrier or enzymes that are substrate-specific, have definite capacities & susceptible to saturation at high drug concentration. Such
pharmacokinetics are said to be DOSE DEPENDENT, MIXED ORDER, NONLINEAR, & CAPACITY LIMITED KINETICS. A number of drugs demonstrate saturation or capacity- limitedmetabolism in humans. Ex. glycine
conjugation of salicylate . 2
PowerPoint Presentation:
Drugs that demonstrate saturation kinetics usually show the following characteristics. Elimination of drug does not follow simple first-order kinetics—that is, elimination kinetics are nonlinear. The
elimination half-life changes as dose is increased. Usually, the elimination half-life increases with increased dose due to saturation of an enzyme system. However, the elimination half-life might
decrease due to "self"-induction of liver biotransformation enzymes, as is observed for carbamazepine . The area under the curve (AUC) is not proportional to the amount of bioavailable drug. The
saturation of capacity-limited processes may be affected by other drugs that require the same enzyme or carrier-mediated system ( i.e , competition effects). The composition and/or ratio of the
metabolites of a drug may be affected by a change in the dose. 3
conti … DETECTION OF NONLINEARITY:- Two simple tests for detection of nonlinearity Determination of steady state plasma conc. at different doses. Determination of some of the important
pharmacokinetic parameters such as fraction bioavailable, elimination half life, or total systemic clearance at different doses of the drug. CAUSES OF NONLINEARITY: Drug absorption Drug distribution
Drug metabolism Drug excretion 4
conti … DRUG ABSORPTION : Sources of nonlinearity: When absorption is solubility or dissolution rate limited. Ex:- Griseofulvin . Intestinal metabolism. Ex:- Salicylamide , propranolol Drugs with low
solubility in GI but relatively high dose Ex:- Chorothiazide , griseofulvin , danazol Saturable gastric or GI decomposition Ex:- Penicillin G, omeprazole , saquinavir Saturable transport in gut wall
Ex:- Riboflavin, gebapentin , L-dopa, baclofen , ceftibuten When absorption involves carrier-mediated transport systems. Ex:- Riboflavin, Ascorbic acid, Cyanocobalamin . 5
conti … When presystemic gut wall or hepatic metabolism attains saturation Ex: Proprenolol, Hydralazine & Verapamil . Other causes include change in gastric emptying, GI blood flow & other
physiological factors. 6
conti … DRUG DISTRIBUTION: Sources of nonlinearity in drug distribution: Saturation of binding sites on plasma proteins. Ex:- Phenylbutazone & naproxen, lidocaine , salicylic acid, ceftriaxone ,
diazoxide , phenytoin , warfarin . Saturation of tissue binding sites. Ex:- thiopental & Fentanyl ., disopyramide Cellular uptake Ex:- Methicillin (rabbit) Tissue binding Ex:- Imiprimine (rat) CSF
transport Ex:- Benzylpenicillins Saturable transport into or out of tissues Ex:- Methotrexate Clearance is also altered depending upon extraction ratio of drug. 7
conti … DRUG METABOLISM : Causes of nonlinearity in metabolism are: Capacity limited metabolism due to enzyme & cofactor saturation. Ex:- Phenytoin, alcohol, theophylline. Enzyme induction. Ex:-
Carbamazepine, where a decrease in peak plasma concentration has been observed on repetitive administration. Saturable metabolism Ex:- Phenytoin , salicyclic acid, theophylline , valproic acid.
Cofactor or enzyme limitation Ex:- Acetaminophen, Altered hepatic blood flow Ex:- Propranolol , verapamil Metabolite inhibition Diazepam. Other causes, saturation of binding sites & pathological
situation such as hepatotoxicity . 8
conti … DRUG EXCRETION: Two active saturable processes are: Active tubular secretion. Ex:- Penicillin G. Active tubular reabsorption . Ex:- Water soluble vitamins & glucose. Biliary secretion Ex:-
Iodipamide , sulfobromophthalein sodium. Enterohepatic recycling Ex:- Cimetidine , isotretinoin . Other causes include forced diuresis, change in pH, nephrotoxicity & saturation of binding sites.
Kinetic of capacity-limited or saturable processes is best described by Michaelis-menten equation. 9
MICHAELIS MENTEN EQUATION: 10 Where, dCp / dt is rate of decline of drug concentration with time. C p is the concentration of drug in the plasma V max is the maximum elimination rate K M is the
Michaelis constant
conti … Three situation can be considered depending upon values of K M & C When K M = C :- under this condition, the equation reduces to i.e. rate process is equal to one half its maximum rate. 11 a
PLOT of michaelis-menten equation( elimination rate versus concentration).initially ,the rate increases linearly with concentration & then reaches maximum beyond which it proceeds at a constant rate
PowerPoint Presentation:
12 A plot of michaelis menten equation ( elimination rate dc/ dt vs. C ). Initially the rate increase linerly (first order) with concentration become mixed order at higher concentration. And then
reaches maximum ( Vmax ) beyond which it proceeds at constant rate( zero order).
conti … When K M >> C :- C p in the denominator is negligible , The above equation is identical to the one that describes first order elimination of a drug where Vmax /K M =K’. 13
conti … When C p >> K M saturation of the enzymes occurs and the value for K M is negligible. This is identical to one that describes a zero order process i.e. the rate process occurs at a constant
rate Vmax & is independent of drug concentration e.g. metabolism of ethanol. 14
conti … Estimation of Km & Vmax : This parameters can be assessed from plasma concentration-time data collected after i.v. bolus administration of a drug with nonlinear elimination characteristics.
Integration of Michaelis Menten equation given below, 15
conti … Integration followed by conversion to log base 10 yields : log Cp= log C 0 p + ( C 0 p -C) / 2.303 K M – Vmax / 2.303 K M A semi log plot of Cp vs. t yields a curve with a terminal linear
portion having Slope of – Vmax /2.303 K M & when back extrapolated to time zero gives y-intercept log Co extrapolated. The equation describes this line is : At low plasma conc., both above equations
are identical. 16
conti … Equating both the 2 & simplifying, we get : K M thus can be obtained from above equation. Vmax can be obtained by substituting the value of K M in the slope value. 17
conti … An alternating approach of estimating Vmax & K M is determining rate of change of plasma drug conc. at different times & using the reciprocal of Michaelis Menten Equation : where,V = dCp / dt
, C= plasma drug conc. At midpoint of the sampling interval. This is known as double reciprocal plot or Lineweaver-Burke plot of 1/ dCp / dt vs. 1/C which yields a straight line with slope= K M /Vmax
& y-intercept = 1/Vmax. 18
conti … A disadvantage of Lineweaver-Burke plot is that the points are clustered . More reliable plots in which points are uniformly scattered are Hanes-Woolf plot (equation 1) & Woolf- Augustinsson
- Hofstee plot ( equation 2) …..equation 1 ……equation 2 19
conti … K M & Vmax from steady state concentration: W hen a drug is administered as a constant rate i.v. infusion or in a multiple dose regimen, the steady state concentration Css is given in terms
of dosing rate DR as: DR= C ss Cl T where, DR=R 0 when drug is administered as zero order i.v. infusion At steady state, dosing rate equals rate of decline(elimination b’coz of metabolism) in plasma
drug conc.& if decline is due to a single capacity limited process, then; R = dose/day or dosing rate(DR) 20
conti … Graphical computation of K M & Vmax: Lineweaver-Burke plot / Klotz plot: A plot of 1/R vs. 1/Css yields a straight line with slope K M / Vmax & y-intercept 1/Vmax. 21
conti … 22 LINEWEAVER-BURKE/KLOTZ PLOT
conti … Direct linear plot: Here, a pair of Css i.e. Css1 & Css2 obtained with two different dosing rates DR1 & DR2 is plotted. The points Css1 &DR1 are joined to form a line and a second line is
obtained similarly by joining Css2 & DR2. The points where these two lines intersect each other is extrapolated on DR axis to obtain Vmax & on x-axis to get K M . 23
conti … The third graphical method: A plot of R vs. R/ Css yields a straight line with slope –K M & y-intercept Vmax. 24
conti … There are certain limitation of K M & Vmax estimated by assuming one-compartment system & a single capacity-limited process. More complex equations will results & the computed K M & Vmax will
usually be larger when : The drug is eliminated by more than one capacity limited process. The drug exhibits parallel capacity limited and first-order elimination process. The drug follows multi
compartment kinetics. 25
First order kinetics:
First order kinetics 26
PowerPoint Presentation:
First-Order Reactions If the amount of drug A is decreasing at a rate that is proportional to the amount of drug A remaining, then the rate of disappearance of drug A is expressed as ……….equation 1
where k is the first-order rate constant and is expressed in units of time – 1 From the equation it is clear that a first order process is the one whose rate is directly proportional to the
concentration of drug undergoing reaction. i.e. greater the concentration faster the reaction. 27
PowerPoint Presentation:
It is because of such proportionality between rate of reaction of drug that first order process is said to be followed linear kinetics. ln A = - kt + ln A O ……….equation 2 Integration of Equation
yields the following expression: ……….equation 3 Since equation 3 has only one exponent the first order process is also called as monoexponantial rate process. Thus, first order process is
characterized by logarithmic or exponantial kinetics. i.e. a constant fraction of the drug undergoes reaction per unit time. 28
PowerPoint Presentation:
Because ln = 2.3 log, Equation becomes ……….equation 4 When drug decomposition involves a solution, starting with initial concentration C 0 , it is often convenient to express the rate of change in
drug decomposition, dC / dt , in terms of drug concentration, C , rather than amount because drug concentration is assayed. 29
PowerPoint Presentation:
Hence, ……….equation 5 ……….equation 6 Above equation may be expressed as ……….equation 7 Because ln = 2.3 log, Equation becomes ……….equation 8 30
PowerPoint Presentation:
According to Equation 4 ,A graph of log A versus t will yield a straight line , the y intercept will be log A 0 , and the slope of the line will be – k /2.3 . Similarly, a graph of log C versus t
will yield a straight line according to Equation 8. The y intercept will be log C 0 , and the slope of the line will be – k /2.3. For convenience, C versus t may be plotted on semilog paper without
the need to convert C to log C . An example is shown in . 31
PowerPoint Presentation:
This graph demonstrates the constancy of the t 1/2 in a first-order reaction. 32
PowerPoint Presentation:
Half-Life Half-life ( t 1/2 ) expresses the period of time required for the amount or concentration of a drug to decrease by one-half. First-Order Half-Life The t 1/2 for a first-order reaction may
be found by means of the following equation: It is apparent from this equation that, for a first-order reaction, t 1/2 is a constant. No matter what the initial amount or concentration of drug is,
the time required for the amount to decrease by one-half is a constant. 33
references “SHARGEL” L. , “WU-PONG” S. , “YU ANDREW” B. C. , NONLINEAR PHARMACOKINETICS, APPLIED BIOPHARMACEUTICS & PHARMACOKINETICS , FIFTH EDITION, THE MCGRAW-HILL COMPANIES, 219-248. “BRAHMANKAR”
D.M. , “JAISWAL” S. B. , NONLINEAR PHARMACOKINETICS, BIOPHARMACEUTICS AND PHARMACOKINETICS-A TREATIES, SECOND EDITION-2009, VALLABH PRAKASHAN , 316-317. “GIBALDI” M., “PERRIER” D., NONLINEAR
PHARMACOKINETICS, PHARMACOKINETICS , SECOND EDITION, REVISED & EXPANDED, INFORMA HEALTHCARE, 271-314. 34
PowerPoint Presentation:
PowerPoint Presentation:
THANK YOU 36 | {"url":"http://www.authorstream.com/Presentation/chandni.dave-1547210-nonlinear-pharmacokinetics/","timestamp":"2014-04-21T12:33:57Z","content_type":null,"content_length":"141186","record_id":"<urn:uuid:6d2e11ee-10c7-4a80-ae1c-5ef23f8ba47c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/jishan/answered/1","timestamp":"2014-04-19T10:17:35Z","content_type":null,"content_length":"136369","record_id":"<urn:uuid:538dc421-6b32-4ddf-a7aa-462042e2cabf>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basics of Space Flight: Orbital Mechanics
Orbital mechanics, also called flight mechanics, is the study of the motions of artificial satellites and space vehicles moving under the influence of forces such as gravity, atmospheric drag,
thrust, etc. Orbital mechanics is a modern offshoot of celestial mechanics which is the study of the motions of natural celestial bodies such as the moon and planets. The root of orbital mechanics
can be traced back to the 17th century when mathematician Isaac Newton (1642-1727) put forward his laws of motion and formulated his law of universal gravitation. The engineering applications of
orbital mechanics include ascent trajectories, reentry and landing, rendezvous computations, and lunar and interplanetary trajectories.
conic section, or just conic, is a curve formed by passing a plane through a right circular cone. As shown in Figure 4.1, the angular orientation of the plane relative to the cone determines whether
the conic section is a circle, ellipse, parabola, or hyerbola. The circle and the ellipse arise when the intersection of cone and plane is a bounded curve. The circle is a special case of the ellipse
in which the plane is perpendicular to the axis of the cone. If the plane is parallel to a generator line of the cone, the conic is called a parabola. Finally, if the intersection is an unbounded
curve and the plane is not parallel to a generator line of the cone, the figure is a hyperbola. In the latter case the plane will intersect both halves of the cone, producing two separate curves.
We can define all conic sections in terms of the eccentricity. The type of conic section is also related to the semi-major axis and the energy. The table below shows the relationships between
eccentricity, semi-major axis, and energy and the type of conic section.
│ Conic Section │ Eccentricity, e │ Semi-major axis │ Energy │
│ Circle │ 0 │ = radius │ < 0 │
│ Ellipse │ 0 < e < 1 │ > 0 │ < 0 │
│ Parabola │ 1 │ infinity │ 0 │
│ Hyperbola │ > 1 │ < 0 │ > 0 │
Satellite orbits can be any of the four conic sections. This page deals mostly with elliptical orbits, though we conclude with an examination of the hyperbolic orbit.
To mathematically describe an orbit one must define six quantities, called orbital elements. They are
● Semi-Major Axis, a
● Eccentricity, e
● Inclination, i
● Argument of Periapsis,
● Time of Periapsis Passage, T
● Longitude of Ascending Node,
semi-major axis is one-half of the major axis and represents a satellite's mean distance from its primary. Eccentricity is the distance between the foci divided by the length of the major axis and is
a number between zero and one. An eccentricity of zero indicates a circle.
Inclination is the angular distance between a satellite's orbital plane and the equator of its primary (or the ecliptic plane in the case of heliocentric, or sun centered, orbits). An inclination of
zero degrees indicates an orbit about the primary's equator in the same direction as the primary's rotation, a direction called prograde (or direct). An inclination of 90 degrees indicates a polar
orbit. An inclination of 180 degrees indicates a retrograde equatorial orbit. A retrograde orbit is one in which a satellite moves in a direction opposite to the rotation of its primary.
Periapsis is the point in an orbit closest to the primary. The opposite of periapsis, the farthest point in an orbit, is called apoapsis. Periapsis and apoapsis are usually modified to apply to the
body being orbited, such as perihelion and aphelion for the Sun, perigee and apogee for Earth, perijove and apojove for Jupiter, perilune and apolune for the Moon, etc. The argument of periapsis is
the angular distance between the ascending node and the point of periapsis (see Figure 4.3). The time of periapsis passage is the time in which a satellite moves through its point of periapsis.
Nodes are the points where an orbit crosses a plane, such as a satellite crossing the Earth's equatorial plane. If the satellite crosses the plane going from south to north, the node is the ascending
node; if moving from north to south, it is the descending node. The longitude of the ascending node is the node's celestial longitude. Celestial longitude is analogous to longitude on Earth and is
measured in degrees counter-clockwise from zero with zero longitude being in the direction of the vernal equinox.
In general, three observations of an object in orbit are required to calculate the six orbital elements. Two other quantities often used to describe orbits are period and true anomaly. Period, P, is
the length of time required for a satellite to complete one orbit. True anomaly,
For a spacecraft to achieve Earth orbit, it must be launched to an elevation above the Earth's atmosphere and accelerated to orbital velocity. The most energy efficient orbit, that is one that
requires the least amount of propellant, is a direct low inclination orbit. To achieve such an orbit, a spacecraft is launched in an eastward direction from a site near the Earth's equator. The
advantage being that the rotational speed of the Earth contributes to the spacecraft's final orbital speed. At the United States' launch site in Cape Canaveral (28.5 degrees north latitude) a due
east launch results in a "free ride" of 1,471 km/h (914 mph). Launching a spacecraft in a direction other than east, or from a site far from the equator, results in an orbit of higher inclination.
High inclination orbits are less able to take advantage of the initial speed provided by the Earth's rotation, thus the launch vehicle must provide a greater part, or all, of the energy required to
attain orbital velocity. Although high inclination orbits are less energy efficient, they do have advantages over equatorial orbits for certain applications. Below we describe several types of orbits
and the advantages of each:
Geosynchronous orbits (GEO) are circular orbits around the Earth having a period of 24 hours. A geosynchronous orbit with an inclination of zero degrees is called a geostationary orbit. A spacecraft
in a geostationary orbit appears to hang motionless above one position on the Earth's equator. For this reason, they are ideal for some types of communication and meteorological satellites. A
spacecraft in an inclined geosynchronous orbit will appear to follow a regular figure-8 pattern in the sky once every orbit. To attain geosynchronous orbit, a spacecraft is first launched into an
elliptical orbit with an apogee of 35,786 km (22,236 miles) called a geosynchronous transfer orbit (GTO). The orbit is then circularized by firing the spacecraft's engine at apogee.
Polar orbits (PO) are orbits with an inclination of 90 degrees. Polar orbits are useful for satellites that carry out mapping and/or surveillance operations because as the planet rotates the
spacecraft has access to virtually every point on the planet's surface.
Walking orbits: An orbiting satellite is subjected to a great many gravitational influences. First, planets are not perfectly spherical and they have slightly uneven mass distribution. These
fluctuations have an effect on a spacecraft's trajectory. Also, the sun, moon, and planets contribute a gravitational influence on an orbiting satellite. With proper planning it is possible to design
an orbit which takes advantage of these influences to induce a precession in the satellite's orbital plane. The resulting orbit is called a walking orbit, or precessing orbit.
Sun synchronous orbits (SSO) are walking orbits whose orbital plane precesses with the same period as the planet's solar orbit period. In such an orbit, a satellite crosses periapsis at about the
same local time every orbit. This is useful if a satellite is carrying instruments which depend on a certain angle of solar illumination on the planet's surface. In order to maintain an exact
synchronous timing, it may be necessary to conduct occasional propulsive maneuvers to adjust the orbit.
Molniya orbits are highly eccentric Earth orbits with periods of approximately 12 hours (2 revolutions per day). The orbital inclination is chosen so the rate of change of perigee is zero, thus both
apogee and perigee can be maintained over fixed latitudes. This condition occurs at inclinations of 63.4 degrees and 116.6 degrees. For these orbits the argument of perigee is typically placed in the
southern hemisphere, so the satellite remains above the northern hemisphere near apogee for approximately 11 hours per orbit. This orientation can provide good ground coverage at high northern
Hohmann transfer orbits are interplanetary trajectories whose advantage is that they consume the least possible amount of propellant. A Hohmann transfer orbit to an outer planet, such as Mars, is
achieved by launching a spacecraft and accelerating it in the direction of Earth's revolution around the sun until it breaks free of the Earth's gravity and reaches a velocity which places it in a
sun orbit with an aphelion equal to the orbit of the outer planet. Upon reaching its destination, the spacecraft must decelerate so that the planet's gravity can capture it into a planetary orbit.
To send a spacecraft to an inner planet, such as Venus, the spacecraft is launched and accelerated in the direction opposite of Earth's revolution around the sun (i.e. decelerated) until it achieves
a sun orbit with a perihelion equal to the orbit of the inner planet. It should be noted that the spacecraft continues to move in the same direction as Earth, only more slowly.
To reach a planet requires that the spacecraft be inserted into an interplanetary trajectory at the correct time so that the spacecraft arrives at the planet's orbit when the planet will be at the
point where the spacecraft will intercept it. This task is comparable to a quarterback "leading" his receiver so that the football and receiver arrive at the same point at the same time. The interval
of time in which a spacecraft must be launched in order to complete its mission is called a launch window.
Newton's Laws of Motion and Universal Gravitation
Newton's laws of motion describe the relationship between the motion of a particle and the forces acting on it.
The first law states that if no forces are acting, a body at rest will remain at rest, and a body in motion will remain in motion in a straight line. Thus, if no forces are acting, the velocity (both
magnitude and direction) will remain constant.
The second law tells us that if a force is applied there will be a change in velocity, i.e. an acceleration, proportional to the magnitude of the force and in the direction in which the force is
applied. This law may be summarized by the equation
where F is the force, m is the mass of the particle, and a is the acceleration.
The third law states that if body 1 exerts a force on body 2, then body 2 will exert a force of equal strength, but opposite in direction, on body 1. This law is commonly stated, "for every action
there is an equal and opposite reaction".
In his law of universal gravitation, Newton states that two particles having masses m[1] and m[2] and separated by a distance r are attracted to each other with equal and opposite forces directed
along the line joining the particles. The common magnitude F of the two forces is
where G is an universal constant, called the constant of gravitation, and has the value 6.67259x10^-11 N-m^2/kg^2 (3.4389x10^-8 lb-ft^2/slug^2).
Let's now look at the force that the Earth exerts on an object. If the object has a mass m, and the Earth has mass M, and the object's distance from the center of the Earth is r, then the force that
the Earth exerts on the object is GmM /r^2 . If we drop the object, the Earth's gravity will cause it to accelerate toward the center of the Earth. By Newton's second law (F = ma), this acceleration
g must equal (GmM /r^2)/m, or
At the surface of the Earth this acceleration has the valve 9.80665 m/s^2 (32.174 ft/s^2).
Many of the upcoming computations will be somewhat simplified if we express the product GM as a constant, which for Earth has the value 3.986005x10^14 m^3/s^2 (1.408x10^16 ft^3/s^2). The product GM
is often represented by the Greek letter
For additional useful constants please see the appendix Basic Constants.
For a refresher on SI versus U.S. units see the appendix Weights & Measures.
In the simple case of free fall, a particle accelerates toward the center of the Earth while moving in a straight line. The velocity of the particle changes in magnitude, but not in direction. In the
case of uniform circular motion a particle moves in a circle with constant speed. The velocity of the particle changes continuously in direction, but not in magnitude. From Newton's laws we see that
since the direction of the velocity is changing, there is an acceleration. This acceleration, called centripetal acceleration is directed inward toward the center of the circle and is given by
where v is the speed of the particle and r is the radius of the circle. Every accelerating particle must have a force acting on it, defined by Newton's second law (F = ma). Thus, a particle
undergoing uniform circular motion is under the influence of a force, called centripetal force, whose magnitude is given by
The direction of F at any instant must be in the direction of a at the same instant, that is radially inward.
A satellite in orbit is acted on only by the forces of gravity. The inward acceleration which causes the satellite to move in a circular orbit is the gravitational acceleration caused by the body
around which the satellite orbits. Hence, the satellite's centripetal acceleration is g, that is g = v^2/r. From Newton's law of universal gravitation we know that g = GM /r^2. Therefore, by setting
these equations equal to one another we find that, for a circular orbit,
Click here for example problem #4.1
(use your browser's "back" function to return)
Motions of Planets and Satellites
Through a lifelong study of the motions of bodies in the solar system, Johannes Kepler (1571-1630) was able to derive three basic laws known as Kepler's laws of planetary motion. Using the data
compiled by his mentor Tycho Brahe (1546-1601), Kepler found the following regularities after years of laborious calculations:
1.  All planets move in elliptical orbits with the sun at one focus.
2.  A line joining any planet to the sun sweeps out equal areas in equal times.
3.  The square of the period of any planet about the sun is proportional to the cube of the planet's mean distance from the sun.
These laws can be deduced from Newton's laws of motion and law of universal gravitation. Indeed, Newton used Kepler's work as basic information in the formulation of his gravitational theory.
As Kepler pointed out, all planets move in elliptical orbits, however, we can learn much about planetary motion by considering the special case of circular orbits. We shall neglect the forces between
planets, considering only a planet's interaction with the sun. These considerations apply equally well to the motion of a satellite about a planet.
M and m moving in circular orbits under the influence of each other's gravitational attraction. The center of mass of this system of two bodies lies along the line joining them at a point C such that
mr = MR. The large body of mass M moves in an orbit of constant radius R and the small body of mass m in an orbit of constant radius r, both having the same angular velocity m^2r must equal M^2R. The
specific requirement, then, is that the gravitational force acting on either body must equal the centripetal force needed to keep it moving in its circular orbit, that is
If one body has a much greater mass than the other, as is the case of the sun and a planet or the Earth and a satellite, its distance from the center of mass is much smaller than that of the other
body. If we assume that m is negligible compared to M, then R is negligible compared to r. Thus, equation (4.7) then becomes
If we express the angular velocity in terms of the period of revolution, , we obtain
where P is the period of revolution. This is a basic equation of planetary and satellite motion. It also holds for elliptical orbits if we define r to be the semi-major axis (a) of the orbit.
A significant consequence of this equation is that it predicts Kepler's third law of planetary motion, that is P^2~r^3.
Click here for example problem #4.2
Click here for example problem #4.3
│ In celestial mechanics where we are dealing with planetary or stellar sized bodies, it is often the case that the mass of the secondary body is significant in relation to the mass of the primary, │
│ as with the Moon and Earth. In this case the size of the secondary cannot be ignored. The distance R is no longer negligible compared to r and, therefore, must be carried through the derivation. │
│ Equation (4.9) becomes │
│ │
│ More commonly the equation is written in the equivalent form │
│ │
│ where a is the semi-major axis. The semi-major axis used in astronomy is always the primary-to-secondary distance, or the geocentric semi-major axis. For example, the Moon's mean geocentric │
│ distance from Earth (a) is 384,403 kilometers. On the other hand, the Moon's distance from the barycenter (r) is 379,732 km, with Earth's counter-orbit (R) taking up the difference of 4,671 km. │
Kepler's second law of planetary motion must, of course, hold true for circular orbits. In such orbits both r are constant so that equal areas are swept out in equal times by the line joining a
planet and the sun. For elliptical orbits, however, both r will vary with time. Let's now consider this case.
C along some arbitrary path. The area swept out by the radius vector in a short time interval t is shown shaded. This area, neglecting the small triangular region at the end, is one-half the base
times the height or approximately r(r. This expression becomes more exact as approaches zero, i.e. the small triangle goes to zero more rapidly than the large one. The rate at which area is being
swept out instantaneously is therefore
For any given body moving under the influence of a central force, the value ^2 is constant.
Let's now consider two points P[1] and P[2] in an orbit with radii r[1] and r[2], and velocities v[1] and v[2]. Since the velocity is always tangent to the path, it can be seen that if r and v, then
where vsin is the transverse component of v. Multiplying through by r, we have
or, for two points P[1] and P[2] on the orbital path
Note that at periapsis and apoapsis, P[1] and P[2] be these two points we get
Let's now look at the energy of the above particle at points P[1] and P[2]. Conservation of energy states that the sum of the kinetic energy and the potential energy of a particle remains constant.
The kinetic energy T of a particle is given by mv^2/2 while the potential energy of gravity V is calculated by the equation -GMm/r. Applying conservation of energy we have
From equations (4.14) and (4.15) we obtain
Rearranging terms we get
Click here for example problem #4.4
Click here for example problem #4.5
The eccentricity e of an orbit is given by
Click here for example problem #4.6
If the semi-major axis a and the eccentricity e of an orbit are known, then the periapsis and apoapsis distances can be calculated by
Click here for example problem #4.7
A space vehicle's orbit may be determined from the position and the velocity of the vehicle at the beginning of its free flight. A vehicle's position and velocity can be described by the variables r,
v, and r is the vehicle's distance from the center of the Earth, v is its velocity, and zenith angle (see Figure 4.7). If we let  r[1], v[1], and [1] be the initial (launch) values of  r, v,
and P[2] represent the perigee, then equation (4.13) becomes
Substituting equation (4.23) into (4.15), we can obtain an equation for the perigee radius R[p].
Multiplying through by -R[p]^2/(r[1]^2v[1]^2) and rearranging, we get
Note that this is a simple quadratic equation in the ratio (R[p]/r[1]) and that 2GM /(r[1] × v[1]^2) is a nondimensional parameter of the orbit.
Solving for (R[p]/r[1]) gives
Like any quadratic, the above equation yields two answers. The smaller of the two answers corresponds to R[p], the periapsis radius. The other root corresponds to the apoapsis radius, R[a].
Please note that in practice spacecraft launches are usually terminated at either perigee or apogee, i.e.
Click here for example problem #4.8
Equation (4.26) gives the values of R[p] and R[a] from which the eccentricity of the orbit can be calculated, however, it may be simpler to calculate the eccentricity e directly from the equation
Click here for example problem #4.9
To pin down a satellite's orbit in space, we need to know the angle
Click here for example problem #4.10
flight-path angle, and is positive when the velocity vector is directed away from the primary as shown in Figure 4.8. When flight-path angle is used, equations (4.26) through (4.28) are rewritten as
The semi-major axis is, of course, equal to (R[p]+R[a])/2, though it may be easier to calculate it directly as follows:
Click here for example problem #4.11
If e is solved for directly using equation (4.27) or (4.30), and a is solved for using equation (4.32), R[p] and R[a] can be solved for simply using equations (4.21) and (4.22).
Orbit Tilt, Rotation and Orientation
Above we determined the size and shape of the orbit, but to determine the orientation of the orbit in space, we must know the latitude and longitude and the heading of the space vehicle at burnout.
Figure 4.9 above illustrates the location of a space vehicle at engine burnout, or orbit insertion. [1] and [2] are the geographical longitudes of the ascending node and the burnout point at the
instant of engine burnout. Figure 4.10 pictures the orbital elements, where i is the inclination,
If [2] are given, the other values can be calculated from the following relationships:
In equation (4.36), the value of
The longitude of the ascending node, [1] is geographical longitude. The celestial longitude of the ascending node is equal to the local apparent sidereal time, in degrees, at longitude [1] at the
time of engine burnout. Sidereal time is defined as the hour angle of the vernal equinox at a specific locality and time; it has the same value as the right ascension of any celestial body that is
crossing the local meridian at that same instant. At the moment when the vernal equinox crosses the local meridian, the local apparent sidereal time is 00:00. See this sidereal time calculator.
Click here for example problem #4.12
│ Geodetic Latitude, Geocentric Latitude, and Declination │
│ │
│ geodetic latitude (or geographical latitude), geocentric latitude, Declination, │
│ │
│ R is the magnitude of the reference ellipsoid's geocentric radius vector to the point of interest on its surface, r is the magnitude of the geocentric radius vector to the celestial object of │
│ interest, and the altitude h is the perpendicular distance from the reference ellipsoid to the celestial object of interest. The value of R at the equator is a, and the value of R at the poles is │
│ b. The ellipsoid's flattening, f, is the ratio of the equatorial-polar length difference to the equatorial length. For Earth, a equals 6,378,137 meters, b equals 6,356,752 meters, and f equals 1/ │
│ 298.257. │
│ │
│ When solving problems in orbital mechanics, the measurements of greatest usefulness are the magnitude of the radius vector, r, and declination, │
│ │
│ The relationship between geodetic and geocentric latitude is, │
│ │
│ The radius of the reference ellipsoid is given by, │
│ │
│ The length r can be solved from h, or h from r, using one of the following, │
│ │
│ And declination is calculated using, │
│ │
│ For spacecraft in low earth orbit, the difference between r ≈ R + h. │
│ │
│ It is important to note that the value of h is not always measured as described and illustrated above. In some applications it is customary to express h as the perpendicular distance from a │
│ reference sphere, rather than the reference ellipsoid. In this case, R is considered constant and is often assigned the value of Earth's equatorial radius, hence h = r – a. This is the method │
│ typically used when a spacecraft's orbit is expressed in a form such as "180 km × 220 km". The example problems presented in this web site also assume this method of measurement. │
Position in an Elliptical Orbit
Johannes Kepler was able to solve the problem of relating position in an orbit to the elapsed time, t-t[o], or conversely, how long it takes to go from one point in an orbit to another. To solve
this, Kepler introduced the quantity M, called the mean anomaly, which is the fraction of an orbit period that has elapsed since perigee. The mean anomaly equals the true anomaly for a circular
orbit. By definition,
where M[o] is the mean anomaly at time t[o] and n is the mean motion, or the average angular velocity, determined from the semi-major axis of the orbit as follows:
This solution will give the average position and velocity, but satellite orbits are elliptical with a radius constantly varying in orbit. Because the satellite's velocity depends on this varying
radius, it changes as well. To resolve this problem we can define an intermediate variable E, called the eccentric anomaly, for elliptical orbits, which is given by
For small eccentricities a good approximation of true anomaly can be obtained by the following formula (the error is of the order e^3):
The preceding five equations can be used to (1) find the time it takes to go from one position in an orbit to another, or (2) find the position in an orbit after a specific period of time. When
solving these equations it is important to work in radians rather than degrees, where 2
Click here for example problem #4.13
Click here for example problem #4.14
At any time in its orbit, the magnitude of a spacecraft's position vector, i.e. its distance from the primary body, and its flight-path angle can be calculated from the following equations:
And the spacecraft's velocity is given by,
Click here for example problem #4.15
The orbital elements discussed at the beginning of this section provide an excellent reference for describing orbits, however there are other forces acting on a satellite that perturb it away from
the nominal orbit. These perturbations, or variations in the orbital elements, can be classified based on how they affect the Keplerian elements. Secular variations represent a linear variation in
the element, short-period variations are periodic in the element with a period less than the orbital period, and long-period variations are those with a period greater than the orbital period.
Because secular variations have long-term effects on orbit prediction (the orbital elements affected continue to increase or decrease), they will be discussed here for Earth-orbiting satellites.
Precise orbit determination requires that the periodic variations be included as well.
Third-Body Perturbations
The gravitational forces of the Sun and the Moon cause periodic variations in all of the orbital elements, but only the longitude of the ascending node, argument of perigee, and mean anomaly
experience secular variations. These secular variations arise from a gyroscopic precession of the orbit about the ecliptic pole. The secular variation in mean anomaly is much smaller than the mean
motion and has little effect on the orbit, however the secular variations in longitude of the ascending node and argument of perigee are important, especially for high-altitude orbits.
For nearly circular orbits the equations for the secular rates of change resulting from the Sun and Moon are
Longitude of the ascending node:
Argument of perigee:
where i is the orbit inclination, n is the number of orbit revolutions per day, and
Click here for example problem #4.16
Perturbations due to Non-spherical Earth
When developing the two-body equations of motion, we assumed the Earth was a spherically symmetrical, homogeneous mass. In fact, the Earth is neither homogeneous nor spherical. The most dominant
features are a bulge at the equator, a slight pear shape, and flattening at the poles. For a potential function of the Earth, we can find a satellite's acceleration by taking the gradient of the
potential function. The most widely used form of the geopotential function depends on latitude and geopotential coefficients, J[n], called the zonal coefficients.
The potential generated by the non-spherical Earth causes periodic variations in all the orbital elements. The dominant effects, however, are secular variations in longitude of the ascending node and
argument of perigee because of the Earth's oblateness, represented by the J[2] term in the geopotential expansion. The rates of change of [2] are
where n is the mean motion in degrees/day, J[2] has the value 0.00108263, R[E] is the Earth's equatorial radius, a is the semi-major axis in kilometers, i is the inclination, e is the eccentricity,
and [2] perturbations dominate; for satellites above GEO the Sun and Moon perturbations dominate.
Molniya orbits are designed so that the perturbations in argument of perigee are zero. This conditions occurs when the term 4-5sin^2i is equal to zero or, that is, when the inclination is either 63.4
or 116.6 degrees.
Click here for example problem #4.17
Perturbations from Atmospheric Drag
Drag is the resistance offered by a gas or liquid to a body moving through it. A spacecraft is subjected to drag forces when moving through a planet's atmosphere. This drag is greatest during launch
and reentry, however, even a space vehicle in low Earth orbit experiences some drag as it moves through the Earth's thin upper atmosphere. In time, the action of drag on a space vehicle will cause it
to spiral back into the atmosphere, eventually to disintegrate or burn up. If a space vehicle comes within 120 to 160 km of the Earth's surface, atmospheric drag will bring it down in a few days,
with final disintegration occurring at an altitude of about 80 km. Above approximately 600 km, on the other hand, drag is so weak that orbits usually last more than 10 years - beyond a satellite's
operational lifetime. The deterioration of a spacecraft's orbit due to drag is called decay.
The drag force F[D] on a body acts in the opposite direction of the velocity vector and is given by the equation
where C[D] is the drag coefficient, v is the body's velocity, and A is the area of the body normal to the flow. The drag coefficient is dependent on the geometric form of the body and is generally
determined by experiment. Earth orbiting satellites typically have very high drag coefficients in the range of about 2 to 4. Air density is given by the appendix Atmosphere Properties.
The region above 90 km is the Earth's thermosphere where the absorption of extreme ultraviolet radiation from the Sun results in a very rapid increase in temperature with altitude. At approximately
200-250 km this temperature approaches a limiting value, the average value of which ranges between about 600 and 1,200 K over a typical solar cycle. Solar activity also has a significant affect on
atmospheric density, with high solar activity resulting in high density. Below about 150 km the density is not strongly affected by solar activity; however, at satellite altitudes in the range of 500
to 800 km, the density variations between solar maximum and solar minimum are approximately two orders of magnitude. The large variations imply that satellites will decay more rapidly during periods
of solar maxima and much more slowly during solar minima.
For circular orbits we can approximate the changes in semi-major axis, period, and velocity per revolution using the following equations:
where a is the semi-major axis, P is the orbit period, and V, A and m are the satellite's velocity, area, and mass respectively. The term m/(C[D]A), called the ballistic coefficient, is given as a
constant for most satellites. Drag effects are strongest for satellites with low ballistic coefficients, this is, light vehicles with large frontal areas.
A rough estimate of a satellite's lifetime, L, due to drag can be computed from
where H is the atmospheric density scale height. A substantially more accurate estimate (although still very approximate) can be obtained by integrating equation (4.53), taking into account the
changes in atmospheric density with both altitude and solar activity.
Click here for example problem #4.18
Perturbations from Solar Radiation
Solar radiation pressure causes periodic variations in all of the orbital elements. The magnitude of the acceleration in m/s^2 arising from solar radiation pressure is
where A is the cross-sectional area of the satellite exposed to the Sun and m is the mass of the satellite in kilograms. For satellites below 800 km altitude, acceleration from atmospheric drag is
greater than that from solar radiation pressure; above 800 km, acceleration from solar radiation pressure is greater.
At some point during the lifetime of most space vehicles or satellites, we must change one or more of the orbital elements. For example, we may need to transfer from an initial parking orbit to the
final mission orbit, rendezvous with or intercept another spacecraft, or correct the orbital elements to adjust for the perturbations discussed in the previous section. Most frequently, we must
change the orbit altitude, plane, or both. To change the orbit of a space vehicle, we have to change its velocity vector in magnitude or direction. Most propulsion systems operate for only a short
time compared to the orbital period, thus we can treat the maneuver as an impulsive change in velocity while the position remains fixed. For this reason, any maneuver changing the orbit of a space
vehicle must occur at a point where the old orbit intersects the new orbit. If the orbits do not intersect, we must use an intermediate orbit that intersects both. In this case, the total maneuver
will require at least two propulsive burns.
Orbit Altitude Changes
Hohmann transfer orbit. In this case, the transfer orbit's ellipse is tangent to both the initial and final orbits at the transfer orbit's perigee and apogee respectively. The orbits are tangential,
so the velocity vectors are collinear, and the Hohmann transfer represents the most fuel-efficient transfer between two circular, coplanar orbits. When transferring from a smaller orbit to a larger
orbit, the change in velocity is applied in the direction of motion; when transferring from a larger orbit to a smaller, the change of velocity is opposite to the direction of motion.
The total change in velocity required for the orbit transfer is the sum of the velocity changes at perigee and apogee of the transfer ellipse. Since the velocity vectors are collinear, the velocity
changes are just the differences in magnitudes of the velocities in each orbit. If we know the initial and final orbits, r[A] and r[B], we can calculate the total velocity change using the following
Note that equations (4.59) and (4.60) are the same as equation (4.6), and equations (4.61) and (4.62) are the same as equation (4.45).
Click here for example problem #4.19
One-Tangent Burn. In this instance the transfer orbit is tangential to the initial orbit. It intersects the final orbit at an angle equal to the flight path angle of the transfer orbit at the point
of intersection. An infinite number of transfer orbits are tangential to the initial orbit and intersect the final orbit at some angle. Thus, we may choose the transfer orbit by specifying the size
of the transfer orbit, the angular change of the transfer, or the time required to complete the transfer. We can then define the transfer orbit and calculate the required velocities.
For example, we may specify the size of the transfer orbit, choosing any semi-major axis that is greater than the semi-major axis of the Hohmann transfer ellipse. Once we know the semi-major axis of
the ellipse, a[tx], we can calculate the eccentricity, angular distance traveled in the transfer, the velocity change required for the transfer, and the time required to complete the transfer. We do
this using equations (4.59) through (4.63) and (4.65) above, and the following equations:
Click here for example problem #4.20
Another option for changing the size of an orbit is to use electric propulsion to produce a constant low-thrust burn, which results in a spiral transfer. We can approximate the velocity change for
this type of orbit transfer by
where the velocities are the circular velocities of the two orbits.
Orbit Plane Changes
to be perpendicular to the orbital plane and, therefore, perpendicular to the initial velocity vector. If the size of the orbit remains constant, the maneuver is called a simple plane change. We can
find the required change in velocity by using the law of cosines. For the case in which V[f] is equal to V[i], this expression reduces to
where V[i] is the velocity before and after the burn, and
Click here for example problem #4.21
From equation (4.73) we see that if the angular change is equal to 60 degrees, the required change in velocity is equal to the current velocity. Plane changes are very expensive in terms of the
required change in velocity and resulting propellant consumption. To minimize this, we should change the plane at a point where the velocity of the satellite is a minimum: at apogee for an elliptical
orbit. In some cases, it may even be cheaper to boost the satellite into a higher orbit, change the orbit plane at apogee, and return the satellite to its original orbit.
Typically, orbital transfers require changes in both the size and the plane of the orbit, such as transferring from an inclined parking orbit at low altitude to a zero-inclination orbit at
geosynchronous altitude. We can do this transfer in two steps: a Hohmann transfer to change the size of the orbit and a simple plane change to make the orbit equatorial. A more efficient method (less
total change in velocity) would be to combine the plane change with the tangential burn at apogee of the transfer orbit. As we must change both the magnitude and direction of the velocity vector, we
can find the required change in velocity using the law of cosines,
where V[i] is the initial velocity, V[f] is the final velocity, and
Click here for example problem #4.22
As can be seen from equation (4.74), a small plane change can be combined with an altitude change for almost no cost in or propellant. Consequently, in practice, geosynchronous transfer is done with
a small plane change at perigee and most of the plane change at apogee.
Another option is to complete the maneuver using three burns. The first burn is a coplanar maneuver placing the satellite into a transfer orbit with an apogee much higher than the final orbit. When
the satellite reaches apogee of the transfer orbit, a combined plane change maneuver is done. This places the satellite in a second transfer orbit that is coplanar with the final orbit and has a
perigee altitude equal to the altitude of the final orbit. Finally, when the satellite reaches perigee of the second transfer orbit, another coplanar maneuver places the satellite into the final
orbit. This three-burn maneuver may save propellant, but the propellant savings comes at the expense of the total time required to complete the maneuver.
When a plane change is used to modify inclination only, the magnitude of the angle change is simply the difference between the initial and final inclinations. In this case, the initial and final
orbits share the same ascending and descending nodes. The plane change maneuver takes places when the space vehicle passes through one of these two nodes.
In some instances, however, a plane change is used to alter an orbit's longitude of ascending node in addition to the inclination. An example might be a maneuver to correct out-of-plane errors to
make the orbits of two space vehicles coplanar in preparation for a rendezvous. If the orbital elements of the initial and final orbits are known, the plane change angle is determined by the vector
dot product. If i[i] and [i] are the inclination and longitude of ascending node of the initial orbit, and i[f] and [f] are the inclination and longitude of ascending node of the final orbit, then
the angle between the orbital planes,
Click here for example problem #4.23
The plane change maneuver takes place at one of two nodes where the initial and final orbits intersect. The latitude and longitude of these nodes are determined by the vector cross product. The
position of one of the two nodes is given by
Knowing the position of one node, the second node is simply
Click here for example problem #4.24
Orbit Rendezvous
Orbital transfer becomes more complicated when the object is to rendezvous with or intercept another object in space: both the interceptor and the target must arrive at the rendezvous point at the
same time. This precision demands a phasing orbit to accomplish the maneuver. A phasing orbit is any orbit that results in the interceptor achieving the desired geometry relative to the target to
initiate a Hohmann transfer. If the initial and final orbits are circular, coplanar, and of different sizes, then the phasing orbit is simply the initial interceptor orbit. The interceptor remains in
the initial orbit until the relative motion between the interceptor and target results in the desired geometry. At that point, we would inject the interceptor into a Hohmann transfer orbit.
Launch Windows
Similar to the rendezvous problem is the launch-window problem, or determining the appropriate time to launch from the surface of the Earth into the desired orbital plane. Because the orbital plane
is fixed in inertial space, the launch window is the time when the launch site on the surface of the Earth rotates through the orbital plane. The time of the launch depends on the launch site's
latitude and longitude and the satellite orbit's inclination and longitude of ascending node.
Orbit Maintenance
Once in their mission orbits, many satellites need no additional orbit adjustment. On the other hand, mission requirements may demand that we maneuver the satellite to correct the orbital elements
when perturbing forces have changed them. Two particular cases of note are satellites with repeating ground tracks and geostationary satellites.
After the mission of a satellite is complete, several options exist, depending on the orbit. We may allow low-altitude orbits to decay and reenter the atmosphere or use a velocity change to speed up
the process. We may also boost satellites at all altitudes into benign orbits to reduce the probability of collision with active payloads, especially at synchronous altitudes.
To an orbit designer, a space mission is a series of different orbits. For example, a satellite might be released in a low-Earth parking orbit, transferred to some mission orbit, go through a series
of resphasings or alternate mission orbits, and then move to some final orbit at the end of its useful life. Each of these orbit changes requires energy. The is traditionally used to account for this
energy. It sums all the velocity changes required throughout the space mission life. In a broad sense the
The discussion thus far has focused on the elliptical orbit, which will result whenever a spacecraft has insufficient velocity to escape the gravity of its primary. There is a velocity, called the
escape velocity, V[esc], such that if the spacecraft is launched with an initial velocity greater than V[esc], it will travel away from the planet and never return. To achieve escape velocity we must
give the spacecraft enough kinetic energy to overcome all of the negative gravitational potential energy. Thus, if m is the mass of the spacecraft, M is the mass of the planet, and r is the radial
distance between the spacecraft and planet, the potential energy is -GmM /r. The kinetic energy of the spacecraft, when it is launched, is mv^2/2. We thus have
which is independent of the mass of the spacecraft.
Click here for example problem #4.25
A space vehicle that has exceeded the escape velocity of a planet will travel a hyperbolic path relative to the planet. The hyperbola is an unusual and interesting conic section because it has two
branches. The arms of a hyperbola are asymptotic to two intersecting straight line (the asymptotes). If we consider the left-hand focus, f, as the prime focus (where the center of our gravitating
body is located), then only the left branch of the hyperbola represents the possible orbit. If, instead, we assume a force of repulsion between our satellite and the body located at f (such as the
force between two like-charged electric particles), then the right-hand branch represents the orbit. The parameters a, b and c are labeled in Figure 4.14. We can see that c^2 = a^2+ b^2 for the
hyperbola. The eccentricity is,
The angle between the asymptotes, which represents the angle through which the path of a space vehicle is turned by its encounter with a planet, is labeled
If we let
If we know the radius, r, velocity, v, and flight path angle,
The true anomaly corresponding to known valves of r, v and e and a, and then calculate true anomaly using equation (4.43), rearranged as follows:
The impact parameter, b, is the distance of closest approach that would result between a spacecraft and planet if the spacecraft trajectory was undeflected by gravity. The impact parameter is,
Closet approach occurs at periapsis, where the radius distance, r[o], is equal to
p is a geometrical constant of the conic called the parameter or semi-latus rectum, and is equal to
Click here for example problem #4.26
At any known true anomaly, the magnitude of a spacecraft's radius vector, its flight-path angle, and its velocity can be calculated using equations (4.43), (4.44) and (4.45).
Click here for example problem #4.27
Early we introduced the variable eccentric anomaly and its use in deriving the time of flight in an elliptical orbit. In a similar manner, the analytical derivation of the hyperbolic time of flight,
using the hyperbolic eccentric anomaly, F, can be derived as follows:
Whenever F should be taken as positive; whenever F should be taken as negative.
Click here for example problem #4.28
Hyperbolic Excess Velocity
hyperbolic excess velocity. We can calculate this velocity from the energy equation written for two points on the hyperbolic escape trajectory – a point near Earth called the burnout point and a
point an infinite distance from Earth where the velocity will be the hyperbolic excess velocity, v[∞]. Solving for v[∞] we obtain
Note that if v[∞] = 0 (as it is on a parabolic trajectory), the burnout velocity, v[bo], becomes simply the escape velocity.
Click here for example problem #4.29
It is, or course, absurd to talk about a space vehicle "reaching infinity" and in this sense it is meaningless to talk about escaping a gravitational field completely. It is a fact, however, that
once a space vehicle is a great distance from Earth, for all practical purposes it has escaped. In other words, it has already slowed down to very nearly its hyperbolic excess velocity. It is
convenient to define a sphere around every gravitational body and say that when a probe crosses the edge of this sphere of influence it has escaped. Although it is difficult to get agreement on
exactly where the sphere of influence should be drawn, the concept is convenient and is widely used, especially in lunar and interplanetary trajectories. For most purposes, the radius of the sphere
of influence for a planet can be calculated as follows:
where D[sp] is the distance between the Sun and the planet, M[p] is the mass of the planet, and M[s] is the mass of the Sun. Equation (4.89) is also valid for calculating a moon's sphere of
influence, where the moon is substituted for the planet and the planet for the Sun.
Click here for example problem #4.30
Compiled, edited and written in part by Robert A. Braeunig, 1997, 2005, 2007, 2008, 2011, 2012, 2013. | {"url":"http://www.braeunig.us/space/orbmech.htm","timestamp":"2014-04-17T16:11:17Z","content_type":null,"content_length":"78291","record_id":"<urn:uuid:2d85375a-6623-4230-9496-ed3abd252d36>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
18 Feb 20:02 2013
~ operator ?
<briand <at> aracnet.com>
2013-02-18 19:02:52 GMT
Hi all,
I was creating "bigger" uncurries which I am simply extending from an existing uncurry I found some where, e.g.
uncurry4 :: (a -> b -> c -> d -> e) -> ((a, b, c, d) -> e)
uncurry4 f ~(a,b,c,d) = f a b c d
when I realized, what's the "~" for ?
I've only been able to find a partial explanation that it involves preserving laziness, or something,
maybe ?
I was hoping someone could enlighten me. | {"url":"http://comments.gmane.org/gmane.comp.lang.haskell.cafe/103471","timestamp":"2014-04-17T19:00:17Z","content_type":null,"content_length":"12769","record_id":"<urn:uuid:92628f93-4a68-4fb7-a0a6-0161450a5dfc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
I Need To Know How To Solve Solving Systems By Substitutions,The Main Shortcut linear equations substitution · solving systems by substitution worksheet
This worksheet contains problems on solving systems of equations. Students must use the graphing method (coordinate planes are included) or substitution.
This Bottomless Worksheet offers endless practice in solving systems of linear equations in two variables with the substitution method.
of this document.Discrete Math Chapter 4 Solving Systems of Equations worksheet. Page 2 of 3. Solve each system ALGEBRAICALLY. You may use ELIMINATION or SUBSTITUTION.
Day 1 Solving Systems of Equations through Graphing Exercises. Graphing Worksheet. Day 2 Solving Systems using the Substitution Exploring Substitution
maryland land instrument intake sheet
Worksheets: Worksheet Version of this web page| Group Activity-- Create a Math Advertisement for . Solution of system of equations by substitution method
SYSTEMS OF EQUATIONS. Sections: All Topics - Mixed Review Solving by Elimination Solving by Graphing Solving by Substitution All Topics - Mixed Review:
A page of questions requiring students to solve systems of equations using the Substitution Method. Students could get additional practice by trying.
sales tax spreadsheet
forest backround color sheet
of this document.Introduce solving systems by substitution. Explain that substitution means . Try the following problems from the worksheet “Solving Systems using
free downloadable piano sheet music
double digit worksheet no regrouping
early math worksheets
ExamView Quiz: Solving Systems by Graphing. Lesson. Notes: Solving systems by substitution (ppt) · Classwork Worksheet (doc). Internet Sites
5th grade science worksheets
family group sheet fillable template
lem voltage module datasheet
Day 1 Solving Systems of Equations through Graphing Exercises. Graphing Worksheet. Day 2 Solving Systems using the Substitution Exploring Substitution
Algebra worksheets covering a range of high school concepts. Expanding and factorising. Linear and quadratic equations. Substitution. Solving linear systems/equations graphically, algebraically and
using a graphics calculator.
of this document.Algebra II: Homework #7: Solving Systems of Linear Equations. By Substitution and Elimination. Directions: On the following worksheet, do problems 1-20.
It is solving systems of equations by substitution 4x+5y=11 y=3x-13 I've been working on these worksheets for over two hours and i cant figure this one
rocket history worksheets
PEARSON RESOURCES Worksheets, Videos, Chapter Tests, Lesson/Vocabulary Quizzes Solving linear systems for real-world situations using substitution
orlando sheet metal supply
adavantages and disadvantages of worksheets
grown up christmas list sheet
bin trip sheet
9 Nov 2009 Description: This Bottomless Worksheet offers endless practice in solving systems of linear equations in two variables with the substitution
review sheet central
9 Nov 2009 Algebra Worksheets, Solving systems of equations (Algebra II), Flash, Systems of Equations: Bottomless Worksheet of Substitution Method
Solving Systems of Equations using Substitution. Hundreds of video clips, worksheets, and word problems on solving systems of equations using substitution .
[PDF] Solving Systems of Equations by Substitution. Grade Range: 9th - 11th; Rating: 4 Stars. In this systems of equations worksheet, students solve
Solving System of Linear Equations by Substitution. We hope that the free math worksheets have been helpful. We encourage parents and teachers to select
general system solve by substitution. Quadratics - ax2+bx+c=0. simple factorable Permissions - Can you photocopy these worksheets?
of this document.Systems of. Linear Equations. Guided Practice Worksheet. Solving Systems of Linear Equations by Substitution & Elimination
9 Nov 2009 Algebra Worksheets, Solving systems of equations (Algebra II), Flash, Systems of Equations: Bottomless Worksheet of Substitution Method
sample food chains and worksheets
design cheat sheet
esl pronunciation worksheet | {"url":"http://arteviduell.com/cxywfp/vpz.php?d=236","timestamp":"2014-04-18T14:01:47Z","content_type":null,"content_length":"6755","record_id":"<urn:uuid:07531f1d-e1c1-40cd-8f8a-39084c063288>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pacific Palisades Algebra 1 Tutor
Find a Pacific Palisades Algebra 1 Tutor
...I have acted in many plays, musicals, films, and appeared on TV several times. In addition, I have directed a few productions, and coached friends and students on auditions and/or projects
they're involved in. I was chosen as the Voice and Speech Assistant at The Boston Conservatory my senior year, an honor that is only given to two students in the senior class.
16 Subjects: including algebra 1, reading, English, writing
...I have often been told that I'm good at thinking on my feet, and I use that skill to find a different approach to a certain topic, if the way that I am explaining something does not work for
you. One thing I always practice in my classroom is starting from a level where the students can understa...
11 Subjects: including algebra 1, physics, geometry, GED
...I provided tutoring during college focusing on elementary math students. Elementary math is the building blocks for all math so I like to use flash cards and study sheets that help students
have fun learning. I was a straight A student myself in math in grades k-12.
22 Subjects: including algebra 1, English, reading, writing
...I dont just teach math, I teach the kids a real love for math and how to tackle math problems. For me, the most important part of a good math education is to teach critical thinking skills that
allow students to visualize their way through any math problem. I am not interested in memorization, only deep understand of the subject matter.
3 Subjects: including algebra 1, elementary math, prealgebra
...I have taught from 3rd grade up to 12th grade. I have worked with students that are on the Autism spectrum, students with ADHD, students with ADD, and students that are Emotionally Disturbed. I
graduated from Cal State Northridge with a BA in History.
6 Subjects: including algebra 1, special needs, autism, ADD/ADHD | {"url":"http://www.purplemath.com/Pacific_Palisades_algebra_1_tutors.php","timestamp":"2014-04-18T13:48:09Z","content_type":null,"content_length":"24443","record_id":"<urn:uuid:4bab6060-291d-491c-9e36-18bbd96b5d1b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shape of a matrix
July 2nd 2009, 06:14 AM #1
Jun 2009
Shape of a matrix
I am using an evolutionary fitting method which provides the inverse Hessian up to a constant, i.e.
$\mathbf{C} = const \quad \mathbf{H}^{-1}$
The 2-norm of $\mathbf{C}$ is very small, e.g. $10^{-20}$, but the eigenvalues of $\mathbf{C}$ are relative to the eigenvalues of $\mathbf{H}$. I was reading a proof which had to do with scaling
a matrix via eigenvalues in the form
$factor=\frac{\sum_j^p f(x) \lambda_j}{\sum_j ^p f(x)}$
I am working with log-likelihood, so $f(x)$ would have to be $\log[L(\beta)]$ in the above equation. Firstly, is there a known way to obtain $\mathbf{H}^{-1}$ by rescaling with the eigenvalues of
$\mathbf{C}$, or do I need to calculate log-likehood for each record, sum them, and then solve for $\mathbf{H}^{-1}$?
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-statistics/94202-shape-matrix.html","timestamp":"2014-04-20T13:54:36Z","content_type":null,"content_length":"31096","record_id":"<urn:uuid:c96f95b9-9410-4c58-a55a-9cc8c76563ec>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Distribution of the sample mean of an exponential distribution
Sorry to dig up a two-month old thread, but in the course of solving a different problem, I may have found a straight-forward answer to the problem presented by the OP. Here it is, for future
The OP was on the right lines with the Gamma distribution. In particular, it is the Erlang distribution, which is a special case of the Gamma distribution, that is appropriate in this case.
Recall that the distribution of the sum of k iid exponential distributions is described by the Erlang distribution. That is,
Erl(k,r) ~ Exp(r) + … + Exp(r). (Sum of k exponential distros.)
Here we use r to denote the rate parameter (more commonly denoted by lamda), where r = 1/mean.
The pdf of the Erlang distro is given by
f( x; k, r ) = ( r^k * x^(k-1) * e^(-r*x) ) / (k-1)! .
(Wiki page for more info on the Erlang distro:
So, to find the distribution of the sample mean of k values drawn from k iid exponential distributions we simply need to find
1/k * Erl(k,r).
This is a scalar multiple of a random variable. The transformation of the random variable yields a distribution with pdf
f'( x; k, r ) = f( x*k; k, r ) = ( r^k * (x*k)^(k-1) * e^(-r*x*k) ) / (k-1)!.
In the OP's case we have k=5; plugging this into the pdf gives
f'( x; 5, r ) = (625/24) * e^(-5*x*r) * r^5 * x^4.
This is my first post here. I may have made a mistake. A quick numerical test gives similar results. Also the convolution method for k=2 gives the same result.
In case anyone's interested, my own problem is to find the distribution of the sample *variance* of k iid exponential distributions. I have yet to find the solution. | {"url":"http://www.physicsforums.com/showthread.php?p=3388754","timestamp":"2014-04-18T10:47:02Z","content_type":null,"content_length":"41238","record_id":"<urn:uuid:712f086e-1326-4980-a9f0-2cf9b8345f82>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00518-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rosemead Algebra Tutor
Find a Rosemead Algebra Tutor
...I am currently focusing on research, and planning to attend a graduate program for a PhD in art history. For the past three summers, I have worked as the lead teaching assistant for a math
course for incoming freshmen at Caltech. I'm also a tutor there, and I've been working with students from elementary school to undergrad and community college (including PCC, ELAC, Mt.
51 Subjects: including algebra 2, algebra 1, reading, chemistry
...I currently own all things Macintosh (MAC for short). I love Apple products, and can help you master them too! They are very user friendly, but if you find it difficult to switch over from your
PC, I can help you. I have a lot of patience when it comes to educating people at anything they need ...
29 Subjects: including algebra 1, English, reading, writing
...I have read a wide range of literature from JRR Tolkien, Anne McCaffrey, Andre Norton and Brian Jacques to Mark Twain, Victor Hugo, Charles Dickens, Alexander Dumas, Shakespeare and Dante. I
would love to help others broaden their knowledge in these areas. I have often found that students have trouble with a subject because of the way it is presented.
12 Subjects: including algebra 1, English, reading, writing
...I have recently completed both first and second semesters of organic chemistry with an A while attending Scripps College. Chemistry is by far one of my favorite subjects. While the first
semester seemed more conceptual, understanding why a reaction takes place helped me to master the second semester.
16 Subjects: including algebra 1, chemistry, reading, biology
...I also have several years of experience as a tutor and have integrated study and organizational skills throughout my teaching and tutoring. I passed the CBEST on my first try with flying
colors, and I have over eight years of experience teaching a variety of subjects. I am experienced teaching the reading, writing, and math skills needed to pass the CBEST.
29 Subjects: including algebra 1, reading, writing, English | {"url":"http://www.purplemath.com/Rosemead_Algebra_tutors.php","timestamp":"2014-04-18T06:09:31Z","content_type":null,"content_length":"24071","record_id":"<urn:uuid:7510cea5-6127-448f-b3c4-622b8365e1cc>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
An interesting version of the problem “balls into bins”
up vote 0 down vote favorite
Consider n people, each has k identical balls. Each people choose k different bins from m bins, constrained by the condition that there are no two people choose exactly the same k bins. For instance,
there are 6 people A,B,C,D,E,F, each hold 2 balls and there are 4 different bins①,②,③,④. If A choose ①②, B choose ①③, C choose ①④, D choose ②③,E choose ②④,and F choose③④, then we call it a
proper configuration since no two people choose exactly the same 2 bins
Now each people flip a unbiased coin, if HEAD appears then he put all his ball into the k bins he has chosen, each bin with one ball. He will do nothing if TAIL appears. Here comes the problem, given
a proper configuration, everyone flip a coin and behave the way we described above. Can we infer the coin result of each people based on the number of balls in each bin?
pr.probability co.combinatorics computer-science
It should be clearly stated the the choice of bins for each person is public knowledge, otherwise the answer is clearly no. – Tony Huynh May 16 '13 at 8:09
2 If you find one ball in each of the $4$ bins, this could come from $A+F$, $B+E$, or $C+D$. Is that really what you wanted to ask? – Douglas Zare May 16 '13 at 8:24
This is a question about enumerating bipartite graphs with given degree sequences, under a weak condition that two vertices on one side can't have the same neighbours on the other side. Under
reasonable conditions the number of solutions will grow faster than exponentially as $m,n\to\infty$. – Brendan McKay May 16 '13 at 11:46
@Tony Huynh That's true, the choice of bins for each person is public knowledge. Thanks for your reminding – Charles May 17 '13 at 13:45
@Douglas Zare I kown there eixsts multiple results sometimes i.e. we cannot infer the results of each people. Actually,what i want is some konwledge on the probability that this situation
occurs.Put it another way, given m,n and k, can we bound the probability that we couldn't infer a unique result? – Charles May 17 '13 at 13:54
show 1 more comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged pr.probability co.combinatorics computer-science or ask your own question. | {"url":"http://mathoverflow.net/questions/130800/an-interesting-version-of-the-problem-balls-into-bins","timestamp":"2014-04-20T18:50:21Z","content_type":null,"content_length":"51824","record_id":"<urn:uuid:a745e3e0-0947-4b21-8949-284db05c4b27>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Properties of divisors when moving from char 0 to char p.
up vote 4 down vote favorite
Consider a smooth projective variety $X$ over $\mathbb{C}$ such that $X$ has models over $\mathbb{Z}[1/N]$ and $X_p=X_{\mathbb{Z}[1/N]}\times \text{Spec}(\mathbb{F}_p)$ is also a smooth projective
Now let $L$ be a line bundle over $X_{\mathbb{Z}[1/N]}$ and $L_\mathbb{C}$ and $L_p$ be the corresponding line bundle over $X=X_{\mathbb{Z}[1/N]}\times\text{Spec}(\mathbb{C})$ and $X_p$ respectively.
How do the properties of $L_\mathbb{C}$ (ample, nef, big, effective) relate to the properties of $L_p$?
For example, if $L_\mathbb{C}$ is nef does this imply that $L_p$ is nef?
Any references or starting points would be greatly appreciated.
add comment
1 Answer
active oldest votes
No, that is not true: the property of being nef, ample, etc. is not stable under specialization (although it is stable under generization). For instance, begin with $Y=\mathbb{P}^2_{\
mathbb{Z}}$, i.e., $\text{Proj} \ \mathbb{Z}[s,t,u]$ together with its natural projection $$\pi:Y\to \text{Spec}\ \mathbb{Z}.$$ Consider the homogeneous ideal $$I = \langle st(t-s),s
(u-pt),t(u-ps),u(t-s),u(u-ps) \rangle.$$ The corresponding zero scheme is the union of three disjoint sections of $\pi$, namely $[1,0,0]$, $[0,1,0]$ and $[1,1,p]$. Let $\nu:X\to Y$ be
the blowing up of $Y$ along $I$.
There is a canonically defined pullback map of invertible sheaves, $$f:\nu^*\omega_{Y/\mathbb{Z}} \to \omega_{X/\mathbb{Z}},$$ which identifies $\nu^*\omega_{Y/\mathbb{Z}}$ with $\
omega_{X/\mathbb{Z}}(-\underline{E})$ for a unique Cartier divisor $\underline{E}$ on $X$, the exceptional divisor of $\nu$. Now consider the invertible sheaf $L = (\nu^*\mathcal{O}_Y
up vote 6 (2))(-\underline{E})$.
down vote
accepted Over $\mathbb{Z}[1/p]$, the invertible sheaf $L$ is nef, and even globally generated. Essentially this is because the associated ideal $I[1/p]$ is generated by the homogeneous
generators of degree $2$; indeed, the one cubic generator $st(t-s)$ is already in the ideal generated by the quadratic generators, $$st(t-s) = \frac{1}{p}\left(s[t(u-ps)]-t[s(u-pt)]\
right).$$ However, in characteristic $p$ this is not nef: the strict transform of the line $Z(u)$ has intersection number $-1$ with $L$.
Edit. Replace "openness" claim by "stable under generization", see comments below.
1 I'm not sure quite what you mean by your parenthetical remark in the first line, but nefness is not an open property--see e.g. math.mit.edu/~johnl/docs/bminus.pdf – Daniel Litt Aug
15 '13 at 16:37
I should remark, the example I linked to is for an $\mathbb{R}$-divisor (theorem 1.2); I don't know of an honest example for a Cartier divisor, but I don't see why nefness should be
open in that case either. – Daniel Litt Aug 15 '13 at 16:52
@Daniel: "Open" should be "stable under generization". Let $(X_R,\mathcal{O}(1))$ be a projective, flat $R$-scheme, where $R$ is a DVR. Let $\mathcal{L}_R$ be an invertible sheaf on
1 $X_R$. Assume that $\mathcal{L}_0$ is nef on the closed fiber $X_0$. Then for every positive integer $N$, $\mathcal{L}_0^{\otimes N}(1)$ is ample on $X_0$ by Kleiman's criterion.
Thus, by openness of ampleness, $\mathcal{L}_R^{\otimes N}(1)$ is ample for every positive integer $N$. Thus, $\mathcal{L}_R$ is nef. – Jason Starr Aug 15 '13 at 19:17
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry nt.number-theory complex-geometry projective-geometry positive-characteristic or ask your own question. | {"url":"http://mathoverflow.net/questions/139463/properties-of-divisors-when-moving-from-char-0-to-char-p","timestamp":"2014-04-19T07:19:53Z","content_type":null,"content_length":"56403","record_id":"<urn:uuid:a2dea9c2-8933-4d0d-a5a4-5fdf56f62eeb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Extreme events and the multifractal butterfly effect
Seminar Room 1, Newton Institute
Scaling processes abound in geophysics and this has important consequences for the probability distributions of the corresponding intensive and extensive geophysical variables. Classical scaling
processes – such as in classical turbulence – are self-similar, they are characterized by exponents which are invariant under isotropic scale changes. However, the atmosphere and lithosphere are
strongly stratified so that we must generalize the notion of scale allowing for invariance under anisotropic zooms. When this is done, it is often found that scaling can apply over huge ranges, up to
planetary in extent. It is now clear that the generic scaling process is the multifractal cascade in which a scale invariant dynamical mechanism repeats (multiplicatively) from scale to scale;
anisotropic scaling – and multifractal universality classes - imply that multifractals are widely relevant in the earth sciences. General (canonical) multifractal processes developed over finite
ranges of scale and analyzed at their smallest scale (the “bare” process), have “long-tailed” distributions (e.g. the lognormal). However the small scale cascade limit is singular so that the
integration/averaging of cascades developed down to their small scale limits leads to “dressed” properties characterized notably by “fat-tailed” power law probability distributions Pr(x>s)=s**-qD
where x is a random value, s a threshold and qD the critical exponent implying that the moments
for q>qD diverge. For cascades averaged over scales larger than the inne r cascade scale, the moments q>qD are no longer determined by the large scale finite by the small scale details: the
“multifractal butterfly effect”. The sampling properties of such processes can be understood with “multifractal phase transitions”; we review this as well as evidence for the divergence of moments in
laboratory, atmospheric and climatological series, and in data from the solid earth and discuss implications (abrupt changes, etc.).
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible. | {"url":"http://www.newton.ac.uk/programmes/MFE/seminars/2013103115501.html","timestamp":"2014-04-18T15:46:59Z","content_type":null,"content_length":"7672","record_id":"<urn:uuid:9bd67706-8c90-46f2-b8e5-ff718701e479>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
M&M Model for Radioactive Decay
A tasty in-class demonstration of radioactive decay using two colors of M&M's. Illustrates the quantitative concepts of probability and
exponential decay
. This activity is appropriate for small classes (<40 students).
Learning Goals
• to illustrate the exponential nature of radioactive decay
• to demonstrate the concept of half-life
• to illustrate probability and how abundance of radioctive elements actually determines rate of decay.
Context for Use
This is a fun and tasty way to illustrate radioactive decay and the important mathematical concepts that go along with decay. It is best used in-class or in a lab after the concepts of half-life and
decay have been introduced. Depending on how many times it gets passed around the class, this can take 15-30 minutes. Required equipment includes: a known quantity of two colors of M&M's or other
colored candy (50 of each is probably enough) plus extra of the "radiogenic" isotope, a jar and some graph paper.
This particular activity works best for small classes. Several alternate ideas for large classes or small group work are linked in
References and Resources
near the bottom of this page.
Teaching Notes and Tips
This is a relatively easy and fun demonstration for a smaller class. So that you (and the students) can keep count of the number of "decayed" M&M's, tell the students not to eat the decayed atoms
right away. When the experiment is finished they may eat their radioactive atoms. Make sure that others who haven't picked radioactive atoms get some of the radiogenic isotopes to eat.
There are several variations of this experiment: One is to start with all radioactive elements -- simulating something like a zircon (which excludes the radiogenic Pb) and show them how that works.
This is a simpler system and may be easier for them to comprehend. Using some "initial radiogenic isotopes" can be useful, though. It is a good introduction to using isotopes as tracers (e.g.,
initial Sr ratio). Make sure that the students understand that if a mineral that includes the radiogenic isotope is used, the initial number of radiogenic isotopes must be calculated in order to
calculate age.
The References and Resources section of this page has other adaptations of this to include individual or small group activities with M&M's.
Teaching Materials
With a small class, pass around a jar of M&M's with a known quantity of two colors (e.g., red and green holiday M&M's) in it. Have each student reach in (blindly) and take an M&M. If the M&M is red
(radioactive), it has decayed, keep it out of the jar and replace it with a green (radiogenic) candy; if it is green, it goes back into the jar. As the jar gets passed around the room, the number of
red M&M's gets smaller and the green get more abundant. Therefore, it gets harder and harder to pick a red one. This simulates radioactive decay well and helps students to understand why the number
of decaying isotopes gets smaller as the number of radioactive isotopes gets smaller.
You can graph this "experiment" if you know how many of each color you started with and how many red M&M's have been removed. After a certain number of "decays", stop and count how many reds are
left. Continue through another sequence of "picks" and plot reds again. Repeat this procedure a few more times. See if the students can figure out how long a "half-life" is for this problem based on
the graph you generated.
Mostly, this exercise needs a quick check of whether the concepts of half-life, probability, dependence on quantity and exponential functions were understood. There are several ways that this can be
• If you have a student response system, a quick quiz with questions that cover these four concepts is an easy way to determine the students' understanding.
• Having students work through a short problem (in groups or on their own) that applies these concepts in a geologic context -- a problem where they have to read a graph or calculate how many
isotopes are left after x half-lives -- can also provide a quick check.
• A short written quiz might also be a way to assess comprehension.
• One of the best ways that i can think of to test comprehension with this exercise is to have the students figure out the "half-life" of this system (i.e., How many "picks" constitutes a
half-life?). If they understand the concepts, they should be able to figure this out.
References and Resources
Science NetLinks has a very nice lesson plan for a similar activity entitled Radioactive Decay: A Sweet Simulation of a Half-Life (more info)
Science House has a template for Radioactive Decay of Candium
Teachers Experiencing Antarctica and the Arctic has an activity entitled The Dating Game that actually has the students apply what they are learning to a real problem.
Contact the Author
Please contact the author with questions or suggestions.
Controlled Vocabulary Terms
Subject: Geoscience:Geology:Historical Geology
Resource Type: Activities:Classroom Activity:Short Activity:Demonstration, Activities:Lab Activity
Special Interest: Quantitative
Grade Level: College Lower (13-14):Introductory Level
Quantitative Skills: Arithmetic/Computation, Graphs, Probability and Statistics:Probability, Data Trends:Curve Fitting/Regression, Probability and Statistics
Ready for Use: Ready to Use
Earth System Topics: Time/Earth History
Topics: Chemistry/Physics/Mathematics, Time/Earth History | {"url":"http://serc.carleton.edu/quantskills/activities/MandMModel.html","timestamp":"2014-04-20T13:31:41Z","content_type":null,"content_length":"27151","record_id":"<urn:uuid:4a713290-be6b-4ef1-88c9-af35fb902009>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverse kinematics with Damped Least Squares Method
01-18-2010 #1
Inverse kinematics with Damped Least Squares Method
Hi Trossen Community,
I have been recycling old robots by taking them apart and thinking of new robots to build with them when I came across this unfinished idea of a 5DoF legged quadruped I had a while ago; the
flying pig: bot:
Now mind you I have all the spare servos, electronics and nearly all the brackets to make this robot and would only need to order the aluminum/abs plates, but before I move forward and even
finish the solid works design, I want to know if I can handle 5DoF inverse kinematics on an Axon microcontroller.
So I started this thread to both document my exploration of advanced IK theory and to start a high level discussion regarding inverse kinematic techniques. What will follow is an in depth
discussion of forward and inverse kinematics starting with my attempt to perform the Damped Least Squares method.
Re: Inverse kinematics with Damped Least Squares Method
The first task to working out the inverse kinematics is to work out the forward kinematics. I used Matlab to simulate the forward kinematics because its symbolic math will allow me to quickly and
easily take derivative later on and then port the results to the Axon's C code. The following is a annotated picture of my 5DoF leg showing how I labeled each angle, distance, and joint in the
To get the rotation matrices around a variable axis of rotation I used the following snipet of code
function result = GetRotation(RotAngle,Axis)
%Normalize the Axis of rotation
Axis = Axis/(sum((Axis).^2))^(.5);
skewSymmMat = [ 0 -Axis(3) Axis(2);
Axis(3) 0 -Axis(1);
-Axis(2) Axis(1) 0;];
outerProduct = [Axis(1)*Axis(1) Axis(1)*Axis(2) Axis(1)*Axis(3);
Axis(2)*Axis(1) Axis(2)*Axis(2) Axis(2)*Axis(3);
Axis(3)*Axis(1) Axis(3)*Axis(2) Axis(3)*Axis(3);];
result = outerProduct + cos(RotAngle)*(eye(3,3)-outerProduct)+sin(RotAngle)*skewSymmMat;
Using this function I then wrote a function for calculating the forward kinematics of the entire leg:
function result = GetLegSeg(start,Displacement,RotAngle,Axis,drawColor)
Axis = Axis/(sum((Axis).^2))^(.5);
lengthDisp = (sum((Displacement).^2))^(.5);
RotMatrix = GetRotation(RotAngle,Axis);
finish = start + RotMatrix*Displacement;
result = finish-start;
function result = Get5DoFLeg(dist1,dist2,dist3,dist4,dist5,theta1,theta2,theta3,theta4,theta5)
d0 = [0; 1; 0];
H1= [0; 0; 0];
d1 = [0; 0; -dist1];
d1 = GetLegSeg(H1,d1,theta1,d0,'bo-');
H2 = H1+d1;
d2 = [0; dist2; 0];
d2 = GetLegSeg(H2,d2,theta2,d1,'ro-');
legBendAxis = cross(d1,d2);
H3 = H2+d2;
hipDirection = d1/(sum((d1).^2))^(.5);
d3 = dist3*hipDirection;
d3 = GetLegSeg(H3,d3,theta3,legBendAxis,'ko-');
H4 = H3+d3;
d4 = dist4*hipDirection;
d4 = GetLegSeg(H4,d4,theta3+theta4,legBendAxis,'ko-');
H5 = H4+d4;
d5 = dist5*hipDirection;
d5 = GetLegSeg(H5,d5,theta3+theta4+theta5,legBendAxis,'ko-');
H6 = H5+d5;
result = H6;
Using these functions I then plotted the leg segments at different theta angles:
These few sample pictures aren't much but they show the forward Kinematics in action. By inserting some symbolic math objects into the above matlab script I can calculate the set of three HUGE
equations that express the end point of the leg (the effector) in terms of the joint angles. I won't bother reprinting them here since they are over half a page long. In my next post I will show
how to use the forward kinematics to work out the Jacobian and the use the Jacobian to solve the inverse kinematic problem.
Re: Inverse kinematics with Damped Least Squares Method
Thanks for posting this, it's very interesting. When you port this into C, what matrix library do you plan to use? Does anyone have any good suggestions for basic matrix maths?
http://www.buildtherobot.blogspot.com - for robot builders and enthusiasts
Re: Inverse kinematics with Damped Least Squares Method
Well I plan to write my own matrix library for C assuming I can't find one (a safe bet I think). We will see. Just as an update I have the Damped Least Squares working now in Matlab and will be
posting more about it tomorrow when I am less tired :P
Re: Inverse kinematics with Damped Least Squares Method
I came across this in my never-ending quest for knowledge.
The code sample is not exactly what you're looking for, but could useful in writing your own library.
Re: Inverse kinematics with Damped Least Squares Method
Very Cool, but i believe the Delta Robot is one of those rare fully defined systems. You wouldn't need Damped Least Squares because there aren't redundant solutions to the IK problem. However I
like the design very much, especially that it was pulled off with legos.
Re: Inverse kinematics with Damped Least Squares Method
So last night i finished the IK simulation on Matlab using the Damped Least Squares method. Using the symbolic math functions on Matlab I found the effector equation by typing in:
>> syms theta1
>> syms theta2
>> syms theta3
>> syms theta4
>> syms theta5
>> syms dist1
>> syms dist2
>> syms dist3
>> syms dist4
>> syms dist5
>> syms delta
>> effector = Get5DoFLeg(dist1,dist2,dist3,dist4,dist5,theta1,theta2,theta3,theta4,theta5)
>> effector = simplify(effector)
I then wrote the function GetJacobian and used it on the effector symbolic object to generate the jacobian equation for my leg
function result = GetJacobian(s,variables,numVar)
for i=1:3
for j=1:numVar
result = myJacobian;
I copy/pasted the resulting humongous equations into a new function which solved the jacobian for some given distances and theta angles.
The following video shows how I used the Jacobian to calculate the IK problem and shows a Matlab simulation of Damped Least Squares IK in action:
Using the Jacobian I calculated the damped pseudo inverse, Jdagger:
function result = GetJDagger(variables)
angle1 = variables(1);
angle2 = variables(2);
angle3 = variables(3);
angle4 = variables(4);
angle5 = variables(5);
d1 = 2;
d2 = 1;
d3 = 3;
d4 = 4;
d5 = 2;
delta = .5;
Jacobian = SolveJacobian(angle1,angle2,angle3,angle4,angle5);
Jdagger = transpose(Jacobian)*((Jacobian*transpose(Jacobian)+delta^2*eye(3,3))^-1);
result = Jdagger;
Finally I put together a simulation of the DLS process using the following function which .
function result = GetLegPath( Start, Target, iterations )
currentJoints = Start;
dist1 = 2;
dist2 = 1;
dist3 = 3;
dist4 = 4;
dist5 = 2;
for i=1:iterations
currentPos = Get5DoFLeg(currentJoints(1),currentJoints(2),currentJoints(3),currentJoints(4),currentJoints(5));
dE = Target - currentPos;
JDagger = GetJDagger(currentJoints);
dTheta = JDagger*dE;
currentJoints = currentJoints + dTheta;
result = currentJoints;
Each iterations of the GetLegPath brings the currentJoints closer to the values needed to place the leg at the Target position.
Well that's it. Within 24 hours I went from knowing nothing of advance IK theory to having a working Matlab model Axon mcu and have it execute timed iterations of the DLS method to see if it is
practical for such a processor to run a quadruped.
Last edited by WGhost9; 01-26-2010 at 11:12 PM.
Re: Inverse kinematics with Damped Least Squares Method
Update: Well I ported my some of my matlab code to C on the axon today and timed it. The forward kinematics can be calculated for a single leg in 4 msecs and the Jacobian can be calculated in
13-14msecs so I think the Axon CAN do four legged calculations on the fly. However, the ROM overhead is significant. With the Axon stripped of unneeded libraries I am still using up 75% of
the memory space to calculate Jdagger. I need to either find a better (read more program space) micro controller or a I need to double up two axon microcontrollers to get this to work.
Re: Inverse kinematics with Damped Least Squares Method
Okay so I have some results from the Axon (ATmega640) running the Damped Least Squares algorithm:
Time to calculate the forward kinematics of a 5DoF leg: 3-4msecs
Time to calculate the the Jacobian: 14 msecs
Time to calculate the damped pseudo inverse of the Jacobian (Jdagger): 4 msecs
Total time needed to place a leg: 22msecs
Total time needed to place 4 legs: 88msecs
Estimated refresh rate for leg placements: Ten times a second
Code Space Required For Fast DLS: 40-56 kBytes
As you can see, the Axon can probably perform a reasonably smooth DLS algorithm. However it won't have the code space left for any other application. What I need is a new microcontroller that is
at least as fast as the Axon and has more code space. I am thinking of buying a XMOS XC-A1 board and hooking it up to an SSC-32 but I am not sure how to do this or if it is even feasible. Perhaps
there is another microcontroller I should be looking at? Please let me know what you think.
Last edited by WGhost9; 01-20-2010 at 11:57 AM.
Re: Inverse kinematics with Damped Least Squares Method
Absolutely you can hook the XMOS up to an SSC32. Or many many SSC32's.
Software serial is in the tutorials.
I Void Warranties�
01-18-2010 #2
01-18-2010 #3
Join Date
Dec 2009
Rep Power
01-18-2010 #4
01-19-2010 #5
Join Date
Sep 2008
Toronto, Ontario
Rep Power
01-19-2010 #6
01-19-2010 #7
01-19-2010 #8
01-20-2010 #9
01-20-2010 #10 | {"url":"http://forums.trossenrobotics.com/showthread.php?3812-Inverse-kinematics-with-Damped-Least-Squares-Method&highlight=axon","timestamp":"2014-04-18T05:30:45Z","content_type":null,"content_length":"101318","record_id":"<urn:uuid:0ff76458-6210-48fb-b63a-b2f62f19fa7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimizing search engines
by Swapnil Kotwal, Software Engineer at Vertis Infotech Ltd. on Aug 29, 2013
Optimizing Search Engines(A Mathematical Point view) ...
Optimizing Search Engines(A Mathematical Point view)
Following things covered here
- A basic introduction to Search Engine Optimizing.
Introduction to Google and Bing Webmaster.
- Use of Google Toolbar to see Page Rank of each page(Calculating importance of each page for Google Search Engines.)
- PageRank Algorithm(I will the focus on this point mostly).
- How it is useful to real SEO and practical implementation of SEO.
- Google Bomb.
Total Views
Views on SlideShare
Embed Views
Usage Rights
© All Rights Reserved | {"url":"http://www.slideshare.net/swapnilvkotwal/optimizing-search-engines","timestamp":"2014-04-18T06:52:58Z","content_type":null,"content_length":"137264","record_id":"<urn:uuid:702f7cf9-e63b-43fb-a104-d8923d923d3b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00276-ip-10-147-4-33.ec2.internal.warc.gz"} |
One variable: graphs and descriptive statistics
WHEN DO YOU NEED THEM?
When does one ever want to look at results for just one variable? True, most classical research involves questions/hypotheses that entail looking at relationships between at least two variables. But
here are some common situations where you want to look at graphs and descriptive statistics for cases on one variable:
● Some studies look directly at more or less all of an entire population of interest, and have questions about single variables, so the results just need to be given as graphs and descriptive
statistics. E.g.
-- A census survey to find out how many people in Wales speak Welsh shows that 20.8% claim to.
-- A teacher has the feeling (hypothesis) that her vocab teaching (or the students' learning of vocab) is not very successful. She tests her class to see how much of the vocabulary of the last ten
lessons they have learnt, as part of an action research project to improve her vocabulary teaching.
● In any study, however many variables are involved, it is often valuable to describe our cases in terms of each of a set of variables that are not central to the investigation, as part of your
control of unwanted factors (cf CVs and SAMPLING). E.g.
-- You are interested in the opinions of Greek learners in private schools in Greece about their English course materials, so you send 50 questionnaires to your friend who works in one to distribute
for you. You get thirty-one back. On the questionnaire, apart from questions about the central variables of your study, you might ask questions to elicit their first language, age, gender, experience
of English outside school etc. You then check each of these separately to see if it suggests your sample is unusual in any way (?unexpectedly many girls), has odd cases in it (?two who say their
first language is Bulgarian) etc. You might well display proportions and graphs for genders, age groups, respondents vs non-respondents etc. as part of your report on your subjects.
● In any study, however many variables are involved, you may be in the position of deciding groups of cases on the basis of information gathered about them, rather than in advance. E.g.
-- In the above example you might actually want to make up groups out of your subjects, using their questionnaire responses, to use as EVs. Examining the 'experience of English outside school'
variable you find there are ten who have been abroad to English speaking countries, so you might set up a two category EV on this basis to see if opinions about course materials relate to having/not
having this experience at all. If so, would you have any hypothesis about the answer? Anyway, you might display a graph or table showing the proportions.
● In any study, however many variables are involved, it is often valuable to look at results for all cases on each dependent variable or condition separately and/or in each group separately as well
as doing what is necessary to establish relationships between variables. E.g.
-- In our example of gender and attitude to RP, with the hypothesis that there is at attitude difference between genders. Apart from doing the relevant two-variable graphs and statistics one would do
well to explore the data with histograms for each group separately and the whole set of subjects as if one group. One does not necessarily report in the write-up every statistic or graph one
calculates if it does not prompt anything interesting to say about it, but even statisticians often comment that researchers often 'don't look at their data enough, but just want to do a significance
test and get on to the next thing'.
If you are not into one variable inferential stats (which we are omitting here), the choices to be made are simple:
Scale type of Variable:
Interval Rank order Categories Counts
of any sort in a continuum
Graphic histogram ordered list bar chart, single bar
Presentation: of scores, of cases pie chart
frequency (of frequencies
polygon or percent)
Centrality statistic: mean, median rank modal category frequency
median score,
modal score
Variation statistic: standard quartile index of commonality
deviation, deviation
-- Only the italicised ones are commonly met and will be dealt with here.
-- The modal category is simply the one with the most cases in it - the most popular one.
-- The mean (denoted by M or X-bar) is what we usually call the average in everyday English.
-- The standard deviation (SD) is a measure of the spread of scores. Roughly it is the average of the differences between each score and the mean score (see any stats book for the formula). So if
everyone in a group scores the same, which will be the mean for the group, then the average of the differences of each score from the mean is 0 (SD=0; no variation). The more each score differs from
the mean, the higher the SD gets, indicating more variation or 'disagreement' in the group. Usually one 'wants' small SDs.
-- Similar concepts to SD, calculated in various ways, are called 'error' and 'variance' in statistics.
Joke from WWW: Most of us have A Greater Than Average Number of Legs
The great majority of people have more than the average number of legs. Amongst the 57 million people in Britain there are probably 5,000 people who have only one leg. Therefore the average number of
legs is
((5000 x 1) + (56,995,000 x 2)) / 57,000,000 = 1.9999123.
Since most people have two legs... need I say more?
1. Shows two versions of a histogram of results for one group of 16 learners on one variable ('interval' scores for quality of each subject's written composition). Which version is better and why, or
is neither optimal? What distinguishes a histogram from a bar graph/chart (seen in 2)? When to use each?
2. Is a bar graph (=bar chart) showing the broad subject specialism of participants in a study. I.e. it displays how all cases are categorised on a two category variable. How would you improve it
for inclusion in a write-up?
3. Shows two bar graphs for the same data – a set of several mean scores. One group of learners has given their ratings (on a five point scale) of how much they think eight different aspects of their
compositions improved when done by word processing. The average ratings for each of these 8 variables are displayed together. Which is the better version and why?
1) Which sounds more impressive, A or B?
A) 2 out of 4 subjects agreed B) 50% of subjects agreed
A) 40 out of 80 subjects agreed B) 50% of subjects agreed
OK, but which result would you actually trust more? How should one report such results?
2) What is unclear? How to restate this better?
In our survey we polled 50 people, though 10 declined to participate. …. 60% said yes to the question ‘Do you like the English class?’…
3) Percentage scores versus group/aggregate percent.
Two ways of handling data arising from different numbers of potential occurrences for different people. Imaginary example of data where three subjects have been recorded in quasi-natural
conversation, and counts have been made of their NS-like/correct use of third person –s.
Why do the percent differ in A and B? Which would the statistician prefer and why?
A) Analysis with subjects as cases: percentage scores and their mean
Case Correct Incorrect Total occurrences Percent correct score Mean percent correct
Learner 1 12 12 24 50
Learner 2 8 12 20 40
Learner 3 3 9 12 25
Total 23 33 56
B) Analysis with occurrences as cases: group percent
Total frequency Percent
Correct 23 41.1%
Incorrect 33 58.9%
Total occurrences 56
PJS rev 05 | {"url":"http://privatewww.essex.ac.uk/~scholp/onevardesc.htm","timestamp":"2014-04-19T04:19:50Z","content_type":null,"content_length":"49270","record_id":"<urn:uuid:9bf321f3-fc84-407d-b8cb-70db146bad17>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00479-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fantasy Baseball Cafe
I'll piggyback on this thread because it's a similar situation...
I have a 5x5 H2H league that's been together awhile. We finally decided to go to 7x7. It was going to be 6x6 but these are the pitching cats I wanted: Wins, Losses, K/9, ERA, WHIP, Saves, Holds.
After enough badgering I finally got my way.
Now that we got that straightened out we're having some trouble agreeing on the hitting categories. Everyone is in agreement on OBP, SLG, HR, RBI, R, SB. The disagreement is over BA. Some people want
it because they can't fathom fantasy baseball without it and others don't like it because they feel it's redundant.
So for those of you who have similar leagues what hitting cats do you use?
Maine has a good swing for a pitcher but on anything that moves, he has no chance. And if it's a fastball, it has to be up in the zone. Basically, the pitcher has to hit his bat. - Mike Pelfrey
TheYanks04 wrote:Do yourself a favor and stay with standard 5x5 Roto. Holds? Come on. I just finished drafting a Holds league and you get into stunning debates over the value of Ray King, Steve
Kline, Antonio Alfonseca and JC Romero among others. If that rocks your boat then by all means. Putting 4 MRs on my roster for no other reason but for Holds is something I find absolutely silly.
your knowledge must be limited then
durham=we ghetto
Amazinz, combine OBP and SLG, add BA, and then add in some other category. Total bases? Negative strikeouts maybe for hitters? And for your pitching, you should probably just combine Wins and losses
to Wins-losses, and then throw in K/BB. This will give you ultimate disencouragement of spot starting.
The problem with adding holds is that it devaules pitchers even further behind hitters. Adding holds every pitcher still remains at most a 4 category stud while adding another offensive category
allows hitters to contribute heavily to 6. SLG% against is a decent pitching category if I was forced to add one.
RE: Amazinz
I would probably just try and combine the W and L pitching category into W minus L. You aren't really going to find any new hitting categories that do not overlap the ones you already have.
The value depends not only on the number of categories a player contributes to, but also on the variance in the contributions, so I'm not sure that adding holds reduces relative pitcher values. It
would be interesting to look at auction values on that.
Even so, I have no problem with having hitters valued more. It's clear that in real baseball hitter contributions have more value than pitchers, because:
A. offense and defense are roughly equal in the effects on winning.
B. hitters are basically 100% responsible for offense, while defense depends on both pitching and fielding.
C. So, hitters are at least 50% of the win equation, while pitchers are certainly less than that.
GotowarMissAgnes wrote:The value depends not only on the number of categories a player contributes to, but also on the variance in the contributions, so I'm not sure that adding holds reduces
relative pitcher values. It would be interesting to look at auction values on that.
It doesn't change much how pitchers are valued amongst each other, although it could make the top MR more valuable than the top Closer. What I said was it changed the value between hitters and
pitchers which it abolutely does.
Even so, I have no problem with having hitters valued more.
When it comes to making real baseball evaluations I have absolutely zero problem saying hitters are more valuable. But when it comes to the fantasy baseball game, I completely dislike the idea. To me
it makes for a rather boring draft when there is little debate needed on whether the best hitter is worth more than the best pitcher. There still isn't a ton of debate in my mind with a 5X5 setup,
but it is much closer than in a 6X6 with holds.
Tavish wrote:
It doesn't change much how pitchers are valued amongst each other, although it could make the top MR more valuable than the top Closer. What I said was it changed the value between hitters and
pitchers which it abolutely does.
I don't see how that can be true, Tavish, but would be interested in hearing the evidence. Adding a pitching category almost by necessity must change relative pitcher values.
But with respect to pitchers and hitters your statement, I think, was that it reduced pitcher values because pitchers were now only 4 category contributors. So, I'd like to see evidence not just that
it changes pitcher/hitter values, but that it reduces pitcher values and does so because of the fact that almost no pitchers contribute to all 6 categories.
My point (and I have no hard evidence either) is that the valuation of a player depends both on the number of categories and the distribution of outcomes within that category. Assume a really silly
example to see this---let every player with 100% certainty have the exact same result for all categories., including two categories where pitchers make no contribution. Every player will have the
exact same value because they all contribute equally to the outcome.
So, it's not just the number of categories, it's how outcomes are distributed both across and within categories. I'd agree with your point if the distribution of results was the same within each
category. But, each category's distribution leaves each player's statistical contribution that is different at the margin. The addition of holds, changes the distribution of within each pitching
category, and I don't think you can say for sure how it impacts pitcher versus hitter value.
GotowarMissAgnes wrote:But with respect to pitchers and hitters your statement, I think, was that it reduced pitcher values because pitchers were now only 4 category contributors. So, I'd like to
see evidence not just that it changes pitcher/hitter values, but that it reduces pitcher values and does so because of the fact that almost no pitchers contribute to all 6 categories.
My point (and I have no hard evidence either) is that the valuation of a player depends both on the number of categories and the distribution of outcomes within that category. Assume a really
silly example to see this---let every player with 100% certainty have the exact same result for all categories., including two categories where pitchers make no contribution. Every player will
have the exact same value because they all contribute equally to the outcome.
Players will not contribute equally to the outcome. Hitters have the chance to contribute equally to the hitting categories that apply to them and pitchers have chance to contribute equally to the
pitching categories that apply to them. What I have said twice now is that there is an increased difference between hitters and pitchers, not between hitters and hitters or pitchers and pitchers.
For an example suppose you split BA into SLG and OBA and add holds to give you a 6X6 league. The top hitters such as Bonds, Pujols, Helton, etc. now will help your team win in one additional category
and increase your overall point total. By adding holds how much more value does Santana or Gagne gain?
Its not the addition of a category that changes the dynamics, its the addition of a specialization category for pitchers and a category that applies to all hitters.
Tavish wrote:Players will not contribute equally to the outcome. Hitters have the chance to contribute equally to the hitting categories that apply to them and pitchers have chance to contribute
equally to the pitching categories that apply to them. What I have said twice now is that there is an increased difference between hitters and pitchers, not between hitters and hitters or
pitchers and pitchers.
For an example suppose you split BA into SLG and OBA and add holds to give you a 6X6 league. The top hitters such as Bonds, Pujols, Helton, etc. now will help your team win in one additional
category and increase your overall point total. By adding holds how much more value does Santana or Gagne gain?
Its not the addition of a category that changes the dynamics, its the addition of a specialization category for pitchers and a category that applies to all hitters.
I think I understand what you are saying, but I don't think it's as simple a connection as you seem to think. I've played with the USAToday custom values and I don't see the effect happening that you
think happens.
Yes, adding the category gives Bonds value in two categories, rather than one. But, it also means that some guys with high BA, but below average OBP and SLG lose value in two categories. Yes, Santana
gains nothing by holds, but adding holds adds tremendous value to some pitchers, and its effect on other pitchers depends on how the addition of those holders into the pool impacts the distribution
of the other categories.
So, I don't see how you can generally conclude that adding holds increases the difference in value between pitchers and hitters. | {"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=97372&start=10","timestamp":"2014-04-19T19:38:40Z","content_type":null,"content_length":"85930","record_id":"<urn:uuid:7e405a92-40b6-4f23-bd13-a032b9dbb3c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inverting a function
June 24th 2012, 03:04 PM #1
Jul 2010
Inverting a function
Given this function $y=\sqrt[3]{1-x^3}$, I have to find its inverse. How do you do that, and which is its inverse?
I am clueless, and I unfortunately need this answer tomorrow. I would hugely appreciate any help with this.
Re: Inverting a function
Re: Inverting a function
Could you please show me how to solve for $y$?
Re: Inverting a function
Cube both sides, subtract 1, multiply by -1 then take the cube root.
Re: Inverting a function
Re: Inverting a function
another way to look at this is:
the inverse of a composition is the reverse composition of the inverses:
$(f \circ g)^{-1} = g^{-1} \circ f^{-1}$.
to see that this is so, recall that an inverse of a function (provided one does exist) is a function g so that f(g(x)) = g(f(x)) = x.
we can write this as: $(f \circ g)(x) = (g \circ f)(x) = x$.
so let's calculate $[(f \circ g) \circ (g^{-1} \circ f^{-1})](x)$.
$[(f \circ g) \circ (g^{-1} \circ f^{-1})](x) = (f \circ g)((g^{-1} \circ f^{-1})(x))$
$= (f \circ g)(g^{-1}(f^{-1}(x)) = f(g(g^{-1}(f^{-1}(x))))$
since $g(g^{-1}(t)) = t$ no matter what "t" is, we have (taking $t = f^{-1}(x)$),
$f(g(g^{-1}(f^{-1}(x)))) = f(f^{-1}(x)) = x$.
the proof that $[(g^{-1} \circ f^{-1}) \circ (f \circ g)](x) = x$ is entirely similar.
now let's look at "your function":
$f(x) = \sqrt[3]{1 - x^3}$.
this is the composition of 3 functions:
$f = g \circ h \circ k$, where
$k(x) = x^3$
$h(x) = 1 - x$
$g(x) = \sqrt[3]{x}$
a little thought should convince you that:
$g^{-1}(x) = x^3$ (note that means k is g's inverse)
$h^{-1}(x) = 1-x$ (h is its own inverse, this can happen)
$k^{-1}(x) = \sqrt[3]{x}$ (and as we would expect, g is k's inverse).
in short, $f^{-1} = (g \circ h \circ k)^{-1} = k^{-1} \circ h^{-1} \circ g^{-1} = g \circ h \circ k = f$
you might find this slightly amazing.
what this means is: $(f \circ f)(x) = x$.
let's "try it" with some actual number, instead of x. how about x = 5?
$f(5) = \sqrt[3]{1 - 125} = \sqrt[3]{-124}$
ok, that's kind of a weird number. so let's find f(x) when $x = \sqrt[3]{-124}$.
$f(\sqrt[3]{-124}) = \sqrt[3]{1 - (\sqrt[3]{-124})^3} = \sqrt[3]{1 - (-124)} = \sqrt[3]{125} = 5$. huh. how about that?
Last edited by Deveno; June 24th 2012 at 10:52 PM.
Re: Inverting a function
another way to look at this is:
the inverse of a composition is the reverse composition of the inverses:
$(f \circ g)^{-1} = g^{-1} \circ f^{-1}$.
to see that this is so, recall that an inverse of a function (provided one does exist) is a function g so that f(g(x)) = g(f(x)) = x.
we can write this as: $(f \circ g)(x) = (g \circ f)(x) = x$.
so let's calculate $[(f \circ g) \circ (g^{-1} \circ f^{-1})](x)$.
$[(f \circ g) \circ (g^{-1} \circ f^{-1})](x) = (f \circ g)((g^{-1} \circ f^{-1})(x))$
$= (f \circ g)(g^{-1}(f^{-1}(x)) = f(g(g^{-1}(f^{-1}(x))))$
since $g(g^{-1}(t)) = t$ no matter what "t" is, we have (taking $t = f^{-1}(x)$),
$f(g(g^{-1}(f^{-1}(x)))) = f(f^{-1}(x)) = x$.
the proof that $[(g^{-1} \circ f^{-1}) \circ (f \circ g)](x) = x$ is entirely similar.
now let's look at "your function":
$f(x) = \sqrt[3]{1 - x^3}$.
this is the composition of 3 functions:
$f = g \circ h \circ k$, where
$k(x) = x^3$
$h(x) = 1 - x$
$g(x) = \sqrt[3]{x}$
a little thought should convince you that:
$g^{-1}(x) = x^3$ (note that means k is g's inverse)
$h^{-1}(x) = 1-x$ (h is its own inverse, this can happen)
$k^{-1}(x) = \sqrt[3]{x}$ (and as we would expect, g is k's inverse).
in short, $f^{-1} = (g \circ h \circ k)^{-1} = k^{-1} \circ h^{-1} \circ g^{-1} = g \circ h \circ k = f$
you might find this slightly amazing.
what this means is: $(f \circ f)(x) = x$.
let's "try it" with some actual number, instead of x. how about x = 5?
$f(5) = \sqrt[3]{1 - 125} = \sqrt[3]{-124}$
ok, that's kind of a weird number. so let's find f(x) when $x = \sqrt[3]{-124}$.
$f(\sqrt[3]{-124}) = \sqrt[3]{1 - (\sqrt[3]{-124})^3} = \sqrt[3]{1 - (-124)} = \sqrt[3]{125} = 5$. huh. how about that?
Thank you soooo much for putting so much effort into your answer. Yesterday I figured out that this function was its own inverse, but your steps made it so much clearer. Thank you
June 24th 2012, 03:19 PM #2
June 24th 2012, 03:23 PM #3
Jul 2010
June 24th 2012, 03:39 PM #4
June 24th 2012, 03:46 PM #5
June 24th 2012, 10:42 PM #6
MHF Contributor
Mar 2011
June 25th 2012, 05:04 AM #7
Jul 2010 | {"url":"http://mathhelpforum.com/advanced-algebra/200338-inverting-function.html","timestamp":"2014-04-18T10:44:35Z","content_type":null,"content_length":"64816","record_id":"<urn:uuid:98d92bfc-2579-4dd8-a68e-7aa1d560ceb3>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00366-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: December 2010 [00540]
[Date Index] [Thread Index] [Author Index]
Re: How can I solve semidefinite programming problems? Can any one with
• To: mathgroup at smc.vnet.net
• Subject: [mg114859] Re: How can I solve semidefinite programming problems? Can any one with
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Mon, 20 Dec 2010 00:41:45 -0500 (EST)
• References: <ieklo8$4uf$1@smc.vnet.net>
On Dec 19, 4:11 am, fmingu <fmi... at 163.com> wrote:
> Hello.
> I am a student learning signal processing. I need my data to be calculated which needs optimization. And I am using Mathematica for programming. But till now I do not find any functions in mathematica for semidefinte programming. The problem is
> min C'XC
> s.t. B>=0, W>=0 ,t>=0, X'=X
> where C' ,X' are the transpose of C and X, respectively. C,B,W are matrices where B is a block matrix containing X. t is a number to be calculated. X is the unknown symmetic matrix to be found. I asked Nathan Brixius, the author of SDPMATH package which is used in Mathematica for semidefinite programming in 90's. But the link to SDPMATH package is missing (http://www.cs.uiowa.edu/~brixius/sdp.html). And after I wrote a letter to the author, he said the package is lost.
> How can I solve semidefinite programming problems in mathematica? Are there any innard or build-in methods in mathematica? Can any one with kindness help me?
> Thanks a lot.
You might be able to finesse the positive definiteness constraints, at
least for smallish problems. This can be done by defining a function
that operates only on explicitly numeric matrices and returns the
minimal eigenvalue.
In the code and simple example below I do not check that the matrix
inputs are square and symmetric, but I enforce that in the way it is
called. This might need adjusting for various usages. I also do not
know whether Eigenvalues will always come up with real values when
given a symmetric input, as it is a numeric algorithm and subject to
numeric error. So I take the real part of the eigenvalues it computes.
In[1]:= n = 5;
cc = RandomInteger[{0, 10}, n];
In[3]:= xvars = Array[x, {n, n}];
Do[x[i, j] = x[j, i], {i, 2, n}, {j, i - 1}]
fvars = Union[Flatten[xvars]];
In[6]:= mineig[mat : {{_?NumericQ ..} ..}] :=
With[{eigvals = Eigenvalues[mat]}, Min[Re[eigvals]]]
In[7]:= {min, vals} =
FindMinimum[{cc.xvars.cc, {mineig[xvars] >= 0}}, fvars]
During evaluation of In[7]:= FindMinimum::eit: The algorithm does not
converge to the tolerance of 4.806217383937354`*^-6 in 500 iterations.
The best estimated solution, with feasibility residual, KKT residual,
or complementary residual of
is returned. >>
Out[7]= {-0.000102810847622932, {x[1, 1] -> 0.4745617099801545,
x[1, 2] -> -0.2366811680109855, x[1, 3] -> -0.2132627776342026,
x[1, 4] -> -0.2702226619552426, x[1, 5] -> -0.2453269610921312,
x[2, 2] -> 0.4759415776673994, x[2, 3] -> -0.03653531695016651,
x[2, 4] -> 0.103983724598686, x[2, 5] -> -0.01760774267839494,
x[3, 3] -> 0.4587401126362137, x[3, 4] -> -0.0003398923289131131,
x[3, 5] -> -0.07268674827647269, x[4, 4] -> 0.7191478545392413,
x[4, 5] -> 0.004481278729767096, x[5, 5] -> 0.4939968048471287}}
We check by how much the constraint is violated. The minimal
eigenvalue is, one rather suspects, a fuzzy zero.
In[10]:= mineig[xvars /. vals]
Out[10]= -4.990836973695068*10^-7
This is very much a "Your mileage may vary" approach. I would not
expect it to be terribly fast. I would not be surprised if it has
difficulty in enforcing the constraint, for some (perhaps most?)
problems. You might try alterations such as adding an explicit penalty
term in the objective function, perhaps altering it based on
StepMonitor option setting, etc. Below is a very simple form of this
idea, probably not so useful in practice.
FindMinimum[{cc.xvars.cc -
10^5*mineig[xvars], {mineig[xvars] >= 0}}, fvars]
Daniel Lichtblau
Wolfram Research | {"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00540.html","timestamp":"2014-04-18T19:23:36Z","content_type":null,"content_length":"28938","record_id":"<urn:uuid:8f9bc4f8-e419-4b69-8687-8a7b80f17dbe>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
How do I deal with multiple responses?
Title Dealing with multiple responses
Author Nicholas J. Cox, Durham University, UK
Ulrich Kohler, University of Mannheim, Germany
Date January 2003; updated July 2011
1 Defining the problem
Multiple responses—in the sense used here—are defined by a degree of open-endedness. In particular, a question in a survey may receive zero or more positive answers depending on the characteristics
or behavior of the respondent. For example, respondents might be asked: Have you experienced any of the following symptoms or received information on a subject from any of the following media? Do you
ever drink tea, coffee, wine, beer, or water? Do you travel to work by foot, bicycle, motorcycle, car, bus, tram, train, boat, ski, skates, sledge, horse, camel, yak, ...? (This may seem like a
simple question to you, but consider commuters who cycle or drive to catch a train and then end their journey to work with a walk.)
We are not here to discuss multivariate responses in general, nor repeated measures, nor panel data or longitudinal data, etc.
In statistical computing terms, such multiple responses may pose difficulties both for data structure and for data analysis. Most commonly, they are held as a set of variables, but sometimes it can
be useful to hold them as a single variable. No structure is ideal for all purposes, and often you may want to convert from one structure to another. Similarly, you may want to look at results for
individual variables or at results calculated from one or more of these variables. The subject is large and one FAQ cannot cover all possibilities. You may be able to add suggestions to those here,
so that users may be advised of helpful tips or of pitfalls to be avoided. In particular, we would welcome literature references.
Reference is made below to various user-written programs on SSC or in the Stata Journal (SJ) or the Stata Technical Bulletin (STB). If you need explanation of SSC, please see the FAQ: "I see
references to the findit and ssc commands on Statalist, but my Stata does not recognize these commands. What should I do?". If you need explanation of the SJ or STB, look at [R] sj or help stb. When
a version of Stata is specified, it indicates the earliest version on which that program will run.
As this FAQ is fairly long, and not all readers may want to read all the way through, some repetition is built-in.
2 How data may be held
2.1 Indicator variables
Let us look first at a relatively simple example. This crucially important question might appear in a questionnaire:
Which of the following software packages do you use for data analysis?
1 R
2 S-Plus
3 SAS
4 SPSS
5 Stata
6 others
In this question, respondents are asked to mark the name of each package they use. Respondents may mark any number of packages. The number before each package name is used as a code in some coding
schemes discussed below.
For many statistical analyses, the answers of the respondents are best coded as a set of indicator or dummy variables, something like this:
q1_R q1_SPlus q1_SAS q1_SPSS q1_Stata q1_others
1. 1 0 0 0 0 1
2. 1 1 0 0 1 0
3. 0 0 0 0 1 0
4. 0 0 1 0 0 0
5. 0 0 1 0 0 1
That is, there should be a variable for each possible answer, with value 1 if a respondent uses a specific package and 0 otherwise. The first respondent in this example uses R and some other package;
the second respondent uses R, S-Plus, and Stata; and so on. We use names for the variables that have a common prefix. This is a small detail, but it makes it easier to refer to the variables
collectively using a wildcard, such as q1_*.
Data on multiple responses with this coding scheme can be used immediately for many analyses. For example, you might want to know how many respondents use Stata. Type
. count if q1_Stata == 1
or type
. tabulate q1_Stata
You might want to see the distribution of the number of packages used by the respondents. This is just the row sum of the variables, most easily calculated by egen.
. egen npkg = rowtotal(q1_*)
. tabulate npkg
You might want to know the distribution of users of software packages. One method is to summarize the variables and compare their means, but a better method is through tabstat.
. tabstat q1_*, s(sum) c(s)
The use of row sums and of variable sums across 1s and 0s underlines the value of holding data in indicator variable form.
2.2 Ranked multiple responses
A common variant is that the question asks you to rank choices, say, from most common to least common use, or in some other way.
q1_1 q1_2 q1_3 q1_4 q1_5 q1_6
1. 1 6 0 0 0 0
2. 5 2 1 0 0 0
3. 5 0 0 0 0 0
4. 3 0 0 0 0 0
5. 6 3 0 0 0 0
Thus using the coding scheme indicated previously, person 1 uses R most and some other package next. There is more information recorded in this variant form, as the first data structure can be
obtained from this one, but not conversely. Two common variations on this scheme are to use numeric missing rather than 0 and to use string variables including names rather than numeric codes.
This structure evidently makes it easy to focus on which package is most commonly used. It makes it difficult to focus on which packages are used at all, and so forth.
We mention here a possibility that researchers may encounter or produce tied ranks in some projects. How best to handle tied ranks is not considered here.
2.3 “Order of mention” multiple responses
Yet another situation is that answers have been coded in the order in which the respondent mentioned them. Such data may look like ranked multiple responses, but their interpretation may or may not
be similar. In some fields, it appears common to take the order in which responses were mentioned as a tacit indication of an underlying order. For example, suppose you were asked to state brands of
some item you purchase or you know about. Marketing people could be interested in what springs most readily to your mind. Whether “order of mention” is tantamount to ranking is a substantive matter
for you to consider.
2.4 Composite string variables
Sometimes the answers to multiple responses are put into one string variable. Commonly, this is the concatenation of the codes of possible (positive) answers. For our example data, such a variable
could look like the following (using numeric codes):
1. 16
2. 521
3. 5
4. 3
5. 63
Or it could look like the following:
1. R others
2. Stata S-Plus R
3. Stata
4. SAS
5. SAS others
The variable spkg1 states for the first observation that this respondent uses the software packages 1 and 6, which means R (1) and some other package (6). This may look like a numeric variable, but
it should be a string variable. In our experience, both producing such a variable from other variables and working with such a variable are much easier when it is a string variable than when it is an
integer-valued numeric variable. In any case, as soon as the number of possibilities exceeds 10, you will need to punctuate to avoid ambiguity. Otherwise, someone mentioning symptoms 1 and 3 from a
list would be treated the same as someone mentioning symptom 13: both would be represented by "13". Similarly, as in the example of spkg2, once nonnumeric characters are used then there is no reason
not to include punctuation to make elements clearer, unless you are near the limiting size of string variables.
There are various issues that can arise in practice. If packages are being ranked, then "Stata others" has a different meaning from "others Stata" but not otherwise. In particular, with unranked
data, be warned: values that to you are identical but nevertheless differ literally will be tabulated or counted separately. Similar comments apply to leading and trailing spaces, accidental
misspellings, or inconsistencies in upper- and lowercase. In the latter situation, problems may be solved by working consistently, say, in lowercase with the aid of the lower() function. (See [D]
This structure is particularly useful for showing combinations of choices, say, in tables of the composite variable. As the number of possible answers grows, the number of possible combinations also
grows rapidly. Even setting aside the possibility of ranking, k choices mean 2^k possible combinations. However, this is a fact whatever the data structure.
An important detail here is whether the variable really is a string variable or (despite our general advice) a numeric variable. When tabulating a string variable, Stata will sort "12" before "2";
when tabulating a numeric variable, Stata will sort 2 before 12. The convention that is better for you will depend on your purpose. Thus, with a string representation, all choices with 1 as first
character will be tabulated adjacently, whereas with a numeric representation all choices coded by one digit will be tabulated adjacently. Either could be useful.
2.5 Single variable in a long data structure
Another data structure holds all information in a single variable with repeated observations for each individual in the dataset. An example might be the following:
id q1
1. 1 R
2. 1 others
3. 2 R
4. 2 S-Plus
5. 2 Stata
6. 3 Stata
7. 4 SAS
8. 5 SAS
9. 5 Stata
In the jargon associated especially with the reshape command, this example is of a long data structure.
The answers, here in q1, could be held in a string variable or in a numeric variable with value labels attached. To make full use of the information in such data, an identifier variable, here id, is
essential. An identifier variable was not needed for any of our earlier examples. There is no requirement to show zero or missing responses; that is, to make explicit the fact that the person with id
1 does not use programs other than those mentioned. Thus this data structure is economical as a way of holding multiple response data, but it is correspondingly awkward as a way of holding other data
on the same individuals. Suppose, for example, that we were also holding data on individuals’ age, sex, and field of study. This information would be best held repeated for each observation, which is
inefficient (but otherwise not especially problematic).
Data on multiple responses in this structure can be used immediately for many analyses. For example, you might want to know how many respondents use Stata. If q1 is a string variable, type
. count if q1 == "Stata"
or if q1 is a numeric variable in which Stata is represented by 5, type
. count if q1 == 5
Data in this structure may be used easily for analyses of subsets defined by separate answers, either a particular subset or several subsets. The information yielded by count, and more, is available
by typing
. tabulate q1
which shows the distribution of users of software packages.
You might want to see the distribution of the number of packages used by the respondents. This is just the number of observations for each individual (distinct id) for which q1 is not missing. If q1
is never missing, this is yielded by typing
. by id, sort: generate npkg = _N
Irrespective of whether q1 is ever missing, this is yielded by typing
. by id, sort: egen npkg = count(q1)
as count() counts how often its argument is not missing; see [D] egen.
However, if this were followed by
. tabulate npkg
the individual with id 1 would be shown twice, that with id 2 three times, and so on. We need a way of selecting each id just once. An egen function is dedicated to this task, tag(). This function
tags just one observation in each group of identical values with value 1 and any other observations in the same group with value 0.
. egen tag = tag(id)
. tabulate npkg if tag
The idiom if tag as a contraction of if tag == 1 is always safe, as tag() never produces missing values. This device has many other uses whenever we wish to relate multiple response data to other
data for each individual.
A final advantage of this structure is that it is also applicable to ranked multiple variables, given an extra variable holding ranks. It is then easy using, for example, generate, egen, tabulate,
by:, and if to produce many basic analyses.
Despite some major advantages, this data structure is awkward for working with conditions specifying more than one answer. There are some ways to approach this, but they are not attractive. We can
tag those who use both R and Stata in this way, illustrated by the case of string variables:
. by id, sort: egen R_and_Stata = total(q1 == "R" | q1 == "Stata")
. replace R_and_Stata = R_and_Stata == 2
One part of the argument of total(), that is, q1 == "R", will pick up any observation for which this is true. The other part of the argument, that is, q1 == "Stata", will pick up any observation for
which this is true. The sum of a result of 1 if each condition is satisfied just once for an individual should be 2. Naturally, that sum is not affected by any number of results of 0 arising whenever
any condition is false. However, although we can make some progress with such questions and this data structure, other data structures are far superior whenever examining two or more answers
2.6 Missing values and the appropriate denominator
Missing values are likely to be common with multiple response data. Even if everybody answered the question—which is unusual in many surveys—everyone may not give the same number of responses. Even
when asked to rank a fixed number of specified items, respondents often stop ranking when they are indifferent to items, perhaps through lack of experience or knowledge.
A related issue is the appropriate denominator in calculating proportions or percents. Again, there will almost always be a difference between “number of respondents” and “number of responses”.
Either or both may be of substantive interest.
Here flag two pertinent details specific to Stata.
First, remember when working with integer variables that numeric missing counts as nonzero and therefore as true. For background, see the FAQ: “What is true and false in Stata?”. This can be
especially important when trying to produce, or when working with, indicator variables for which the possible nonmissing values are just 1 and 0.
Second, egen, anymatch() and egen, anycount() never return missing results. We say more on this in 3.6.1 Many-to-many mappings: using egen.
3 How to change data structure
3.1 Many-to-one mappings: concatenating variables
You can concatenate variables by adding them as string variables or as the string equivalent of numeric variables. A tool specifically for this purpose is egen, concat(). See [D] egen for a more
detailed discussion and examples. For example, given
q1_1 q1_2 q1_3 q1_4 q1_5 q1_6
1. 1 6 0 0 0 0
2. 5 2 1 0 0 0
3. 5 0 0 0 0 0
4. 3 0 0 0 0 0
5. 6 3 0 0 0 0
you can type
. egen response = concat(q1_*)
without worrying about whether the variables are numeric or string, as egen, concat() automatically converts to string equivalent. You might want to remove the zeros that pad out the result
1. 160000
2. 521000
3. 500000
4. 300000
5. 630000
which is easy with one of Stata's built-in string functions:
. replace response = subinstr(response,"0","",.)
Given a structure of indicator variables
q1_R q1_SPlus q1_SAS q1_SPSS q1_Stata q1_others
1. 1 0 0 0 0 1
2. 1 1 0 0 1 0
3. 0 0 0 0 1 0
4. 0 0 1 0 0 0
5. 0 0 1 0 0 1
you might prefer a concatenation more obviously interpretable than "100001", "110010", etc., which yields values like "R others":
. gen str1 q1 = ""
. qui foreach p in R SPlus SAS SPSS Stata others {
. replace q1 = q1 + "`p' " if q1_`p' == 1
. }
. replace q1 = trim(q1)
For more detail on foreach, see foreach or a tutorial in Cox (2002).
3.2 Many-to-one mappings: reshaping to long
First, let us suppose our data are
id q1_R q1_SAS q1_SPlus q1_Stata q1_others sex
1. 1 1 0 0 0 1 male
2. 2 1 0 1 1 0 female
3. 3 0 0 0 1 0 male
4. 4 0 1 0 0 0 female
5. 5 0 1 0 1 0 female
which is an example of what in reshape jargon is described as a wide data structure. The q1_* are numeric indicator variables. Later, we will comment on data in which ranks are given.
To convert this structure to a long data structure in which program choice is represented by a single variable, we need to use reshape. In addition to [D] reshape, also see the FAQ: "I am having
problems with the reshape command. Can you give further guidance?".
The key to such reshape questions is to think in terms of a data matrix in which data are ordered by rows and columns, indexed conventionally in matrix algebra by i and j, respectively. The rows we
have are defined by the distinct values of id and the columns we have are the variables q1_*. The variable names have in common a stub q1_, and they differ in the suffixes following the stub, R, SAS,
etc. If the variable names do not have this stub plus suffix form, you will need to apply rename before you can apply reshape. For further discussion, see the FAQ just mentioned.
Our reshaping will be mapping the columns of the data matrix (variables q1_*) into one column, with other variables being rearranged to match. We specify the stub, and we also need to spell out that
the data variable to be created will be string.
. reshape long q1_ , i(id) string
The result is
id _j q1_ sex
1. 1 R 1 male
2. 1 SAS 0 male
3. 1 SPlus 0 male
4. 1 Stata 0 male
5. 1 others 1 male
6. 2 R 1 female
7. 2 SAS 0 female
8. 2 SPlus 1 female
9. 2 Stata 1 female
10. 2 others 0 female
11. 3 R 0 male
12. 3 SAS 0 male
13. 3 SPlus 0 male
14. 3 Stata 1 male
15. 3 others 0 male
16. 4 R 0 female
17. 4 SAS 1 female
18. 4 SPlus 0 female
19. 4 Stata 0 female
20. 4 others 0 female
21. 5 R 0 female
22. 5 SAS 1 female
23. 5 SPlus 0 female
24. 5 Stata 1 female
25. 5 others 0 female
which is almost where we want to be. There is no point in being explicit about programs not used, so we
. drop if q1_ == 0
and follow by dropping that variable altogether and by using a more intuitive name:
. drop q1_
. rename _j q1
Here is the result:
id q1 sex
1. 1 R male
2. 1 others male
3. 2 R female
4. 2 SPlus female
5. 2 Stata female
6. 3 Stata male
7. 4 SAS female
8. 5 SAS female
9. 5 Stata female
As seen, we need not worry about variables such as sex that are constant within id. They will get carried along automatically.
We promised to look at data in which ranks were given.
id q1_1 q1_2 q1_3 sex
1. 1 R others male
2. 2 R S-Plus Stata female
3. 3 Stata male
4. 4 SAS female
5. 5 Stata SAS female
The data matrix we have has rows defined by the distinct values of id and columns, which are the variables q1_*. The new data structure will have a single variable indicating software rank, which can
be done directly:
. reshape long q1_ , i(id) j(rank)
(output omitted)
. list
| id rank q1_ sex |
1. | 1 1 R male |
2. | 1 2 others male |
3. | 1 3 male |
4. | 2 1 R female |
5. | 2 2 S-Plus female |
6. | 2 3 Stata female |
7. | 3 1 Stata male |
8. | 3 2 male |
9. | 3 3 male |
10. | 4 1 SAS female |
11. | 4 2 female |
12. | 4 3 female |
13. | 5 1 Stata female |
14. | 5 2 Stata female |
15. | 5 3 female |
We do not need observations with missing q1, and we can clean up the variable name,
. drop if missing(q1_)
. rename q1_ q1
resulting in
| id rank q1 sex |
1. | 1 1 R male |
2. | 1 2 others male |
3. | 2 1 R female |
4. | 2 2 S-Plus female |
5. | 2 3 Stata female |
6. | 3 1 Stata male |
7. | 4 1 SAS female |
8. | 5 1 Stata female |
9. | 5 2 Stata female |
This example was of a string variable. Any value labels attached to a numeric variable survive the reshape, so it appears immaterial whether q1 is string or numeric with labels. (In practice, it is a
good idea to ensure that the numeric variables in the data matrix have the same value labels.)
3.3 One-to-many mappings: indicator variables
Given a composite variable, with values such as "125" or "Stata R", how can it be converted to a set of indicator variables? One answer lies in the strpos() function, one of Stata's string functions,
which we will document at some length, partly because it is often useful for other problems as well. We assume here that you are following our advice and holding the codes as a composite string
variable. If not, then in the examples below, use, e.g., strpos(string(varname)) rather than strpos(varname).
strpos() is used to find the position of one string within another. To find the position of the string "I" in the string "Where am I?", you can type
. display strpos("Where am I?", "I")
and Stata will return 10, meaning that the string "I" is found starting at the 10th position. What happens if you ask for the position of the string "you" in "Where am I?"? Since "you" is not
included in the longer string, strpos() returns 0. More generally, a positive result from strpos() means that one string is included within another and a zero result means that it is not.
We can also feed to strpos() any expression that evaluates to a string, such as the name of a string variable, so that a new variable can be generated as follows:
. generate byte q1_1 = strpos(spkg1, "1") > 0
strpos(spkg1, "1") will return a positive number if "1" is included in a value of spkg1 and 0 otherwise. strpos(spkg1, "1") > 0 will in turn evaluate to 1 if true and to 0 if false, thus yielding an
indicator variable. For background, see the FAQ "What is true and false in Stata?".
In passing the specification of a byte variable type, possible here because we know that the possible values are well within the limits for that data type; for more information, see data types. Using
an economical data type for an indicator variable can be helpful whenever space is short.
We will want to generate similar variables for other answers. Doing this variable by variable can be avoided, for example, by using forvalues:
. forvalues i = 1/6 {
. generate byte q1_`i' = strpos(spkg1, "`i'") > 0
. }
For more detail on forvalues, see forvalues or a tutorial in Cox (2002). A further extension would be something like
. forvalues i = 1/6 {
. capture assert strpos(spkg1, "`i'") == 0
. if _rc {
. generate q1_`i' = strpos(spkg1, "`i'") > 0
. }
. }
What is going on here? Any statement tested by assert will yield a so-called return code that is zero if the statement is true for all observations examined and a return code that is nonzero (in
fact, 9) if it is false. We test to see if any observations contain values other than zero before we generate a new variable. The capture ensures that everything continues smoothly, whatever the
In particular, in our dataset nobody uses SPSS, so, arguably, we could dispense with an indicator variable for that choice. When we get to
assert strpos(spkg1, "4") == 0
this assertion will be true of all the data, and the return code from assert will be 0. So, the return code—which is accessible in _rc—will be nonzero and thus true. More generally, this approach
will avoid creation of variables for any choices that were possible but happen to have been chosen by none of the sample.
This approach will work well with choices coded by one-digit characters, numeric or otherwise. You need to be more careful, however, when the choices include say "1", "10", "11", as a search for the
character "1" will then find it whenever it occurs as part of "10", "11", and so forth. Given space separation, as "1 10 11", one possibility is to search for " 1 " within the string expression " " +
string_variable + " ". Another possibility is to split the variable into "words" and then work from the resulting variables. This possibility is explained in more detail in the next subsection.
Typically easier, however, are unambiguous strings, as exemplified by
. foreach p in R S-Plus SAS SPSS Stata others {
. local P : subinstr local p "-" ""
. gen byte q1_`P' = strpos(spkg2, "`p'") > 0
. }
which generates the variables q1_R, q1_SPlus and so forth, with values 1 and 0 just like in the example before. (Incidentally, for S-Plus we need to catch the hyphen, which may not appear as a
character in a variable name.) Again this is all totally literal and thus dependent on consistent spelling, use of spaces, and use of upper- and lowercase. On that last point alone, we can be more
broad-minded in this way,
. foreach p in S-Plus SAS SPSS Stata others {
. local P : subinstr local p "-" ""
. gen byte q1_`P' = strpos(lower(spkg2), lower("`p'")) > 0
. }
but we need a separate approach for R, given that "r" is evidently part of “others”.
Finally, you may catch choices never made, just as before:
. foreach p in R S-Plus SAS SPSS Stata others {
. local P : subinstr local p "-" ""
. capture assert strpos(lower(spkg2), lower("`p'")) == 0
. if _rc {
. gen byte q1_`P' = strpos(lower(spkg2), lower("`p'")) > 0
. }
. }
3.4 One-to-many mappings: splitting variables
A composite string variable with values such as "125" or "43" can be split into individual str1 variables by a simple loop. You just need to find out the length of the composite, say, from describe.
Suppose that you want to split a str7 variable:
. forvalues i = 1/7 {
. gen str1 r`i' = substr(response,`i',1)
. }
A composite string variable with values such as "Stata R" or “coffee,beer”, in which words or phrases or other elements are separated by some punctuation, say, a space or a comma, is best handled by
another approach. In Stata 8 or later versions, this can be done with the split command. In Stata 7, you can use the predecessor of that command, split by Nicholas J. Cox from SSC. In Stata 6, you
can use the predecessor of that command, strparse by Michael Blasnik and Nicholas J. Cox from SSC.
3.5 One-to-many mappings: reshaping to wide
First, let us suppose that our data are like
id q1 sex
1. 1 R male
2. 1 others male
3. 2 R female
4. 2 S-Plus female
5. 2 Stata female
6. 3 Stata male
7. 4 SAS female
8. 5 SAS female
9. 5 Stata female
which is an example of what was earlier described as a long data structure. To resolve an ambiguity, let us specify that q1 is a string variable. Later, we will comment on the case of a numeric
variable with value labels. Finally, we will comment on data in which ranks are given.
To convert this structure to a wide data structure in which each distinct answer in q1 is represented by a single variable, we need to use reshape. In addition to [D] reshape, also see the FAQ: "I am
having problems with the reshape command. Can you give further guidance?".
The key to such reshape questions is to think in terms of a data matrix in which data are ordered by rows and columns, indexed conventionally in matrix algebra by i and j, respectively. The rows we
desire are defined by the distinct values of id and the columns we desire are defined by the distinct values of q1. Those values will be used as the suffixes of a set of variables. If q1 is a string
variable, we immediately have a small problem: the "-" within S-Plus is not acceptable within a variable name. We could fix this by
. replace q1 = subinstr(q1,"-","",.)
or in more difficult situations, we could encode a string variable into a numeric variable. In the matrix itself, we want indicator variables in which 1 represents yes and 0 no. All our observations
at present are in effect instances of 1, but we need to make that explicit:
. gen byte one = 1
That creates a variable that is 1 in every observation. In most circumstances, such a variable would be pointless, but here it is essential. The variable is created as a byte variable to economize on
storage. You can dispense with this detail if you have plenty of memory to spare.
Now we can reshape the data:
. reshape wide one, i(id) j(q1) string
We need not worry about variables such as sex that are constant within id. They will get carried along automatically. (If, contrary to assumption, they are not constant within id, then you will get
an error message and no reshape, as something that should be true of your data is in fact false.) Here is the result of the reshape:
id oneR oneSAS oneS_Plus oneStata oneothers sex
1. 1 1 . . . 1 male
2. 2 1 . 1 1 . female
3. 3 . . . 1 . male
4. 4 . 1 . . . female
5. 5 . 1 . 1 . female
We are almost done, but, depending on taste, there may be some cleaning up to do. First, we have a stub for the new variables that may not be to our liking. One specific way to fix that is with
. rename one* q1_*
Second, we may wish to change all the missings in q1_* to 0. Once again, a specific command can do this, mvencode:
. mvencode q1_*, mv(0)
We promised to comment on the case in which the argument of j(), here q1, is a numeric variable with value labels attached. The code is similar to the previous commands:
. gen byte one = 1
. reshape wide one, i(id) j(q1)
. rename one* q1_*
. mvencode q1_*, mv(0)
However, a side effect of reshape here is that the value labels associated with q1 get dropped. For this reason, using a string variable is attractive whenever practicable, bearing in mind that the
values of the string variable are destined to be variable name suffixes; hence, only alphabetical, numeric, and underscore characters are allowed.
We also promised to look at data in which ranks were given, which is even easier.
id q1 sex rank
1. 1 R male 1
2. 1 others male 2
3. 2 R female 1
4. 2 S-Plus female 2
5. 2 Stata female 3
6. 3 Stata male 1
7. 4 SAS female 1
8. 5 SAS female 2
9. 5 Stata female 1
The data matrix we seek has rows defined by the distinct values of id and columns defined by the distinct values of rank. In the matrix itself, we want variables indicating software, which can be
done directly:
. reshape wide q1, i(id) j(rank)
. rename q1* q1_*
In this problem, any value labels attached to a numeric variable q1 do survive the reshape, so it appears immaterial whether q1 is string or numeric with labels.
3.6 Many-to-many mappings
3.6.1 Many-to-many mappings: using egen
The most common problem here seems to be the creation of indicator variables from variables indicating successive choices. One pertinent tool in official Stata for the case of integer codes held in
numeric variables is egen, anycount(). The result can be thought of as number of variables equal to any of the values specified. A sibling is egen, anymatch(). The result can be thought of indicating
whether values of variables are equal to any of the values specified.
For example, given the ranked responses
q1_1 q1_2 q1_3 q1_4 q1_5 q1_6
1. 1 6 0 0 0 0
2. 5 2 1 0 0 0
3. 5 0 0 0 0 0
4. 3 0 0 0 0 0
5. 6 3 0 0 0 0
we can generate the corresponding variables:
. forvalues i = 1/6 {
. egen Q1_`i' = anycount(q1_*), val(`i')
. }
First, we loop over the possible answers (the values of the data), here the integers 1/6. More complicated sets of answers might be better handled using foreach. For each possible answer, in turn 1 2
3 4 5 6, we count how many of the variables—here the q1_*—are equal to any of the values specified—here just a single value in each case. We also use uppercase Q1 as a prefix for the new variables.
Above all, appreciate that the new variables do not retain all the information in the originals, as we are ignoring the information on rank order.
Using anycount() rather than anymatch() is a small wrinkle. With this example, we expect that each package will be mentioned at most once, but counting with anycount() allows a data check. Any
multiple count will show up as a value of 2 or more, and we can identify any respondent trying to subvert the questionnaire by repeatedly mentioning their favorite software—or, if it seems
appropriate, treat that as a measure of strength of interest.
Naturally, if you prefer, you can use anymatch(). This function is guaranteed to produce an indicator variable with values 1 or 0.
If you choose either of these functions, as mentioned earlier, neither function ever produces missing values as a result. This may be surprising, but it was intended as a feature, given what was seen
as the most likely uses of the generated new variables and how they might appear within Stata commands. However, if all the variables supplied as arguments are missing in an observation, then the
result of anymatch() or anycount() will be 0 for that observation. If you want to recode such 0s to numeric missing, here is one way to do it. We exploit the fact that the observation-wise (rowwise)
maximum will be returned as missing by egen, rowmax() if and only if all values examined in an observation are missing.
. egen rmax = rowmax(q1_*)
. forvalues i = 1/6 {
. replace Q1_`i' = . if rmax == .
. }
A crucial limitation is that both functions anymatch() and anycount() apply only to integer codes. With arbitrary string codes, say,
q1_1 q1_2 q1_3 q1_4 q1_5 q1_6
1. R others
2. Stata S-Plus
3. Stata
4. SAS
5. others SAS
we need to create our own numeric measures from first principles; for example,
. foreach p in R S-Plus SAS SPSS Stata others { /* loop over responses */
. local P : subinstr local p "-" ""
. gen byte q1_`P' = 0
. forval i = 1/6 { /* loop over existing variables */
. qui replace q1_`P' = q1_`P' + (strpos(q1_`i',"`p'") > 0)
. }
. }
Here we need a double loop, one over possible responses, initializing a variable to 0, and one over existing variables, adding 1 each time we find the package name inside. Counting whether strpos()
returns a positive count is here a more general than testing for equality, as it guards against the possibility that leading and/or trailing spaces have somehow been added to the variable. Nothing is
done here directly about consistency of case—we have already seen how to tackle that—or catching misspellings.
The code example here uses addition to produce an analog of egen, anycount(). One way of producing an analog of egen, anymatch() is to use the or operator |, as 0 | 1 and 1 | 1 both yield 1. For
context, see operators.
3.6.2 Many-to-many mappings: user-written programs
A program zb_qrm by Eric Zbinden (SSC; Stata 5) maps from a set of numeric variables with codes 1 upwards to a set of indicator variables for those codes. It also displays information on the
occurrence pattern of indicators.
A program mrdum by Lee Sieswerda (SSC; Stata 7) is similar but is on the whole more general.
These programs differ over what is an appropriate denominator, all observations or all observations containing at least one response. As flagged previously, various choices may be sensible depending
on the problem being tackled.
4 Tabulations
Tabulation itself is a large and complex subject. Our aim in this section is just to give some pointers to commands that may be of use.
Stata’s official commands do not give much support to multiple response variables, although we gave an example earlier of the application of tabstat. One general strategy is to use an egen function
to calculate something, (possibly) egen, tag() to tag just one observation in each of several groups, and then list to show the results. Using collapse or contract followed by list is more drastic.
Alternatively, user-written commands in this territory include
• mrtab (Ben Jann, SJ 5-1; Stata 8.2) A highly versatile command for one-way and two-way tables.
• tabcond (Nicholas J. Cox, SSC; Stata 7) Tabulates frequencies satisfying up to 5 specified conditions. Zero frequencies are shown explicitly.
• tabm (Nicholas J. Cox, SSC as part of tab_chi; Stata 7) Tabulates two or more comparable variables, in a combined two-way table of variables by values. Either all variables should be numeric, or
all variables should be string.
• tabsplit (Nicholas J. Cox, SSC as part of tab_chi; Stata 8; earlier version tabsplit6 for Stata 6 or 7) Tabulates frequencies of occurrence of the parts of a string variable. By default, the
parts of a string are separated by spaces. Optionally, alternative punctuation characters may be specified.
• tabw (Peter Sasieni, STB-25; Stata 3.1). For each variable in a list, it tabulates the number of times it takes on the values 0, 1, ..., 9; the number of times it is missing; and the number of
times it is equal to some other value. String variables are not tabulated but are identified at the end of the displayed table.
5 Acknowledgment
Lee Sieswerda made several helpful comments on a draft.
6 Reference
Cox, N. J. 2002.
Speaking Stata: How to face lists with fortitude. Stata Journal. 2: 202–222. | {"url":"http://www.stata.com/support/faqs/data-management/multiple-responses/","timestamp":"2014-04-16T22:08:05Z","content_type":null,"content_length":"74882","record_id":"<urn:uuid:cede3052-0e4e-4d1b-8393-57dabfaa690c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Can you identify 2 models that are used in science?
• 9 months ago
• 9 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51e04076e4b076f7da41bdf6","timestamp":"2014-04-17T16:45:48Z","content_type":null,"content_length":"37451","record_id":"<urn:uuid:d7511ca4-194e-4a41-a2af-40e6bdf26673>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
CS 8803 LIC: Lattices in Cryptography 2013
Meeting: Mondays and Wednesdays, 1-2:30pm, Cherry Emerson 204
First meeting: Monday, August 26
Instructor: Chris Peikert (cpeikert ATHERE cc.gatech.edu)
Office Hours: Klaus 3146, by appointment
Lecture notes Course description
Point lattices are remarkably useful in cryptography, both for cryptanalysis (breaking codes) and more recently for constructing cryptosystems with unique security and functionality properties. This
seminar will cover classical results, exciting recent developments, and several important open problems. Specific topics will include:
• Mathematical background and basic results
• The LLL algorithm, Coppersmith's method, and applications to cryptanalysis
• Complexity of lattice problems: NP-hardness, algorithms and other upper bounds
• Gaussians, harmonic analysis, and the smoothing parameter
• Worst-case/average-case reductions (SIS and LWE)
• Basic cryptographic constructions: one-way functions, encryption schemes, digital signatures
• ``Exotic'' cryptographic constructions: ID-based encryption, fully homomorphic encryption and more
• Ring-based cryptographic reductions and primitives
There are no formal prerequisite classes. However, this course is mathematically rigorous, hence the main requirement is mathematical maturity. Specifically, students should be comfortable with
devising and writing correct formal proofs (and finding the flaws in incorrect ones!), devising and analyzing algorithms and reductions between problems, and working with probability. A previous
course in cryptography (e.g., Applied/Theoretical Cryptography) will be helpful but is not required. No previous familiarity with lattices will be assumed. Highly recommended courses include CS 6505
(Algorithms, Computability and Complexity), CS 6520 (Computational Complexity Theory), CS 6260 (Applied Cryptography), and/or CS 7560 (Theory of Cryptography). The instructor reserves the right to
limit enrollment to students who have the necessary background. | {"url":"http://www.cc.gatech.edu/~cpeikert/lic13/","timestamp":"2014-04-16T17:27:53Z","content_type":null,"content_length":"3681","record_id":"<urn:uuid:8cb4425b-3d8a-4158-85de-e80eeed0df03>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00332-ip-10-147-4-33.ec2.internal.warc.gz"} |
Set STL
05-01-2007 #1
Registered User
Join Date
Mar 2007
Set STL
I have a few questions regarding sets maybe someone can answer them for me. First let me say i am trying to use a set to represent my hash table. First off i would like to know how to make a hash
function of a specific size, such at 256. also i would like to know how to insert a number at a specific index. For example i have a hash function to evaluate that is f(k) = 10 and i have to
create a program to run that then print the hash table. So if i have numbers like 50 100 120 they would all be entered into index 10 of the set but i don't know how to do that maybe someone can
do that.
First let me say i am trying to use a set to represent my hash table.
Bad idea. set is typically a binary tree, and it cannot represent a hash table.
If you want to build your own hash table, use a vector as the base for the buckets.
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
CornedBee is correct, it makes little sense to use a set to hold your hash table. That said, you were asking about how to hold multiple numbers at the same index. If you did want/need to use set
anyway, you would want to use multimap instead of set. That's because you have a key (in this case 10) and a value (in this case 50, 100 or 120) and you are allowed to have multiple entries with
the same key. That's what multiset and multimap allow.
Since it is better to do this with a vector, you still have to handle collisions. One solution is to find the next empty slot after the index you need. So if f(50) is 10 and f(100) is also 10,
but 50 is already at the index 10, then you'll have to put 100 at index 11 (assuming 11 is not full).
A common alternative that I like better would be to store a vector of lists (or a vector of vectors). So at index 10 would be a list that contains 50, 100 and 120. You can just walk that list to
display them or to find a specific one.
its for a project i need to use set
Can you use multiset or multimap?
If not, then I guess you'd have to make your own class that stores the values that hash to the specific key (probably in a vector or list inside the class). You'd then have to make that class
sortable (by the key only of course) before putting it in the set.
05-02-2007 #2
05-02-2007 #3
Registered User
Join Date
Jan 2005
05-02-2007 #4
Registered User
Join Date
Mar 2007
05-02-2007 #5
Registered User
Join Date
Jan 2005
05-02-2007 #6 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/89395-set-stl.html","timestamp":"2014-04-21T16:58:21Z","content_type":null,"content_length":"57696","record_id":"<urn:uuid:2ce30f0d-0910-44b1-8407-8ce675ef5588>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why is the volume conjecture important?
up vote 22 down vote favorite
The volume conjecture, a formula relating hyperbolic volume of a knot complement with the semiclassical limit of a family of coloured Jones polynomials, is widely considered the biggest open problem
in quantum topology. It is one of a large family of conjectures and research programmes which have to do with detecting classical geometry with semiclassical limits.
Embarassing as it is to say in public, I only very partially understand why people care so much about such conjectures.
What fantastic consequences would there be for low dimensional topology if the volume conjecture were proven tomorrow? What if all the related conjectures were proven too? How would it improve
our understanding of classical topology? More broadly, how would it advance mathematics?
quantum-topology 3-manifolds gt.geometric-topology
8 A worst-case concern could be that quantum invariants of knots are a type of one-way function in the sense of crytography. They are readily computable but it might be extremely difficult to
compute strong topological data from them, like say the symmetry group of a knot, or the Gromov norm of a link complement. I see the persistence of the volume conjecture as a suggestion this might
not be the case. – Ryan Budney Sep 1 '10 at 4:13
A blog post on this question: ldtopology.wordpress.com/2011/11/11/… – Daniel Moskovich Nov 11 '11 at 15:03
add comment
2 Answers
active oldest votes
I don't know any consequences for low-dimensional topology. My impression is that it would indicate how powerful TQFT invariants are at distinguishing 3-manifolds or knots. For example, the
up vote volume conjecture (or maybe variants by the Murakamis) would imply that the colored Jones polynomials distinguish knots from the unknot. This corollary is also claimed by Jorgen Andersen.
13 down There are only finitely many hyperbolic knots of a given volume, so the volume conjecture would imply that the colored Jones polynomials are close to distinguishing hyperbolic knots.
2 In particular, if colored Jones polynomials distinguish the unknot then it follows that finite type invariants distinguish the unknot. – Noah Snyder Sep 1 '10 at 4:30
But it would NOT tell you which easy-to-calculate quantum invariant would distinguish between given manifold M and given manifold N. Playing devil's advocate, what use would proving the
power of the set of ALL quantum invariants (let's toss in the coloured HOMFLYPTs as well) be if it doesn't help us to calculate anything concrete? Doesn't the fundamental group (and
peripheral group, if you like) do that already? Also, as you point out, Andersen can prove that the coloured Jones distinguish the unknot, without resorting to anything nearly as heavy
as the volume conjecture. – Daniel Moskovich Sep 1 '10 at 11:47
Regarding Andersen's claim: is there any text relating to this claim, such as a paper outlining a program, slides or notes of a relevant talk, or at least an abstract containing a
written claim? I understand it's an old story (5 years? more?) but people still seem to be enthusiastic about it - apparently for a good reason? @Noah Snyder: is there any reference for
this implication? – Sergey Melikhov Nov 29 '10 at 3:42
@ Sergey: I don't know of any paper, I've only seen him claim it in a colloquium talk and in conversations. He has a way of defining quantum invariants related to SU(2) representations
1 (I think this is joint work with Ueno). He uses asymptotics of Toeplitz operators to show that these invariants detect non-trivial SU(2) reps. Then he claims that these invariants are
equivalent to the Reshetikhin-Turaev invariants, which for Dehn surgery on a knot may be computed by the colored Jones polynomials. Finally, work of Kronheimer-Mrowka imply that
non-trivial Dehn surgeries on knots have SU(2) reps. – Ian Agol Nov 29 '10 at 4:21
add comment
Someone else will have to discuss the applications in topology, but I can point out at least one reason the volume conjecture is interesting.
It's often said that no one knows how to define the functional integral for Chern-Simons theory. This isn't literally true. The Reshetikhin-Turaev construction can be interpreted --
tautologically -- as defining a volume measure on a certain space of functionals. (This is just like in quantum mechanics, where one interprets the kernel $\langle q_i|e^{-Ht}|q_f\rangle$
up vote as the volume of the space of paths $\phi: [0,t] \to \mathbb{R}$ which begin at $q_i$ and end at $q_f$.) What we don't know how to do is define the path integral measure as a continuum
12 down limit of regularized integrals that look like $\frac{1}{Z}e^{iCS(A)}dA$.
The volume conjecture (in particular the version where log of the Jones polynomial looks like vol(3-manifold) plus i times the Chern-Simons functional) tells us that the tautological
measure you get from Reshetikhin-Turaev actually has something to do with the Chern-Simons action!
add comment
Not the answer you're looking for? Browse other questions tagged quantum-topology 3-manifolds gt.geometric-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/37339/why-is-the-volume-conjecture-important?sort=newest","timestamp":"2014-04-19T20:16:45Z","content_type":null,"content_length":"63377","record_id":"<urn:uuid:78787d99-002e-4a63-861e-eb170020f6b8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
March 14th 2012, 03:30 PM #1
Mar 2012
New here
I'm new to the forum. I'm a 21 year old college student trying to become a high school math teacher. School's been going as planned so far, but these proof based math classes are extremely
difficult for me and I need help like no other. Right now I'm taking Intro to Real Analysis. Next semester I'll be taking Advanced Algebra, and Discrete, then hopefully I'll be student teaching
the following semester.
I came here for help, like most, so I could really use it.
I already posted a question I had for my homework, I am lost and in dire need of help.
It's under the applied math section I think.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/new-users/195973-new-here.html","timestamp":"2014-04-18T09:56:24Z","content_type":null,"content_length":"24699","record_id":"<urn:uuid:ea3edc46-df7b-4a7b-97dd-9af8e195e0d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 2005 [00122]
[Date Index] [Thread Index] [Author Index]
Re: Partial diff equations
• To: mathgroup at smc.vnet.net
• Subject: [mg58520] Re: [mg58510] Partial diff equations
• From: Andrzej Kozlowski <akozlowski at gmail.com>
• Date: Tue, 5 Jul 2005 06:34:12 -0400 (EDT)
• References: <200507050557.BAA29453@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On 5 Jul 2005, at 14:57, David Boily wrote:
> I have a not difficult to integrate but huge system of partial
> differential equations that I would never attempt to solve by hand.
> So I
> tried to feed it to mathematica and got the message bellow. I got
> annoyed and
> tested DSolve with a trivial problem only to realize that, apparently,
> mathematica is not very good when it comes to partial diff equations.
> Indeed, how come mathematica can't solve this simple system:
> DSolve[{D[f[x,y],x]==2 x y^2, D[f[x,y],y]==2 x^2 y}, f[x,y], {x, y}]
> the solution is trivial (f[x,y]=x^2 y^2), but if I enter the above
> command I get:
> DSolve::overdet:
> The system has fewer dependent variables than equations, so is
> overdetermined.
> any info would be appreciated,
> Thanks,
> David Boily
> Centre for Intelligent Machines
> McGill University
> Montreal, Quebec
Computer algebra programs are becoming pretty good at solving ODE's
and PDE's numerically (I would rate Mathematica's capabilities in
this field since version 5 as excellent) but they are hopeless at
finding symbolic solutions. I do not know anybody who is using a
computer program for any serious symbolic PDE work. However, first of
all, in my case Mathematica 5.1 does not produce the message you
report; it simply returns the input unevaluated almost immediately,
which means that it does not not know any general methods to apply to
a system of this kind, and it does not fall in into any special cases
it knows how to deal with. It does not however mean that it is
entirely useless in this sort of situation; one can sometimes reduce
such a system to something more palatable to it, but of course human
input is needed.
In your case we first define the system we want to work on:
sys = {D[f[x, y], x] == 2*x*y^2, D[f[x, y], y] == 2*x^2*y};
When you have a system of equations it often helps to replace it
first by a single more general equation. One can often use Eliminate
to make Mathemtica find one:
(message removed)
2*x^2*y == Derivative[0, 1][f][x, y] &&
2*x*y^2 == Derivative[1, 0][f][x, y] &&
x*Derivative[1, 0][f][x, y] ==
y*Derivative[0, 1][f][x, y] &&
Derivative[1, 0][f][x, y]^2 ==
2*y^3*Derivative[0, 1][f][x, y]
The first two equations are just the ones we had already but the
third one looks promising, so we try to DSolve it:
DSolve[v[[3]], f[x, y], {x, y}]
{{f[x, y] -> C[1][x*y]}}
This is certainly progress. Any solutions of the original equations
must be of this form. So we simply define f as
f[x_, y_] := c[x *y]
and now our sys becomes:
sys1=Simplify[sys, x != 0 && y != 0]
{2*x*y == Derivative[1][c][x*y],
2*x*y == Derivative[1][c][x*y]}
O.K., so we see that all we need to do is to DSolve:
DSolve[Union[sys1] /. x*y -> t, c[t], t]
{{c[t] -> t^2 + C[1]}}
hence we get the solution f[x,y]:=x^2*y^2+constant
Obviously this requires a lot of human input and making some good or
lucky choices (if I had chosen v[[4]] instead of v[[3]] I would not
have got anywhere) but occasionally this kind of approach may be
useful even in quite complex cases (I have several examples of this
Andrzej Kozlowski
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Jul/msg00122.html","timestamp":"2014-04-19T19:54:02Z","content_type":null,"content_length":"37772","record_id":"<urn:uuid:efbbf76c-d323-4859-a996-29bb8addd327>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about General on Math ∩ Programming
This week is Spring break at UI Chicago. While I’ll be spending most of it working, it does give me some downtime to reflect. We’ve come pretty far, dear reader, in these almost three years. I
learned, you learned. We all laughed. My blog has become my infinite source of entertainment and an invaluable tool for synthesizing my knowledge.
But the more I write the more ideas I have for articles, and this has been accelerating. I’m currently sitting on 55 unfinished drafts ranging from just a title and an idea to an almost-completed
post. A lot of these ideas have long chains of dependencies (I can’t help myself but write on all the background math I can stomach before I do the applications). So one day I decided to draw up a
little dependency graph to map out my coarse future plans. Here it is:
Now all you elliptic curve fanatics can rest assured I’ll continue working that series to completion before starting on any of these big projects. This map basically gives a rough picture of things
I’ve read about, studied, and been interested in over the past two years that haven’t already made it onto this blog. Some of the nodes represent passed milestones in my intellectual career, while
others represent topics yet to be fully understood. Note very few specific applications are listed here (e.g., what might I use SVM to classify?), but I do have ideas for a lot of them. And note that
these are very long term plans, some of which are likely never to come to fruition.
So nows your chance to speak up. What do you want to read about? What do you think is missing?
Want to make a great puzzle game? Get inspired by theoretical computer science.
Two years ago, Erik Demaine and three other researchers published a fun paper to the arXiv proving that most incarnations of classic nintendo games are NP-hard. This includes almost every Super Mario
Brothers, Donkey Kong, and Pokemon title. Back then I wrote a blog post summarizing the technical aspects of their work, and even gave a talk on it to a room full of curious undergraduate math
But while bad tech-writers tend to interpret NP-hard as “really really hard,” the truth is more complicated. It’s really a statement about computational complexity, which has a precise mathematical
formulation. Sparing the reader any technical details, here’s what NP-hard implies for practical purposes:
You should abandon hope of designing an algorithm that can solve any instance of your NP-hard problem, but many NP-hard problems have efficient practical “good-enough” solutions.
The very definition of NP-hard means that NP-hard problems need only be hard in the worst case. For illustration, the fact that Pokemon is NP-hard boils down to whether you can navigate a vastly
complicated maze of trainers, some of whom are guaranteed to defeat you. It has little to do with the difficulty of the game Pokemon itself, and everything to do with whether you can stretch some
subset of the game’s rules to create a really bad worst-case scenario.
So NP-hardness has very little to do with human playability, and it turns out that in practice there are plenty of good algorithms for winning at Super Mario Brothers. They work really well at
beating levels designed for humans to play, but we are highly confident that they would fail to win in the worst-case levels we can cook up. Why don’t we know it for a fact? Well that’s the $P e NP$
Since Demaine’s paper (and for a while before it) a lot of popular games have been inspected under the computational complexity lens. Recently, Candy Crush Saga was proven to be NP-hard, but the list
doesn’t stop with bad mobile apps. This paper of Viglietta shows that Pac-man, Tron, Doom, Starcraft, and many other famous games all contain NP-hard rule-sets. Games like Tetris are even known to
have strong hardness-of-approximation bounds. Many board games have also been studied under this lens, when you generalize them to an $n \times n$ sized board. Chess and checkers are both what’s
called EXP-complete. A simplified version of Go fits into a category called PSPACE-complete, but with the general ruleset it’s believed to be EXP-complete [1]. Here’s a list of some more classic
games and their complexity status.
So we have this weird contrast: lots of NP-hard (and worse!) games have efficient algorithms that play them very well (checkers is “solved,” for example), but in the worst case we believe there is no
efficient algorithm that will play these games perfectly. We could ask, “We can still write algorithms to play these games well, so what’s the point of studying their computational complexity?”
I agree with the implication behind the question: it really is just pointless fun. The mathematics involved is the very kind of nuanced manipulations that hackers enjoy: using the rules of a game to
craft bizarre gadgets which, if the player is to surpass them, they must implicitly solve some mathematical problem which is already known to be hard.
But we could also turn the question right back around. Since all of these great games have really hard computational hardness properties, could we use theoretical computer science, and to a broader
extent mathematics, to design great games? I claim the answer is yes.
[1] EXP is the class of problems solvable in exponential time (where the exponent is the size of the problem instance, say $n$ for a game played on an $n \times n$ board), so we’re saying that a
perfect Chess or Checkers solver could be used to solve any problem that can be solved in exponential time. PSPACE is strictly smaller (we think; this is open): it’s the class of all problems
solvable if you are allowed as much time as you want, but only a polynomial amount of space to write down your computations. ↑
A Case Study: Greedy Spiders
Greedy spiders is a game designed by the game design company Blyts. In it, you’re tasked with protecting a set of helplessly trapped flies from the jaws of a hungry spider.
In the game the spider always moves in discrete amounts (between the intersections of the strands of spiderweb) toward the closest fly. The main tool you have at your disposal is the ability to
destroy a strand of the web, thus prohibiting the spider from using it. The game proceeds in rounds: you cut one strand, the spider picks a move, you cut another, the spider moves, and so on until
the flies are no longer reachable or the spider devours a victim.
Aside from being totally fun, this game is obviously mathematical. For the reader who is familiar with graph theory, there’s a nice formalization of this problem.
The Greedy Spiders Problem: You are given a graph $G_0 = (V, E_0)$ and two sets $S_0, F \subset V$ denoting the locations of the spiders and flies, respectively. There is a fixed algorithm $A$ that
the spiders use to move. An instance of the game proceeds in rounds, and at the beginning of each round we call the current graph $G_i = (V, E_i)$ and the current location of the spiders $S_i$. Each
round has two steps:
1. You pick an edge $e \in E_i$ to delete, forming the new graph $G_{i+1} = (V, E_i)$.
2. The spiders jointly compute their next move according to $A$, and each spider moves to an adjacent vertex. Thus $S_i$ becomes $S_{i+1}$.
Your task is to decide whether there is a sequence of edge deletions which keeps $S_t$ and $F$ disjoint for all $t \geq 0$. In other words, we want to find a sequence of edge deletions that
disconnects the part of the graph containing the spiders from the part of the graph containing the flies.
This is a slightly generalized version of Greedy Spiders proper, but there are some interesting things to note. Perhaps the most obvious question is about the algorithm $A$. Depending on your tastes
you could make it adversarial, devising the smartest possible move at every step of the way. This is just as hard as asking if there is any algorithm that the spiders can use to win. To make it
easier, $A$ could be an algorithm represented by a small circuit to which the player has access, or, as it truly is in the Greedy Spiders game, it could be the greedy algorithm (and the player can
exploit this).
Though I haven’t heard of the Greedy Spiders problem in the literature by any other name, it seems quite likely that it would arise naturally. One can imagine the spiders as enemies traversing a
network (a city, or a virus in a computer network), and your job is to hinder their movement toward high-value targets. Perhaps people in the defense industry could use a reasonable approximation
algorithm for this problem. I have little doubt that this game is NP-hard [2], but the purpose of this article is not to prove new complexity results. The point is that this natural theoretical
problem is a really fun game to play! And the game designer’s job is to do what game designers love to do: add features and design levels that are fun to play.
Indeed the Greedy Spiders folks did just that: their game features some 70-odd levels, many with multiple spiders and additional tools for the player. Some examples of new tools are: the ability to
delete a vertex of the graph and the ability to place a ‘decoy-fly’ which is (to the greedy-algorithm-following spiders) indistinguishable from a real fly. They player is usually given only one or
two uses of these tools per level, but one can imagine that the puzzles become a lot richer.
[2]: In the adversarial case it smells like it’s PSPACE-complete, being very close to known PSPACE-hard problems like Cops and Robbers and Generalized Geography. ↑
I can point to a number of interesting problems that I can imagine turning into successful games, and I will in a moment, but before I want to make it clear that I don’t propose game developers study
theoretical computer science just to turn our problems into games verbatim. No, I imagine that the wealth of problems in computer science can serve as inspiration, as a spring board into a world of
interesting gameplay mechanics and puzzles. The bonus for game designers is that adding features usually makes problems harder and more interesting, and you don’t need to know anything about proofs
or the details of the reductions to understand the problems themselves (you just need familiarity with the basic objects of consideration, sets, graphs, etc).
For a tangential motivation, I imagine that students would be much more willing to do math problems if they were based on ideas coming from really fun games. Indeed, people have even turned the
stunningly boring chore of drawing an accurate graph of a function into a game that kids seem to enjoy. I could further imagine a game that teaches programming by first having a student play a game
(based on a hard computational problem) and then write simple programs that seek to do well. Continuing with the spiders example they could play as the defender, and then switch to the role of the
spider by writing the algorithm the spiders follow.
But enough rambling! Here is a short list of theoretical computer science problems for which I see game potential. None of them have, to my knowledge, been turned into games, but the common features
among them all are the huge potential for creative extensions and interesting level design.
Graph Coloring
Graph coloring is one of the oldest NP-complete problems known. Given a graph $G$ and a set of colors $\{ 1, 2, \dots, k \}$, one seeks to choose colors for the vertices of $G$ so that no edge
connects two vertices of the same color.
Now coloring a given graph would be a lame game, so let’s spice it up. Instead of one player trying to color a graph, have two players. They’re given a $k$-colorable graph (say, $k$ is 3), and they
take turns coloring the vertices. The first player’s goal is to arrive at a correct coloring, while the second player tries to force the first player to violate the coloring condition (that no
adjacent vertices are the same color). No player is allowed to break the coloring if they have an option. Now change the colors to jewels or vegetables or something, and you have yourself an
award-winning game! (Or maybe: Epic Cartographer Battles of History)
An additional modification: give the two players a graph that can’t be colored with $k$ colors, and the first player to color a monochromatic edge is the loser. Add additional move types (contracting
edges or deleting vertices, etc) to taste.
Art Gallery Problem
Given a layout of a museum, the art gallery problem is the problem of choosing the minimal number of cameras so as to cover the whole museum.
This is a classic problem in computational geometry, and is well-known to be NP-hard. In some variants (like the one pictured above) the cameras are restricted to being placed at corners. Again, this
is the kind of game that would be fun with multiple players. Rather than have perfect 360-degree cameras, you could have an angular slice of vision per camera. Then one player chooses where to place
the cameras (getting exponentially more points for using fewer cameras), and the opponent must traverse from one part of the museum to the other avoiding the cameras. Make the thief a chubby pig
stealing eggs from birds and you have yourself a franchise.
For more spice, allow the thief some special tactics like breaking through walls and the ability to disable a single camera.
This idea has of course been the basis of many single-player stealth games (where the guards/cameras are fixed by the level designer), but I haven’t seen it done as a multiplayer game. This also
brings to mind variants like the recent Nothing to Hide, which counterintuitively pits you as both the camera placer and the hero: you have to place cameras in such a way that you’re always in vision
as you move about to solve puzzles. Needless to say, this fruit still has plenty of juice for the squeezing.
Pancake Sorting
Pancake sorting is the problem of sorting a list of integers into ascending order by using only the operation of a “pancake flip.”
Just like it sounds, a pancake flip involves choosing an index in the list and flipping the prefix of the list (or suffix, depending on your orientation) like a spatula flips a stack of pancakes.
Now I think sorting integers is boring (and it’s not NP-hard!), but when you forget about numbers and that one special configuration (ascending sorted order), things get more interesting. Instead,
have the pancakes be letters and have the goal be to use pancake flips to arrive at a real English word. That is, you don’t know the goal word ahead of time, so it’s the anagram problem plus finding
an efficient pancake flip to get there. Have a player’s score be based on the number of flips before a word is found, and make it timed to add extra pressure, and you have yourself a classic!
The level design then becomes finding good word scrambles with multiple reasonable paths one could follow to get valid words. My mother would probably play this game!
Bin Packing
Young Mikio is making sushi for his family! He’s got a table full of ingredients of various sizes, but there is a limit to how much he can fit into each roll. His family members have different
tastes, and so his goal is to make everyone as happy as possible with his culinary skills and the options available to him.
Another name for this problem is bin packing. There are a collection of indivisible objects of various sizes and values, and a set of bins to pack them in. Your goal is to find the packing that
doesn’t exceed the maximum in any bin and maximizes the total value of the packed goods.
I thought of sushi because I recently played a ridiculously cute game about sushi (thanks to my awesome friend Yen over at Baking And Math), but I can imagine other themes that suggest natural
modifications of the problem. The objects being packed could be two-dimensional, there could be bonuses for satisfying certain family members (or penalties for not doing so!), or there could be a
super knife that is able to divide one object in half.
I could continue this list for quite a while, but perhaps I should keep my best ideas to myself in case any game companies want to hire me as a consultant. :)
Do you know of games that are based on any of these ideas? Do you have ideas for features or variations of the game ideas listed above? Do you have other ideas for how to turn computational problems
into games? I’d love to hear about it in the comments.
Until next time!
Introducing Elliptic Curves
With all the recent revelations of government spying and backdoors into cryptographic standards, I am starting to disagree with the argument that you should never roll your own cryptography. Of
course there are massive pitfalls and very few people actually need home-brewed cryptography, but history has made it clear that blindly accepting the word of the experts is not an acceptable course
of action. What we really need is more understanding of cryptography, and implementing the algorithms yourself is the best way to do that. [1]
For example, the crypto community is quickly moving away from the RSA standard (which we covered in this blog post). Why? It turns out that people are getting just good enough at factoring integers
that secure key sizes are getting too big to be efficient. Many experts have been calling for the security industry to switch to Elliptic Curve Cryptography (ECC), because, as we’ll see, the problem
appears to be more complex and hence achieves higher security with smaller keys. Considering the known backdoors placed by the NSA into certain ECC standards, elliptic curve cryptography is a hot
contemporary issue. If nothing else, understanding elliptic curves allows one to understand the existing backdoor.
I’ve seen some elliptic curve primers floating around with all the recent talk of cryptography, but very few of them seem to give an adequate technical description [2], and legible implementations
designed to explain ECC algorithms aren’t easy to find (I haven’t found any).
So in this series of posts we’re going to get knee deep in a mess of elliptic curves and write a full implementation. If you want motivation for elliptic curves, or if you want to understand how to
implement your own ECC, or you want to understand the nuts and bolts of an existing implementation, or you want to know some of the major open problems in the theory of elliptic curves, this series
is for you.
The series will have the following parts:
1. Elliptic curves over finite fields
1. Finite fields primer (just mathematics)
2. Elliptic curve cryptography and random number generation
Along the way we’ll survey a host of mathematical topics as needed, including group theory, projective geometry, and the theory of cryptographic security. We won’t assume any familiarity with these
topics ahead of time, but we do intend to develop some maturity through the post without giving full courses on the side-topics. When appropriate, we’ll refer to the relevant parts of the many
primers this blog offers.
A list of the posts in the series (as they are published) can be found on the Main Content page. And as usual all programs produced in the making of this series will be available on this blog’s
Github page.
The first post will be published on Monday 2014-02-10. Hope you enjoy it!
[1] Okay, what people usually mean is that you shouldn’t use your own cryptography for things that actually matter, but I think a lot of the warnings are interpreted or extended to, “Don’t bother
implementing cryptographic algorithms, just understand them at a fuzzy high level.” I imagine this results in fewer resources for people looking to learn cryptography and the mathematics behind it,
and at least it prohibits them from appreciating how much really goes into an industry-strength solution. And this mindset is what made the NSA backdoor so easy: the devil was in the details. ↑
[2] From my heavily biased standpoint as a mathematician. ↑
Thinking about Graduate School? Consider Mathematical Computer Science at UI Chicago!
It’s that time of year where senior undergraduates are considering whether to go to graduate school. And I wouldn’t be surprised if many students were afraid of the prospect, perhaps having read that
popular genre of articles these days that tell you graduate school will turn you into an emotional wreck and that only a psychopathic masochist would put themselves through it.
The problem with these articles is they’re usually written by both outliers and those who put themselves in situations with no other options. I’ve felt my time at UI Chicago, however, has provided me
nothing but options and excitement! So if you’re thinking about graduate school in mathematics or theoretical computer science, here’s my pitch for
Why you should come to UI Chicago and study theoretical computer science
We’re social.
In fact, UI Chiago’s mathematics department is the most social of any math department I’ve ever heard of. I think this is the biggest benefit for me. On my first day here, I was surprised that
everyone was totally normal and not the typical weird antisocial stereotype one associates with people who like math. Our department has a huge list of seminars going on every day of the week, and a
small party every Friday called “Tea” that has a large attendance. We often go out to bars and restaurants, and have other outings. We even have a Facebook group (for grad students only) and a ping
pong league that the professors sometimes join. We currently have over 150 graduate students in our department, and I know around 70 by name.
We have world-class faculty.
Some of my colleagues came to UIC specifically to work with David Marker on model theory, or Lou Kauffman on knot theory. At least one researcher here has over two hundred publications! We have big
names in algebraic geometry, hypergraph combinatorics, dynamical systems, low-dimensional topology, and a very active logic group. Our theoretical computer science group (mixed with our combinatorics
group) is small but vibrant and growing fast. We just got three new mathematical computer science students this year, and I’m doing everything I can to convert some of the other students over to our
We’re in the middle of a thriving intellectual community.
Chicago is the center of the Midwest US, and there are a ton of universities not only in the city but within a few hours drive. There are regular seminars and colloquia at the University of Chicago,
Northwestern, and other smaller institutions like the Toyota Technical Institute (which has very strong researchers). Then there are the universities of Wisconsin, Indiana, and Michigan which all
have strong theoretical computer science groups (and of course other mathematics groups) and we get together for conferences like Midwest Theory Day.
Our department is not cutthroat competitive.
I hear rumors about top mathematics and computer science programs that (unintentionally) pit students against each other for the attention of a few glorified professors. That simply doesn’t happen
here. Everyone is friendly and people regularly collaborate. You can approach any professor and ask to do a reading course with them or ask them what kinds of open problems they’re thinking about,
and most of them will gladly sit down with you and explain all the neat ideas in their heads. Even the hardest, most sarcastic professors genuinely care about their students. I think, along with
being social, this makes our department one of the friendliest and most stress-free places to get a PhD.
We’re in a great city.
Chicago is really fun! I don’t know what else to say about this.
Our department staff is very supportive.
Our director and assistant director of graduate studies are extremely helpful at getting new students situated and ensuring they have funding. It’s not uncommon for students who start in the PhD
program to decide after one or two years that a PhD is not right for them. Usually they will stop with the requirements for a master’s degree, and there are no hard feelings. Students who do this are
even encouraged to return if they decide they want to finish their PhD later. In the mean time, our department guarantees tuition waivers and stipends to all of its teaching assistants (and there are
alternatives to teaching as well), so you can focus on your studies and not have to think too much about money.
And even more, if you decide to study theoretical computer science at UI Chicago you get a whole bunch of other benefits:
You get to hang out and do research with me!
(Okay maybe that’s not a serious benefit to consider)
Your post-grad school job opportunities widen.
Jobs are hard to come by for the purest of pure mathematics researchers. Research positions are in short supply, and unless you want to go into industry with an applied math degree the remaining
option is to teach at a 4-year institution. But if you study theoretical computer science, now you are qualified to do all kinds of things. Work at industry research labs like Microsoft Research,
Google Research, or Yahoo! Research. Work at government labs like Lincoln Labs and Lawrence Livermore National Labs, both of which I interned at. You can shoot for a professorship or do a postdoc
like a regular mathematics PhD would. If you’re hand with Python you could go into the software industry and get a high-demand job at any major company in cryptography or operations research (both of
which depend on ideas from TCS). And you always keep the option of teaching at a 4-year.
You have many options for internships during summers.
I, my colleagues, and even my advisor did research internships during the Summers at various research labs and industry companies. This is a particularly nice benefit of doing mathematical computer
science in grad school, because it augments your normal graduate student stipend by enough to live much more comfortably than otherwise (that being said, for extra money a lot of my pure math
colleagues will tutor on the side, and tutoring comes at a high price these days). It’s not uncommon to receive additional funding through these opportunities as well.
You get to travel a lot.
The main publication venue in computer science is the conference, and that means there are conferences happening all over the world all the time. In fact, I just got back from a conference in Aachen,
Germany, earlier this year I was at Berkeley and Stanford, I am helping to run a conference in Florida early next year, and I am looking at conferences in Beijing and Barcelona next Summer. All of
the trips you take to present your published research is paid for, so it’s just pure awesome.
You enjoy the breadth of problems in computer science.
Computer science is unique in that it connects to almost every field of mathematics.
1. Like statistics? There’s statistical machine learning and randomized algorithm design.
2. Like real analysis and dynamical systems? There’s convex optimization, support vector machines, and tons of computational aspects of PDE’s.
3. Like algebra or number theory? There’s cryptography.
4. Like combinatorics? There’s combinatorial optimization.
5. Like game theory? I just got back from a conference on algorithmic game theory.
6. Like geometry and representation theory? There’s a Geometric Complexity Theory program working toward P vs NP.
7. Like logic? You might be surprised to know that the cleanest proofs of the incompleteness theorems are via Turing machines.
8. Like topology? There are researchers (not at UIC) working on computational topology, like persistent homology which we’ve been slowly covering on this blog.
The list just goes on and on, and this isn’t even mentioning the purely pure theoretical computer science topics which have a flavor of their own.
Programming options exist, but you aren’t forced to write programs.
Some of the greatest computer science researchers cannot write simple computer programs, and if you’re just interested in theory there is plenty of theory to go around. On the other hand, we have
researchers in our department studying aspects of supercomputing, and options for collaboration with researchers in the (engineering) computer science department. Over there they’re studying things
like biological networks, machine learning and robotics, and all kinds of hands-on applied stuff that you might be interested in if you read this blog.
So if you’re interested in joining us for next year and have any questions, feel free to drop me or the professors in the MCS group or the director of graduate studies an email. | {"url":"http://jeremykun.com/category/general/","timestamp":"2014-04-16T13:54:27Z","content_type":null,"content_length":"112471","record_id":"<urn:uuid:03bc69d8-52ca-42be-babf-eeaaeafe072c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
The attempt to load metrics for this article has failed.
The attempt to plot a graph for these metrics has failed.
Interactions between active particles and dynamical structures in chaotic flow
(a) Snapshot of one unit cell of the flow field at zero phase (that is, for t such that sin Ωt = 0). Arrows show the local fluid velocity, and the shading shows the vorticity. As time progresses, the
flow field oscillates sinusoidally in the horizontal direction. (b) Poincare section (at zero phase) for a fluid element initially in the chaotic sea, for B = 0.12 and Ω = 6.28. Only one quarter of
the unit cell is shown (corresponding to the lower-left vortex in (a)); the rest of the unit cell is related by symmetry. The central empty region is a period-1 elliptic island; the surrounding empty
regions are a period-3 island chain. (c) Finite-time Lyapunov exponent (FTLE) field at zero phase. Again, only one quarter of the unit cell is shown.
Chaotic diffusion coefficient D normalized by D 0, the diffusion coefficient for fluid elements, as a function of the swimming speed v s , for spherical swimmers with σ s = σ r = 0. Error bars are
computed from the statistical fluctuations between many sets of simulations. Two distinct regions of suppressed long-time transport are seen, corresponding to trapping by the period-3 islands and by
the period-1 islands.20
Chaotic diffusion coefficient D normalized by D 0 as a function of σ s , for v s = 0, σ r = 0, and α = 0. When compared with Fig. 2, transport is more strongly suppressed, but the distinct signatures
of each type of elliptic island are no longer present.
Average time for a swimmer to cross a cell boundary as a function of its spatial location for (a) v s = 0.004 and σ s = σ r = 0 and (b) σ s = 0.15 and v s = σ r = 0. Only one quarter of the flow
domain is shown. The shade/color bar gives the cell-crossing time in flow cycles. For the deterministic swimmer in (a), trapping is strong in the period-3 islands and on a small ring just inside the
period-1 island. The purely stochastic swimmer can wander into the core of the period-1 island, where trapping is strongest.
Chaotic diffusion coefficients D in the two-dimensional parameter space spanned by v s and σ s for σ r = 0. The shade/color bar shows D relative to D 0. The black line separates the regions of
suppressed transport (D/D 0 < 1) from those of enhanced transport (D/D 0 > 1).
Chaotic diffusion coefficients D in the two-dimensional parameter space spanned by v s and σ r for σ s = 0. The shade/color bar shows D relative to D 0. The black line separates the regions of
suppressed transport (D/D 0 < 1) from those of enhanced transport (D/D 0 > 1). Transport is suppressed in larger region of parameter space than it was for purely translational stochasticity case in
Fig. 5.
Spatially resolved maps of the average time to cross a cell boundary for v s = 0.004, and σ r = (a) 0.1, (b) 1.0, (c) 3.0, and (d) 6.0. σ s = 0 for all panels. The shade/color bar gives the
cell-crossing times in flow cycles. The times are much longer than the comparable v s = 0.004, σ r = 0 case shown in Fig. 4(a).
(a) Chaotic diffusion coefficient D normalized by D 0 as a function of eccentricity α for deterministic swimmers with v s = 0.002. Transport is much more strongly suppressed for ellipsoids of
intermediate eccentricities than it is for spheres. (b)–(d) Probability density functions of swimmer position for (b) α = 0, (c) α = 0.5, and (d) α = 1. The strong suppression of transport for
ellipsoidal particles is due to the formation of attractors.
(a) Chaotic diffusion coefficient D normalized by D 0 as a function of eccentricity α for deterministic swimmers with v s = 0.08. For this speed, transport of ellipsoidal particles is strongly
enhanced relative to spheres. (b)–(d) Probability density functions of swimmer position for (b) α = 0, (c) α = 0.5, and (d) α = 1. The strong enhancement of transport is due to the clustering of
ellipsoids on the stable manifolds of the hyperbolic fixed points.
Mean nearest neighbor distance δ NN scaled by the value for passive particles δ0 as a function of eccentricity for (a) v s = 0.002 and (b) v s = 0.08. High aspect ratio swimmers tend to be closer to
each than spherical swimmers are, leading to enhanced encounter rates.
Scitation: Interactions between active particles and dynamical structures in chaotic flow | {"url":"http://scitation.aip.org/content/aip/journal/pof2/24/9/10.1063/1.4754873","timestamp":"2014-04-17T22:15:45Z","content_type":null,"content_length":"83584","record_id":"<urn:uuid:d72f10c2-0fbc-4ff5-a2bb-a2b67003fe18>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Use the figure to find the measures. Area of base = Lateral area = Total area = Volume =
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fed04a3e4b0bbec5cfc8875","timestamp":"2014-04-17T21:48:17Z","content_type":null,"content_length":"45202","record_id":"<urn:uuid:050deebc-9619-4b2c-b51c-4bad89dea699>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interior Angles in Convex Polygons
6.1: Interior Angles in Convex Polygons
Difficulty Level:
At Grade
Created by: CK-12
Practice Interior Angles in Convex Polygons
Below is a picture of Devil’s Post pile, near Mammoth Lakes, California. These posts are cooled lava (called columnar basalt) and as the lava pools and cools, it ideally would form regular hexagonal
columns. However, variations in cooling caused some columns to either not be perfect or pentagonal.
First, define regular in your own words. Then, what is the sum of the angles in a regular hexagon? What would each angle be? After completing this Concept you'll be able to answer questions like
Watch This
CK-12 Foundation: Chapter6InteriorAnglesinConvexPolygonsA
Watch the first half of this video.
James Sousa: Angles of Convex Polygons
Recall that interior angles are the angles inside a closed figure with straight sides. As you can see in the images below, a polygon has the same number of interior angles as it does sides.
A diagonal connects two non-adjacent vertices of a convex polygon. Also, recall that the sum of the angles in a triangle is $180^\circ$
Investigation: Polygon Sum Formula
Tools Needed: paper, pencil, ruler, colored pencils (optional)
1. Draw a quadrilateral, pentagon, and hexagon.
2. Cut each polygon into triangles by drawing all the diagonals from one vertex. Count the number of triangles.
Make sure none of the triangles overlap.
3. Make a table with the information below.
Name of Polygon Number of Sides Number of $\triangle s$ (Column 3) $\times$$^\circ$$\triangle$ Total Number of Degrees
Quadrilateral 4 2 $2 \times 180^\circ$ $360^\circ$
Pentagon 5 3 $3 \times 180^\circ$ $540^\circ$
Hexagon 6 4 $4 \times 180^\circ$ $720^\circ$
4. Do you see a pattern? Notice that the total number of degrees goes up by $180^\circ$$n$$n - 2$$(n - 2) \times 180^\circ$
Polygon Sum Formula: For any $n-$$(n - 2) \times 180^\circ$
A regular polygon is a polygon where all sides are congruent and all interior angles are congruent.
Regular Polygon Formula: For any equiangular $n-$$\frac{(n-2)\times 180^\circ}{n}$
Example A
Find the sum of the interior angles of an octagon.
Use the Polygon Sum Formula and set $n = 8$
$(8 - 2) \times 180^\circ = 6 \times 180^\circ = 1080^\circ$
Example B
The sum of the interior angles of a polygon is $1980^\circ$
Use the Polygon Sum Formula and solve for $n$
$(n - 2) \times 180^\circ & = 1980^\circ\\180^\circ n - 360^\circ & = 1980^\circ\\180^\circ n & = 2340^\circ\ & = 13 \qquad \text{The polygon has} \ 13 \ \text{sides.}$
Example C
How many degrees does each angle in an equiangular nonagon have?
First we need to find the sum of the interior angles in a nonagon, set $n = 9$
$(9 - 2) \times 180^\circ = 7 \times 180^\circ = 1260^\circ$
Second, because the nonagon is equiangular, every angle is equal. Dividing $1260^\circ$$140^\circ$
Watch this video for help with the Examples above.
CK-12 Foundation: Chapter6InteriorAnglesinConvexPolygonsB
Concept Problem Revisited
A regular polygon has congruent sides and angles. A regular hexagon has $(6-2)180^\circ=4\cdot180^\circ=720^\circ$$720^\circ$$120^\circ$
The interior angle of a polygon is one of the angles on the inside. A regular polygon is a polygon that is equilateral (has all congruent sides) and equiangular (has all congruent angles).
Guided Practice
1. Find the measure of $x$
2. The interior angles of a pentagon are $x^\circ, x^\circ, 2x^\circ, 2x^\circ,$$2x^\circ$$x$
3. What is the sum of the interior angles in a 100-gon?
1. From the Polygon Sum Formula we know that a quadrilateral has interior angles that sum to $(4-2) \times 180^\circ=360^\circ$
Write an equation and solve for $x$
$89^\circ + (5x - 8)^\circ + (3x + 4)^\circ + 51^\circ & = 360^\circ\\8x & = 224\\x & = 28$
2. From the Polygon Sum Formula we know that a pentagon has interior angles that sum to $(5-2) \times 180^\circ=540^\circ$
Write an equation and solve for $x$
$x^\circ + x^\circ + 2x^\circ + 2x^\circ + 2x^\circ&=540^\circ\\ 8x&=540\\x&=67.5$
3. Use the Polygon Sum Formula. $(100-2) \times 180^\circ=17,640^\circ$
1. Fill in the table.
# of sides Sum of the Interior Angles Measure of Each Interior Angle in a Regular $n-$
3 $60^\circ$
4 $360^\circ$
5 $540^\circ$ $108^\circ$
6 $120^\circ$
2. What is the sum of the angles in a 15-gon?
3. What is the sum of the angles in a 23-gon?
4. The sum of the interior angles of a polygon is $4320^\circ$
5. The sum of the interior angles of a polygon is $3240^\circ$
6. What is the measure of each angle in a regular 16-gon?
7. What is the measure of each angle in an equiangular 24-gon?
8. Each interior angle in a regular polygon is $156^\circ$
9. Each interior angle in an equiangular polygon is $90^\circ$
For questions 10-18, find the value of the missing variable(s).
19. The interior angles of a hexagon are $x^\circ, (x + 1)^\circ, (x + 2)^\circ, (x + 3)^\circ, (x + 4)^\circ,$$(x + 5)^\circ.$$x$
Files can only be attached to the latest version of Modality | {"url":"http://www.ck12.org/book/CK-12-Geometry-Concepts/r2/section/6.1/","timestamp":"2014-04-16T10:26:41Z","content_type":null,"content_length":"156376","record_id":"<urn:uuid:06699eee-20eb-4555-b58f-628369d12a00>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Book Value Per Share
Book Value Per Share
Book Value Per Share - The book value per share is calculated based on the book value of a company, which is then divided by out many shares the company has outstanding.
Book Value per share (BV)
(Equity Capital + Reserves & Surplus)/Shares Outstanding
Book Value per share is derived from the Shareholders' Equity or Net Worth of the Company divided by the number of equity shares outstanding ...
book value per share " the assets of a company, minus the liabilities, divided by the number of shares outstanding; this is one method of gauging the true value of shares
bulls " investors who believe that the stock market will go up ...
Book Value Per Share:
Net asset worth of a company's common stock.
Box Spread:
A four-sided option spread that involves a long call and a short put at one strike price as well as a short call and a long put at another strike price. E.g.
Book Value Per Share. A company's book value is a price ratio calculated by dividing total net assets (assets minus liabilities) by total shares outstanding.
Book Value per Share - A company's book value divided by its shares outstanding. Book value per share reflects accounting valuation but not necessarily market valuation.
Book Value Per Share - The per-share value of a stock based on the figures shown on a firm's balance sheet. This value is typically less than a stock's market price. Some analysts view book value per
share as a "price floor" for a stock.
Book value per share
The ratio of stockholder's equity to the average number of common shares.
Book value per share: Total book value divided by the number of shares outstanding. Measured as a percentage change as of the annual Index screening date compared to the prior 12 months. Higher
values indicate greater growth orientation.
Book value per share (BVPS) : Book value of common equity / Common shares outstanding at balance sheet date.
Book value per share = Book value / Total common shares outstanding
Assets ...
Book value per share is often calculated for you in the various Internet financial stock search programs available.
BOOK VALUE PER SHARE. A share of stock's equity value, computed by dividing a company's net worth (assets minus liabilities) by the number of shares outstanding.
Book Value per Share = (Total Shareholder Equity - Preferred Equity) / Common Shares Outstanding
Market to Book = Total Market Capitalization / Total Book Value ...
What Is a Book Value Per Share?
What Are Blank Stock Certificates?
What Are the Best Tips for Selling Stock Certificates?
beta lower than 1.0 indicates that the stock will usually change to a lesser extent than that of the market. The higher the beta, the greater the investment risk.) bid price: The price one is willing
to pay for a security book value per ...
Book profit Book runner Book to bill Book to market Book value Book value per share Book-Entry Book-entry securities Bootstrap Bootstrapping Borrow Borrowed reserves Borrower fallout Boston ...
The book value of a company may be divided by the number of outstanding shares of common stock to get the book value per share of common stock. [OTS] book value per share The ratio of stockholder
equity to the average number of common shares.
book value per share (investment & finance)
bookkeeping (investment & finance)
bookrunner (investment & finance)
booster shot report (investment & finance)
booths (investment & finance)
bootstrap (investment & finance) ...
Dilution occurs when a company issues additional shares of stock, and as a result the earnings per share and the book value per share decline.
Often of interest to value investors, the book value per share ratio is an expression of how much in actual value would be left for each share if the company went out of business.
This overall figure is commonly expressed as book value per share, which divides shareholders' equity by the number of common shares outstanding. Book Value per share is located in the statistical
array on the Value Line page.
Price to Book = Current Market Price/Book Value per share
Although price to book ratio still has some utility today, the world has changed since Ben Graham's day.
P/B is the ratio of a stock's price to its book value per share. A stock selling at a high PB ratio, such as 3 or higher, may represent a popular growth stock with minimal book value.
Usually Book Value Per Share. Calculated by dividing the Net Worth of a Company (common stock plus retained earnings) by the number of shares outstanding.
Market price per share / Book value per share.
Market price of the share and book values for any listed company are available straight from financial web sites. So there is no need to compute it.
Price to book value ratio (P/B or PBV) = Price of stock / Book value per share.
Price to book value or PBV describe how big the market value of respect the book shares a company.
The ratio of a stock's latest closing price divided by its book value per share. Book value per share is obtained by dividing the book value (total assets minus total liabilities) by total shares
current closing price of the stock as a percentage of the latest book value per share), and even a low price-to-earnings ratio (i.e. current share price as a percentage of its per share earnings).
Valuation of illiquid and unlisted and/or thinly traded shares/debentures: For shares, this could be the book value per share or an estimated market price based on performance of other shares in the
Book value The assessed value of a company`s assets. ("Book value per share," which is frequently used in assessing the potential value of a company`s stock, is defined as the per-share assessed
value of a company`s assets.
Price/Book Ratio
A ratio of the price of a stock to its company's book value per share. Companies that are older, slower-growing, or depressed in price because of poor current earnings performance generally sell at
low price/book value.
Effect on earnings per share and book value per share if all convertible securities were converted and all warrants and stock options were exercised.
book value per share (business definition)
bookkeeper (business definition)
bookmark (business definition)
bookrunner (business definition)
boomerang method (business definition)
boondoggle (business definition)
boot (business definition) ...
Price-to-earnings ratios (P/E) below a certain absolute limit.
Dividend yields above a certain absolute limit.
Book value per share at a certain level relative to the share price.
Market-book ratio
Market price of a share divided by book value per share.
Market break
See: Break ...
Closing price of the stock on the last trading day of the fiscal year dividend by the fiscal year book value per share. Book value is the same figure as common stock equity from the 10-Q or 10-K.
Price/Earnings ...
Common stock ratios
Ratios that are designed to measure the relative claims of stockholders to earnings (cash flow per share), and equity (book value per share) of a firm.
Share, Book Value, Book, Stock, Market | {"url":"http://en.mimi.hu/stockmarket/book_value_per_share.html","timestamp":"2014-04-21T05:09:28Z","content_type":null,"content_length":"30795","record_id":"<urn:uuid:011f1aff-d3b8-48d8-9fa6-e0b5bb8b9036>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
kotarak / monad - 2a42029
Move monad.clj up one directory for AOT
Tip: Filter by directory path e.g. /media app.js to search for public/media/app.js.
Tip: Use camelCasing e.g. ProjME to search for ProjectModifiedEvent.java.
Tip: Filter by extension type e.g. /repo .js to search for all .js files in the /repo directory.
Tip: Separate your search with spaces e.g. /ssh pom.xml to search for src/ssh/pom.xml.
Tip: Use ↑ and ↓ arrow keys to navigate and return to view the file.
Tip: You can also navigate files with Ctrl+j (next) and Ctrl+k (previous) and view the file with Ctrl+o.
Tip: You can also navigate files with Alt+j (next) and Alt+k (previous) and view the file with Alt+o. | {"url":"https://bitbucket.org/kotarak/monad/commits/2a42029764175aace6f5919d9a9d39a8282e0e36?at=tip","timestamp":"2014-04-18T02:45:13Z","content_type":null,"content_length":"148756","record_id":"<urn:uuid:efe8ae8c-c750-49d8-a221-a3ba5dc7ba41>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Watertown, MA Algebra Tutor
Find a Watertown, MA Algebra Tutor
...In addition, I tutor and have tutored in mathematics from elementary school to high school and throughout college. I have a good command of the material, even well beyond the regents
themselves. I've programmed extensively in Mathematica.
47 Subjects: including algebra 1, algebra 2, reading, chemistry
...I have MS degrees in Physics (University of Stuttgart, Stuttgart - Germany and in Electrical Engineering (University of Florida,Gainesville - Florida. I am certified in Physics and
Mathematics. I taught eight years High school Physics.
6 Subjects: including algebra 1, physics, trigonometry, precalculus
...NOW IS THE TIME - Algebra 1 introduced many topics and skills which will be PRACTICED A LOT in Algebra 2. If your child has come this far but still has problems with math skills, it's not too
late - she/he can catch up and even become fond of math! Algebra 2 is very important to master, as there are 2 or more years of math in the future.
9 Subjects: including algebra 1, algebra 2, geometry, SAT math
I have worked over 20 years as an engineer. I understand how science and math are used in industry. I like to help students understand the importance of trying to determine if answers make sense.
10 Subjects: including algebra 1, algebra 2, physics, calculus
...To name a few: congenital encephalopathy, seizure disorder, developmental delay, neurogenic bladder and COPD. Further, I worked as social worker in a pediatric nursing care facility. All 80 of
my patients were medically and cognitively impaired.
45 Subjects: including algebra 1, algebra 2, chemistry, reading
Related Watertown, MA Tutors
Watertown, MA Accounting Tutors
Watertown, MA ACT Tutors
Watertown, MA Algebra Tutors
Watertown, MA Algebra 2 Tutors
Watertown, MA Calculus Tutors
Watertown, MA Geometry Tutors
Watertown, MA Math Tutors
Watertown, MA Prealgebra Tutors
Watertown, MA Precalculus Tutors
Watertown, MA SAT Tutors
Watertown, MA SAT Math Tutors
Watertown, MA Science Tutors
Watertown, MA Statistics Tutors
Watertown, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/watertown_ma_algebra_tutors.php","timestamp":"2014-04-16T07:39:54Z","content_type":null,"content_length":"23870","record_id":"<urn:uuid:2a86d3ad-3267-46a3-b36d-bb6d9ebda9a7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cylinder and motor speed calculations
I'm working on the inverse kinematics and such on a robot for a school project.
The robot has three arms of almost equal lengths.
I'm using Excel for my calculations.
I have worked out the inverse kinematics, Jacobi for "joint velocities" (excuse my English), and required cylinder lengths.
Now I have to work out the following:
My input will be any desired "wrist velocity", any position data, any static force on the wrist
Cylinder velocity and required oil flow and pressure to achieve that speed
Motor velocity (to run the pump and achieve necessary oil flow)
Static force in all joints and cylinder actuators
I have my lecture notes but I'm having a hard time getting this right.
Any help is greatly appreciated!! | {"url":"http://www.societyofrobots.com/robotforum/index.php?topic=7826.0","timestamp":"2014-04-17T22:01:47Z","content_type":null,"content_length":"43281","record_id":"<urn:uuid:fd1130e3-993d-428c-b51a-bba57dec67ac>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Cosmic Energy Inventory - M. Fukugita &
P.J.E. Peebles
2.8. Products of Stellar Evolution
It is in principle straightforward to compute the integrated outputs of stellar evolution - energy, neutrinos, helium, and heavy elements - given models for the IMF, the star formation history, and
stellar evolution. Since the details of the results of stellar evolution computations are not easily assembled, we use approximate estimates by procedures similar to those developed in Section 2.3.1
and 2.3.2 for the stellar population and its evolution. The results add to the checks of consistency of our estimates of the stellar production of helium and heavy elements and the resulting total
energy release, and are used to estimate the inventory entries for the neutrino cosmic energy density.
All stars with masses m < 1 m[] are still on the main sequence. We assume that on average 5% of the hydrogen in these subsolar stars has been consumed, with energy production efficiency 0.0071. Most
of the stars with masses m > 1 m[] have already undergone full evolution and left compact remnants, while the fraction ~ (m / m[])^-2.5 is still on the main sequence. We assume that in the latter
stars on average 5% of the hydrogen has been consumed, as for subsolar stars. For the evolved stars we do not attempt to follow the details of nuclear burning and mass loss. Instead, we adopt
estimates of the nuclear fuel consumed or mass lost in a few discrete stages of evolution, in a similar fashion to the approach used in Section 2.3.1 to tally stellar remnants.
When the amount of hydrogen consumed in stellar burning is 10% of the mass of a star it leaves the main sequence. In the model in Section 2.3.1, stars with masses in the range 1 < m < 8m[] eventually
produce white dwarfs that mainly consist of a carbon-oxygen core. In standard stellar evolution models, hydrogen burning extends outward to a shell after core hydrogen exhaustion, and helium burning
similarly continues in a shell after core helium exhaustion. That leaves a CO core with the mass given by equation (69). The helium layer outside the CO core is thin for m < 2.2 m[], for which helium
ignition takes place as a flash, but in stars with masses m > 2.2 m[] a significant amount of helium is produced outside the core, transported by convection to the envelope, and liberated. A 5 m[]
star liberates 0.4 m[] of helium and produces a core with mass 0.85 m[] (e.g., Kippenhahn, Thomas & Weigert 1965). We model the helium production as a function of the initial star mass m as
in solar mass units. The energy production in hydrogen shell burning is the product of this mass with the post-stellar hydrogen mass fraction, 0.71 (eq. [85]), and the efficiency factor, 0.0071.
Helium burning in the core produces energy with efficiency factor 0.0010.
For stars with masses m > 8m[] we adopt the helium yield from Table 14.6 of Arnett (1996), ^9 which we parametrise as
This connects to equation (114) at 8.7 m[]. We take the CO core mass as a function of the initial stellar mass from Arnett (1996):
We use an interpolation of eqs. [69] and [116] for stellar masses between 8 and 13m[]. The energy release is 0.0071 × 0.71 per unit mass for He production, and 0.0014 for CO core formation and the
further heavy element production.
The energy output obtained by integration over the IMF and PDMF is
The partition into each phase of stellar evolution and stellar mass range is given in Table 5, where the numbers are normalised to equation (117). About 60% of the energy is produced in the evolved
Table 5. Stellar energy production ^a
stage of stellar evolution 0.08 - 1 m[] 1 - 8 m[] 8 - 100 m[] sum
main sequence 0.11 0.20 0.12 0.43
H shell burning 0.18 0.29 0.48
core evolution 0.05 0.04 0.09
sum 0.11 0.43 0.46 1.00
^a Normalised to ^-6.
The estimate of the total energy generation in equation (117) is in satisfactory agreement with our estimate of the nuclear binding energy, [BE] = 5.7 × 10^-6 (eq. [97]), and the energy production
required to produce our estimate of the present radiation energy density, [] = (5.1± 1.5) × 10^-6 (eq. [106] corrected for neutrino emission, as discussed in Section 2.8.2). That is, our models for
stellar formation and evolution are consistent with our estimates of the accumulation of heavy elements and the stellar background radiation.
In our models for stellar formation and evolution the mass of helium that has been liberated to interstellar space is
This may be compared to our estimate from the production of heavy elements and the associated production of helium, [Y] = 1.7± 1 × 10^-4 (eq. [94]). The heavy element production is calculated in a
similar way. The current wisdom is that a neutron star or black hole is left at the end of the evolution of a star with mass m > 8 m[], and the rest of the mass is returned to the interstellar
medium. On subtracting the remnant mass (indicated in Table 2) from the total heavy element mass produced, we find that in our model the heavy element production is
The more direct estimate in equation (92) is [Z]' ^-5.
We suspect the checks on helium and heavy element production in equations (118) and (119) are as successful as could be expected. In particular, equation (116) for the CO core mass is not tightly
constrained. One must consider also the possibility that some supernovae do not produce compact remnants, as is suggested by the cases of Cas A and SN1987A. The maximum heavy element production when
there are no remnant neutron stars or black holes is three times the value in equation (119), and larger than [Z]' (eq. [92]).
2.8.2. Neutrinos from Stellar Evolution
We need the fraction f[] of the energy released as neutrinos by the various processes of nuclear burning. In stars with masses m < 1.4 m[], energy generation in hydrogen burning is dominated by the
slow reaction p + p d + e^+ + [e]. This produces neutrinos with mean energy 0.265 MeV, which amounts to f[] = 0.020 times the energy generated in helium synthesis. In a Solar mass star electron
capture of ^7Be adds 0.005 to the fraction f[]. The neutrino energy emission fraction is larger in higher mass (m > 1.4 m[]) main-sequence stars in which the CNO cycle dominates. In this process, the
neutrinos produced in the ^+ decays of ^13N and ^15O carry away the energy fraction f[] = 0.064. The fraction increases to 0.075 for m m[], when the subchain ^15N-^16O-^17F-^17O-^14N starts
dominating and neutrinos are produced by the beta decay of ^17F. This sidechain also dominates during shell burning.
The integral of these factors over the stellar IMF normalised to the present-day mass density [star] = 0.0027 (eq. [27]), together with our prescription for energy generation in Table 5, yields the
neutrino energy production,
The temperatures and densities that are reached up to carbon burning are low enough that there is negligible neutrino energy loss from neutrino pair production processes.
The temperatures after carbon burning are high enough that the neutrino energy loss dominates, that is, f[] Weaver et al. 1978). Thus we may take it that the extra binding energy ^20Ne with respect
to the binding energy of carbon represents the neutrino energy emitted in the late stages of stellar evolution. On multiplying the heavy element mass (the sum of entries 1, 2 and 4 to 9 in Table 3)
by Z[i] / Z) BE)[i] (where <BE)> = 0.0004 for the solar mix of element abundances, and Z[i] / Z = 0.35 is from Grevesse & Sauval (2000), we obtain
The sum of equations (120) and (121) is 7% of the total energy production (eq. [117]). The present energy density of the stellar neutrinos, in entry 8.1 in Table 1, is the product of this sum with
the redshift loss factor ~ 0.5 (eq. [35]).
2.8.3. White Dwarfs and Neutron Stars
Most of the gravitational energy liberated in white dwarf formation goes to neutrinos, and in the latest stage to X-rays. Since the latter is small the contribution to the neutrino energy density
(entry 8.2 in the inventory) is the product of the gravitational binding energy in entry 5.3 with the redshift factor.
More than 99% of the energy released in core collapse also is carried away by neutrinos. Thus we similarly obtain entry 8.3 by multiplying entry 5.4 by the redshift factor.
^9 This helium yield is significantly larger than Arnett's helium core mass, m[He] = 0.43m - 2.46. Back. | {"url":"http://ned.ipac.caltech.edu/level5/March04/Fukugita/Fukugita2_8.html","timestamp":"2014-04-17T22:01:06Z","content_type":null,"content_length":"16650","record_id":"<urn:uuid:5c293b96-162a-42c5-bd55-93b94a374395>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turin, GA ACT Tutor
Find a Turin, GA ACT Tutor
...I also have experience tutoring students in math and algebra.I have a Bachelor of Music degree in vocal performance from Mercer University. I have performance experience in opera, art song,
oratorio, and other genres of singing. I have been teaching private voice lessons since the Spring of 2012.
25 Subjects: including ACT Math, reading, calculus, statistics
...In high school, I graduated as valedictorian, and I have experience tutoring math subjects ranging from pre-algebra to AP Calculus. Additionally, during my time at MIT, I ran a free science
summer camp for local disadvantaged middle school students for two years. In addition to helping my young...
18 Subjects: including ACT Math, English, algebra 2, calculus
...I currently teach special needs students in an elementary education environment. My Masters degree is in Early Elementary Education and I am also certified in Reading-Middle Grades. I have
taught Reading and Phonics at the Elementary and Middle School levels.
48 Subjects: including ACT Math, English, reading, writing
I have been teaching in the area of Mathematics and French for the past five years in a private Christian school (middle and high school) in Marietta, GA. I do enjoy tutoring or interacting one
on one with students. I do strongly believe that each student is capable of succeeding, especially in Math.
13 Subjects: including ACT Math, calculus, discrete math, differential equations
...I am certified to teach science and mathematics in Ohio, was a "preferred substitute teacher" in two school districts in NE Ohio, and have tutored and taught science and mathematics in high
school and middle school in Ohio for 9 years and since 2009 in Georgia. In my experience tutoring chemistr...
12 Subjects: including ACT Math, chemistry, algebra 1, algebra 2
Related Turin, GA Tutors
Turin, GA Accounting Tutors
Turin, GA ACT Tutors
Turin, GA Algebra Tutors
Turin, GA Algebra 2 Tutors
Turin, GA Calculus Tutors
Turin, GA Geometry Tutors
Turin, GA Math Tutors
Turin, GA Prealgebra Tutors
Turin, GA Precalculus Tutors
Turin, GA SAT Tutors
Turin, GA SAT Math Tutors
Turin, GA Science Tutors
Turin, GA Statistics Tutors
Turin, GA Trigonometry Tutors
Nearby Cities With ACT Tutor
Brooks, GA ACT Tutors
Chattahoochee Hills, GA ACT Tutors
Concord, GA ACT Tutors
Gay, GA ACT Tutors
Grantville, GA ACT Tutors
Haralson ACT Tutors
Hogansville ACT Tutors
Lovejoy, GA ACT Tutors
Palmetto, GA ACT Tutors
Sargent, GA ACT Tutors
Sharpsburg, GA ACT Tutors
Whitesburg, GA ACT Tutors
Williamson, GA ACT Tutors
Woolsey, GA ACT Tutors
Zebulon, GA ACT Tutors | {"url":"http://www.purplemath.com/Turin_GA_ACT_tutors.php","timestamp":"2014-04-20T11:15:50Z","content_type":null,"content_length":"23622","record_id":"<urn:uuid:80083009-9240-4968-99a0-4e63021e5fb2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Describe a vertex composing polygon contours, and whether it belong to an edge or not. More...
#include <OctahedronPolygon.hpp>
EdgeVertex (bool b)
EdgeVertex (const Vec3d &v, bool b)
Vec3d vertex
The normalized vertex direction. More...
bool edgeFlag
Set to true if the vertex is part of at least one edge segment, i.e. it lays on the boundary of the polygon. More...
Describe a vertex composing polygon contours, and whether it belong to an edge or not.
Definition at line 31 of file OctahedronPolygon.hpp.
bool EdgeVertex::edgeFlag
Set to true if the vertex is part of at least one edge segment, i.e. it lays on the boundary of the polygon.
Definition at line 39 of file OctahedronPolygon.hpp.
The documentation for this struct was generated from the following file: | {"url":"http://www.stellarium.org/doc/0.12.2/structEdgeVertex.html","timestamp":"2014-04-16T19:11:39Z","content_type":null,"content_length":"7415","record_id":"<urn:uuid:577bff44-e4d3-437d-870a-35619bb7c3dc>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weight, Volume, and Density
If you have a homogenous substance, such as water, the mass of an amount of the substance is proportional to its volume. The ratio of mass to volume is then a constant, called the density. The
density of an object is therefore found by dividing the mass by the volume. In other words,
density = mass ÷ volume.
In this assignment we will calculate the densities of various objects. Because our volume measuring devices use metric measurements, measure mass in grams.
Problem 1. Weigh 300 ml of water and use your measurement to calculate the density of water. Weigh two other volumes of water and repeat the calculation of the density of water.
Problem 2. Find the weight and volume of two rocks. Calculate the density of each rock. Describe how you measured the volume of the rocks.
Problem 3. Calculate the density of one of the following items: a marble, a penny, or a nickel. Before you do this, think about how you can measure the mass in a way that gives you as accurate a
measurement as possible with the tools you have. Describe how you measured the volume of the object you used.
Problem 4. If you had a cubic meter of your first rock, approximately what would be the mass of this amount of rock? To find this, you can use the formula defining density to find mass, since mass =
density × volume. You will also have to do some unit conversions. | {"url":"http://sierra.nmsu.edu/morandi/CourseMaterials/WeightVolumeDensity.html","timestamp":"2014-04-17T09:33:23Z","content_type":null,"content_length":"2109","record_id":"<urn:uuid:d700a8bf-43fb-493a-ad79-7e1bfd32de47>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Early Life
Aryabhata(some time misspelled as ‘Aryabhatta’) was one of the first Indian mathematicians and astronomers belonging to the classical age. He was born in 476 BC in Tarenaga, a town in Bihar, India.
It is however definite that he travelled to Kusumapara (modern day Patna) for studies and even resided there for some time. It is mentioned in a few places that Aryabhata was the head of the
educational institute in Kusumapara. The University of Nalanda had an observatory in its premises so it is hypothesized that Aryabhata was the principal of the university as well. On the other hand
some other commentaries mention that he belonged to Kerala.
Mathematical Work
Aryabhata wrote many mathematical and astronomical treatises. His chief work was the ‘Ayrabhatiya’ which was a compilation of mathematics and astronomy. The name of this treatise was not given to it
by Aryabhata but by later commentators. A disciple by him called the ‘Bhaskara’ names it ‘Ashmakatanra’ meaning ‘treatise from the Ashmaka’. This treatise is also referred to as ‘Ayra-shatas-ashta’
which translates to ‘Aryabhata’s 108’. This is a very literal name because the treatise did in fact consist of 108 verses. It covers several branches of mathematics such as algebra, arithmetic, plane
and spherical trigonometry. Also included in it are theories on continued fractions, sum of power series, sine tables and quadratic equations.
Aryabhata worked on the place value system using letters to signify numbers and stating qualities. He also came up with an approximation of pi ( ) and area of a triangle. He introduced the concept of
sine in his work called ‘Ardha-jya’ which is translated as ‘half-chord’.
Astronomical Work
Aryabhata also did a considerable amount of work in astronomy. He knew that the earth is rotating on an axis around the sun and the moon rotated around it. He also discovered the position of nine
planets and stated that these also revolved around the sun. He pointed out the eclipses; both lunar and solar. Aryabhata stated the correct number of days in a year that is 365. He was the first
person to mention that the earth was not flat but in fact a spherical shape. He also gave the circumference and diameter of the earth and the radius of the orbits of 9 planets.
More about Aryabhata
Aryabhata was a very intelligent man. The theories that he came up with at that time present a wonder to the scientific world today. His works were used by the Greeks and the Arabs to develop
further. A commentary by Bhaskara I a century later on Aryabhatiya says:
‘Aryabhata is the master who, after reaching the furthest shores and plumbing the inmost depths of the sea of ultimate knowledge of mathematics, kinematics and spherics, handed over the three
sciences to the learned world.’
Aryabhata’s Legacy
Aryabhata was an immense influence to mathematics and astronomy. Many of his works inspired Arabs more particularly. His astronomical calculations helped form the ‘Jalali calendar’. He has been
honored in many ways. The first Indian satellite is named after him as ‘Aryabhata’, so is the lunar crater. An Indian research center is called ‘Aryabhata Research Institute of Observational | {"url":"http://www.famous-mathematicians.com/aryabhata/","timestamp":"2014-04-18T18:10:39Z","content_type":null,"content_length":"35698","record_id":"<urn:uuid:2c8f19bf-79fb-4080-9f84-4c82f6546827>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples 6.5.10(a):
Does Rolle's theorem apply to (-3, 3) ? If so, find the number guarantied by the theorem to exist.
This function is continuous on the interval [-3, 3], and differentiable on (-3, 3). It is not differentiable at x = -3 and x = +3, but Rolle's theorem does not require the function to be
differentiable at the endpoints. Also, f(3) = f(-3) = 0. Therefore, Rolle's theorem does apply.
It guaranties the existence of a number c between -3 and 3 such that f'(c) = 0. It does not specify where exactly this number is located. However, a quick calculation shows that the number c is in
fact c = 0. | {"url":"http://www.mathcs.org/analysis/reals/cont/answers/rolle1.html","timestamp":"2014-04-20T11:19:43Z","content_type":null,"content_length":"5867","record_id":"<urn:uuid:1b3e8a2e-59e5-49a7-a355-3786864e747a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Curriculum
Continued from Pre-School Math Curriculum for School Readiness Page One.
Pre-school & School Readiness Math Curriculum - Standards, Lessons, Activities, Printable Worksheets
A pre-school math curriculum aimed at school readiness for math should be taught using appropriate pre-school lessons including interactive activities, learning games, printable worksheets,
assessments, and reinforcement. Manipulatives are very important for pre-school math lessons and developming math school readiness . Pre-school math curriculum should rely on many learning tools -
pre-school lessons with activities, worksheets, reinforcement exercises, and assessments. And a pre-school math curriculum should cover all the math strands, not just arithmetic. The major math
strands for pre-school curriculum are number sense and operations, algebra, geometry and spatial sense, measurement, and data analysis and probability. While these math strands might surprise you,
they are all critical lessons for a pre-school math curriculum.
Pre-school math lessons, worksheets, and activities teach pre-school math curriculum covering all the math strands. Pre-school math - also known as pre-kindergarten or pre-k math, provides an
opportunity for children younger than kindergarten age to get a basic understanding of counting, shapes, and other very simple math concepts. These very young children learn through stories, songs,
rhymes, finger-plays, and other creative methods that make pre-school math fun for them. Pre-school math includes learning more about geometrical figures and objects, measurement of length, weight,
capacity, time, and temperature, use of money, graphs and charts used for data analysis and prediction, and algebraic patterns.
Pre-school School Readiness Math - What are the Standards and Curriculum?
Time4Learning teaches a comprehensive school readiness pre-school math curriculum using fun, pre-school math activities to build a solid math foundation. Help your child excel in math, learn more
about Time4Learning's pre-school math lessons, curriculum, activities and worksheets.
Pre-school Math School Readiness Curriculum and Standards - Lessons, Activities, Worksheets - Geometry and Spatial Sense
Pre-school school readiness math lessons introduce simple shapes and three-dimensional objects including triangles, squares, circles, rectangles, ovals, diamonds, hearts, spheres, and cubes. Using
vocabulary to indicate position, pre-school math students learn to understand 'over and under', 'next to', 'in front of', 'behind', 'top and bottom'. They learn to demonstrate an awareness of
location, spatial awareness and physical proximity, as well as an ability to differentiate size.
Pre-school school readiness math includes activities that promote geometrical exploration and understanding such as playing shapes bingo and lotto games, and making shapes in a sand tray or with
play-doh. Pre-school math students learn from using flannel board shapes and puzzles, songs and rhymes requiring them to copy motions (such as Simple Simon), matching pictures and objects to shapes,
and real-world objects of various shapes.
Pre-school School ReadinessMath Curriculum and Standards - Lessons, Activities, Worksheets - Algebraic Thinking
During the course of their pre-school math school readiness program, students explore patterns and develop a sense about what properties are. They create patterns and order objects according to size,
shape, color, and detail. Children are asked to copy physical patterns, describe pattern attributes, and use the concepts of 'greater than', 'less than', and 'equal to'. Using every-day objects
pre-school math students learn to describe the similarities and differences they see. They use memory game matching cards and will be asked to collect and sort objects.
Is your child being homeschooled?
More on Pre-school school readiness math curriculum and standards.
*Math Standards are defined by each state. Time4Learning bases its use of pre-school math standards on the national bodies that recommend curriculum and standards and the interpretations of it by a
sampling of states notably Florida, Texas, and California. | {"url":"http://www.time4learning.com/pre-school-math.shtml","timestamp":"2014-04-19T04:19:52Z","content_type":null,"content_length":"28073","record_id":"<urn:uuid:754ac7fd-4c8e-4600-9854-6bf849117637>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00320-ip-10-147-4-33.ec2.internal.warc.gz"} |